The office VPN works. Everyone’s happy. Until someone discovers that “VPN access” is basically “a private highway into the whole corporate network.”
Then you get the midnight message: “Why can the contractor reach the database?” and you realize you built a tunnel, not a guardrail.
The goal here is brutally simple: once a user connects to the VPN, they should be able to reach exactly one server on exactly one port.
Not “ideally.” Not “unless they know the subnet.” Exactly. Anything else is an accident waiting for a calendar invite.
What “one server/port only” really means
Least privilege over a VPN isn’t “users can only see the subnet they need.” That’s a polite fiction. Subnets are for routing; privileges are for security.
When you say “one server/port only,” you’re really saying:
- The VPN client receives an IP (or not), but routing alone is not trusted to enforce anything.
- The VPN gateway and/or target server enforces allow rules and defaults to deny.
- Every other destination and port is blocked, including “internal” services like DNS, SMB, SSH, RDP, and metrics endpoints.
- Observability exists: logs, counters, and a way to prove what was blocked and why.
The anti-pattern is “we only push one route.” That’s not least privilege; that’s a suggestion. If the client can manually add routes,
or if there’s NAT, or if there’s a second interface on the VPN box, you’ve got surprise connectivity.
A strong design usually involves two control planes:
(1) network controls (firewall/routing at the VPN entry), and
(2) service controls (firewall/app auth on the destination).
If you only do one, you’ll eventually learn why people do the other.
Interesting facts and a little history
- VPNs weren’t born as zero-trust tools. Early enterprise VPN thinking assumed “inside the tunnel” was “inside the office.” That assumption aged poorly.
- IPsec predates a lot of modern enterprise security habits. It came from an era where perimeter firewalls were the main story, not internal segmentation.
- Split tunneling has always been controversial. It reduces load on the corporate network, but it also creates mixed-trust endpoints (home Wi‑Fi + corp tunnel).
- Firewalls learned statefulness to solve real pain. Stateless ACLs made “allow TCP/443” tricky without also allowing reply traffic; stateful tracking fixed that.
- “VPN user equals network user” is a historical shortcut. It was operationally convenient when identity systems were simpler and remote work was the exception.
- WireGuard’s design is unusually small. Its minimalism made audits and deployment easier compared to sprawling VPN stacks, and that changed default choices.
- NAT became a security crutch. People treated NAT as isolation; it is not. It’s address translation, not authorization.
- Micro-segmentation didn’t start in the cloud. Data centers did it with VLANs and ACLs long before “zero trust” became a slide-deck noun.
Threat model: what you’re preventing (and what you’re not)
You’re trying to prevent lateral movement from a VPN client into the rest of the network. That includes:
- Port scanning internal ranges (“just checking what’s there”).
- Reaching admin surfaces: SSH, RDP, vCenter-like consoles, BMCs, NAS management, printers (yes), and anything with a web UI.
- Using internal DNS to discover names you didn’t mean to publish.
- Access to metadata services, internal registries, or monitoring endpoints that leak topology and credentials.
You are not solving:
- Compromised credentials for the one allowed service. (You still need strong auth there.)
- Malware on the endpoint that exfiltrates data through the allowed port. (You need DLP/monitoring and app-side controls.)
- Traffic inspection at a deep content level. (That’s a separate system, and it brings its own operational taxes.)
One quote to keep your posture honest. As Bruce Schneier put it: “Security is a process, not a product.”
You don’t “configure least privilege” once.
You keep it least-privileged while requirements creep and staff turns over.
Short joke #1: If you allow “any internal” from VPN, you’re not doing security—you’re doing remote office cosplay.
Three patterns that actually work
Pattern A: VPN gateway enforces a strict allowlist (most common)
The VPN concentrator (or the Linux host running WireGuard/OpenVPN) acts as a choke point. VPN clients land on a dedicated interface/subnet.
A firewall policy says: from VPN subnet → allow to target_ip:target_port, deny everything else.
Pros: simple mental model, central control, good logging. Cons: if the gateway is misconfigured or bypassed (dual-homed paths, hairpinning),
you can accidentally open more than you intended.
Pattern B: No general network access; publish a single service via a proxy
Instead of letting VPN clients reach the target server directly, you expose a service through a reverse proxy (for HTTP/HTTPS) or a TCP proxy
(for non-HTTP). The VPN client can only reach the proxy; the proxy is the only thing that can reach the target.
Pros: app-layer controls, better identity integration, easier auditing. Cons: you’re adding another hop and another thing to maintain.
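A minimal sketch of Pattern B for a non-HTTP service, assuming nginx built with the stream module. The addresses reuse this article’s running example (target 10.20.30.40:5432); the proxy’s VPN-facing listen address is illustrative, not prescriptive.

```nginx
# /etc/nginx/nginx.conf (fragment, sketch) -- TCP proxy for a non-HTTP service.
# Assumes the ngx_stream_proxy_module; addresses mirror this article's example.
stream {
    server {
        # The only address VPN clients can reach: the proxy's VPN-facing IP.
        listen 10.50.0.1:5432;
        # The only thing the proxy forwards to: one target, one port.
        proxy_pass 10.20.30.40:5432;
        proxy_connect_timeout 5s;
    }
}
```

With this in place, the firewall rule for VPN clients collapses to “allow 10.50.0.1:5432 only,” and the target server can restrict its own firewall to the proxy’s IP alone.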
Pattern C: Put the target in a dedicated “VPN service network”
Create an isolated segment (VLAN/VRF) where the target server lives, with strict ingress rules. VPN clients can route only into that segment,
and that segment can’t route back out broadly.
Pros: strong blast-radius reduction. Cons: more network engineering, and it’s easy to botch return-path routing and blame “the VPN.”
My opinionated take: Pattern A plus server-side restrictions is the baseline. If you can do Pattern B without making your app team cry, do it.
Pattern C is excellent when you have real network infrastructure and change control discipline.
Implementation: WireGuard + firewall (recommended)
WireGuard is popular because it’s straightforward to operate. But WireGuard itself is not an access-control engine.
AllowedIPs is primarily a routing/peer-selection concept, not a “security policy that can’t be bypassed.”
Treat it as helpful plumbing, then enforce policy with firewall rules where it matters.
Design: dedicated VPN subnet and a single allowed destination
- VPN interface: wg0
- VPN subnet: 10.50.0.0/24
- Target server: 10.20.30.40
- Allowed port: 5432/tcp (example: Postgres)
WireGuard config (server)
Keep the server config boring. Boring is good. Boring means your future self doesn’t have to reverse-engineer “clever.”
cr0x@server:~$ sudo sed -n '1,120p' /etc/wireguard/wg0.conf
[Interface]
Address = 10.50.0.1/24
ListenPort = 51820
PrivateKey = (redacted)
# Optional: save config on runtime changes
SaveConfig = false
[Peer]
PublicKey = (redacted)
AllowedIPs = 10.50.0.10/32
Important: The peer’s AllowedIPs being 10.50.0.10/32 means “this peer owns that VPN IP.”
It does not magically prevent the peer from trying to reach 10.20.30.40 once the tunnel is up. That’s the firewall’s job.
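For completeness, the matching client config can mirror the same restraint. This is a sketch; the endpoint hostname is a placeholder, and keys are redacted as in the server config. On the client side, AllowedIPs controls what the client routes into the tunnel, so pinning it to the single target keeps routing minimal — but, as above, this is convenience, not enforcement.

```ini
# /etc/wireguard/wg0.conf on the client (sketch; endpoint is a placeholder).
[Interface]
Address = 10.50.0.10/32
PrivateKey = (redacted)

[Peer]
PublicKey = (redacted)
Endpoint = vpn.example.com:51820
# Route only the one target into the tunnel -- helpful plumbing, not policy.
AllowedIPs = 10.20.30.40/32
PersistentKeepalive = 25
```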
Firewall policy with nftables (preferred on modern Linux)
The clean approach is: default deny from VPN → anywhere, then poke one exact hole. Also allow established/related return traffic.
And log drops at a rate limit so you don’t DoS your own disks.
cr0x@server:~$ sudo nft list ruleset
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
iif "lo" accept
ct state established,related accept
iif "wg0" tcp dport 22 accept
iif "eth0" tcp dport 51820 accept
ip protocol icmp accept
counter drop
}
chain forward {
type filter hook forward priority 0; policy drop;
ct state established,related accept
# Allow VPN clients to reach exactly one server/port
iif "wg0" ip saddr 10.50.0.0/24 ip daddr 10.20.30.40 tcp dport 5432 accept
# Optional: allow VPN clients to reach only the VPN gateway DNS forwarder
# iif "wg0" ip saddr 10.50.0.0/24 ip daddr 10.50.0.1 udp dport 53 accept
# iif "wg0" ip saddr 10.50.0.0/24 ip daddr 10.50.0.1 tcp dport 53 accept
# Log and drop everything else from VPN
iif "wg0" limit rate 10/second burst 20 packets log prefix "VPN-DROP " flags all counter drop
counter drop
}
chain output {
type filter hook output priority 0; policy accept;
}
}
What to notice:
- Forward chain policy drop: the VPN box is not a router “by default.” It’s a bouncer with a clipboard.
- The allow rule is fully qualified: interface, source subnet, destination IP, protocol, and port.
- We log drops from wg0 with a rate limit. Without rate limits, one bored intern with nmap becomes your log-retention problem.
Routing and NAT: avoid “helpful” NAT unless you really need it
If your target server already routes back to the VPN subnet via the VPN gateway, you don’t need NAT. That’s cleaner and more auditable.
NAT can hide client identity from the target server and complicate server-side allowlists.
If you cannot change routes on the target side, you can use source NAT on the VPN gateway so the target replies to the gateway.
But treat this as a compromise and log it.
cr0x@server:~$ sudo nft list table ip nat
table ip nat {
chain postrouting {
type nat hook postrouting priority 100; policy accept;
oif "eth0" ip saddr 10.50.0.0/24 ip daddr 10.20.30.40 tcp dport 5432 masquerade
}
}
Decision: if you see masquerade rules that apply to broad destinations (like “any”), tighten them. NAT is not a permission slip.
Implementation: OpenVPN + firewall (still common)
OpenVPN is battle-tested and everywhere. The same principle applies: OpenVPN pushes routes; firewall enforces access.
OpenVPN also supports per-client config (CCD), which people use as if it’s segmentation. It can help, but it’s not a replacement for firewall policy.
OpenVPN server config shape
cr0x@server:~$ sudo sed -n '1,160p' /etc/openvpn/server/server.conf
port 1194
proto udp
dev tun0
server 10.60.0.0 255.255.255.0
topology subnet
persist-key
persist-tun
keepalive 10 60
user nobody
group nogroup
# Don't push broad routes unless you mean it
;push "route 10.0.0.0 255.0.0.0"
# Optionally push a single host route (still not a security boundary)
push "route 10.20.30.40 255.255.255.255"
client-config-dir /etc/openvpn/ccd
CCD example (per-client)
cr0x@server:~$ sudo cat /etc/openvpn/ccd/alice
ifconfig-push 10.60.0.10 255.255.255.0
Good hygiene. Still not enforcement. Enforcement is the firewall in FORWARD or equivalent nftables chain.
iptables example (legacy but still seen)
cr0x@server:~$ sudo iptables -S FORWARD
-P FORWARD DROP
-A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -i tun0 -s 10.60.0.0/24 -d 10.20.30.40/32 -p tcp -m tcp --dport 5432 -j ACCEPT
-A FORWARD -i tun0 -j LOG --log-prefix "VPN-DROP " --log-level 4
-A FORWARD -i tun0 -j DROP
Decision: if policy isn’t DROP, fix it. If you see -A FORWARD -i tun0 -j ACCEPT anywhere, that’s the “open bar” rule. Remove it.
Server-side controls: assume the VPN is lying
If the target server is important, put a firewall policy on it too. The VPN gateway is a single control point, which is great—until it isn’t.
A misapplied rule, a second VPN instance, an emergency change, or a cloud security group exception can bypass your gateway.
On the target: allow only VPN subnet to the service
Example: target is 10.20.30.40, service is Postgres 5432/tcp. Allow from VPN subnet only, drop others.
cr0x@server:~$ sudo nft list ruleset
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
iif "lo" accept
ct state established,related accept
# Allow Postgres only from VPN subnet
ip saddr 10.50.0.0/24 tcp dport 5432 accept
# Allow admin SSH only from a management subnet (example)
ip saddr 10.1.2.0/24 tcp dport 22 accept
limit rate 5/second burst 10 packets log prefix "TARGET-DROP " flags all counter drop
}
chain forward {
type filter hook forward priority 0; policy drop;
}
chain output {
type filter hook output priority 0; policy accept;
}
}
If this is a web app instead of a database, you can be stricter: only allow the proxy/gateway IP, not the whole VPN subnet.
Short joke #2: The fastest way to learn “defense in depth” is to rely on one firewall and then meet the intern with sudo.
Practical tasks: 12+ real checks with commands, outputs, and decisions
These tasks are written like you’re on call and someone is asking, “Can the VPN user reach only that one service?”
Each task includes: a command, what realistic output looks like, what it means, and what you decide next.
Task 1: Confirm the VPN interface is up and has the expected subnet
cr0x@server:~$ ip -brief addr show wg0
wg0 UNKNOWN 10.50.0.1/24
Meaning: the VPN interface exists and has the expected address. If it’s missing or the subnet is wrong, your firewall rules may not match.
Decision: if the interface name differs (e.g., wg-office), update firewall rules to match the real interface.
Task 2: Verify WireGuard peers and handshake state
cr0x@server:~$ sudo wg show wg0
interface: wg0
public key: (redacted)
listening port: 51820
peer: (redacted)
endpoint: 203.0.113.10:53422
allowed ips: 10.50.0.10/32
latest handshake: 1 minute, 12 seconds ago
transfer: 18.23 MiB received, 41.77 MiB sent
Meaning: the tunnel is alive. If handshake is “never,” don’t waste time on firewall policy yet—fix connectivity/auth first.
Decision: no handshake → check UDP reachability and keys; handshake OK → move on to routing and firewall.
Task 3: Ensure IP forwarding is enabled on the VPN gateway (if you expect routing)
cr0x@server:~$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
Meaning: the kernel will forward packets between interfaces. If it’s 0, your “allowlist” is perfect and useless.
Decision: set to 1 only if this host is meant to route. If you’re using a proxy-only design, you may want forwarding off.
Task 4: Inspect nftables forward policy and counters
cr0x@server:~$ sudo nft list chain inet filter forward
table inet filter {
chain forward {
type filter hook forward priority filter; policy drop;
ct state established,related accept
iif "wg0" ip saddr 10.50.0.0/24 ip daddr 10.20.30.40 tcp dport 5432 accept
iif "wg0" limit rate 10/second burst 20 packets log prefix "VPN-DROP " flags all counter drop
counter packets 1294 bytes 188290 drop
}
}
Meaning: default drop is in place, and you have an explicit allow. The counters tell you if traffic is hitting drops.
Decision: if drop counters increment when a user tries the allowed service, your allow rule is wrong (wrong IP, port, interface, or protocol).
Task 5: Confirm the route to the target network on the VPN gateway
cr0x@server:~$ ip route get 10.20.30.40
10.20.30.40 via 10.20.30.1 dev eth0 src 10.20.30.10 uid 0
cache
Meaning: the gateway knows how to reach the target and which source IP it will use. If it routes out the wrong interface, your NAT/firewall expectations break.
Decision: wrong path → fix routing before touching firewall. Firewalls don’t fix topology.
Task 6: Validate return-path routing (from target back to VPN subnet)
cr0x@server:~$ ssh cr0x@10.20.30.40 'ip route get 10.50.0.10'
10.50.0.10 via 10.20.30.10 dev eth0 src 10.20.30.40 uid 0
cache
Meaning: the target knows to return traffic to the VPN gateway for VPN clients. If it doesn’t, you’ll see “SYN sent, no reply” behavior.
Decision: if return path is wrong and you can’t change it, consider NAT on the gateway (but keep it tightly scoped).
Task 7: Test the single allowed port from a VPN client perspective (using nc)
cr0x@server:~$ nc -vz 10.20.30.40 5432
Connection to 10.20.30.40 5432 port [tcp/postgresql] succeeded!
Meaning: network path and firewall rules allow the port. This does not mean authentication is correct—only that TCP connects.
Decision: if this fails, check drops on gateway/target. If it succeeds, enforce that everything else fails.
Task 8: Prove denial: attempt a forbidden port on the same target
cr0x@server:~$ nc -vz 10.20.30.40 22
nc: connect to 10.20.30.40 port 22 (tcp) failed: Connection timed out
Meaning: timeout is often “dropped by firewall.” A “Connection refused” would mean the firewall allowed it but the service isn’t listening.
Decision: if you get “refused” on a forbidden port, you’re allowing too much. Fix the gateway/target firewall.
Task 9: Prove denial: attempt an internal host that should be unreachable
cr0x@server:~$ nc -vz 10.20.30.41 5432
nc: connect to 10.20.30.41 port 5432 (tcp) failed: Connection timed out
Meaning: you’re restricting by destination IP, not “any server on that port.” Good.
Decision: if it connects, your allow rule is too broad (destination subnet instead of host, or a later “accept all” rule).
Task 10: Check conntrack for unexpected flows (spot “it connected somehow”)
cr0x@server:~$ sudo conntrack -L -p tcp 2>/dev/null | head -n 5
tcp 6 431999 ESTABLISHED src=10.50.0.10 dst=10.20.30.40 sport=49822 dport=5432 src=10.20.30.40 dst=10.50.0.10 sport=5432 dport=49822 [ASSURED] mark=0 use=1
tcp 6 119 SYN_SENT src=10.50.0.10 dst=10.20.30.40 sport=49830 dport=22 [UNREPLIED] src=10.20.30.40 dst=10.50.0.10 sport=22 dport=49830 mark=0 use=1
Meaning: you can see attempted connections, even failed ones. SYN_SENT entries that linger can indicate drops or return-path issues.
Decision: if you see established flows to destinations/ports you didn’t allow, your policy is leaking—audit firewall order and any NAT exemptions.
Task 11: Watch drop logs live (without drowning)
cr0x@server:~$ sudo journalctl -k -f | grep 'VPN-DROP' | head
Dec 28 10:22:01 vpn-gw kernel: VPN-DROP IN=wg0 OUT=eth0 SRC=10.50.0.10 DST=10.20.30.40 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=51123 DF PROTO=TCP SPT=49830 DPT=22 WINDOW=64240 SYN
Meaning: the firewall is actively blocking. You get concrete evidence: source IP, destination, and port.
Decision: if you see drops to unexpected internal IPs, you’ve proven why least privilege matters. Keep it blocked.
Task 12: Confirm that the target server sees real client IPs (or not)
cr0x@server:~$ ssh cr0x@10.20.30.40 'sudo ss -tnp sport = :5432 | head -n 5'
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
ESTAB 0 0 10.20.30.40:5432 10.50.0.10:49822 users:(("postgres",pid=1421,fd=7))
Meaning: the target sees the VPN client IP (10.50.0.10). If it instead sees the VPN gateway IP, NAT is in play.
Decision: prefer seeing real client IPs for auditing and per-user controls. If you must NAT, compensate with gateway logs and stronger identity at the app.
Task 13: Verify DNS behavior (accidental internal discovery is common)
cr0x@server:~$ resolvectl status | sed -n '1,40p'
Global
Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Current DNS Server: 1.1.1.1
DNS Servers: 1.1.1.1 8.8.8.8
Meaning: this client is not using internal DNS, so internal naming isn’t being leaked through the tunnel (or used to enumerate services).
Decision: if you push internal DNS to VPN clients, ensure you’re not also allowing access to everything those names resolve to.
Task 14: Prove your firewall rules persist across reboots
cr0x@server:~$ sudo systemctl is-enabled nftables
enabled
Meaning: the firewall will come back after restart. If it’s disabled, your “tight policy” might be a runtime-only miracle.
Decision: enable it, and store rules in a managed config. If you’re doing this by hand on prod, at least write it down and commit it.
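One way to make that persistence concrete: keep the whole policy in /etc/nftables.conf, which the nftables systemd service loads at boot. A minimal sketch of the file’s shape, reusing the forward rules from earlier (a real file would also carry the input chain and any NAT table):

```
#!/usr/sbin/nft -f
# /etc/nftables.conf (sketch): flush, then declare the full reviewed ruleset.
# One file, in version control, loaded at boot -- no runtime-only miracles.
flush ruleset

table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
    iif "wg0" ip saddr 10.50.0.0/24 ip daddr 10.20.30.40 tcp dport 5432 accept
  }
}
```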
Fast diagnosis playbook
When “VPN user can’t reach the one allowed port” happens, you need a fast way to find the choke point without interpretive dance.
Here’s the order that wastes the least time.
1) Confirm the tunnel is up (don’t debug policy without a tunnel)
- WireGuard: sudo wg show wg0 → look for a recent handshake and increasing transfer counters.
- OpenVPN: check server logs and client status; confirm the client has an IP in the expected VPN pool.
If there’s no handshake, you’re in “network reachability/keys/auth” territory, not segmentation territory.
2) Confirm the client is actually trying the right destination/port
- Run nc -vz target port from the client.
- Check whether it times out (drop) or refuses (service unreachable but allowed).
3) Look for drops on the VPN gateway first
- Inspect nftables/iptables counters and logs for the attempted flow.
- If the gateway doesn’t see it, the client might not be routing into the tunnel, or you’re testing from the wrong interface.
4) Validate routing and return path
- On gateway: ip route get target.
- On target: ip route get vpn_client_ip.
Routing is the #1 reason “it should work” doesn’t work, especially when NAT is half-configured.
5) Finally, check the target’s local firewall and service binding
- Target firewall: confirm it allows from VPN subnet to service port.
- Service: confirm it’s listening on the right interface (not just localhost).
Common mistakes (symptoms → root cause → fix)
Mistake 1: “We pushed only one route, so users can only reach one server”
Symptoms: A user can reach other internal IPs by adding routes manually, or by using existing routes and NAT hairpins.
Root cause: Confusing client routing configuration with authorization.
Fix: Enforce allowlist on the VPN gateway forward path (and ideally on the target). Default deny. Log drops.
Mistake 2: Allowing “VPN subnet → any” temporarily, then forgetting
Symptoms: Weeks later, a security review finds VPN users can RDP into random servers. Nobody remembers why.
Root cause: Emergency changes without rollback or expiry.
Fix: Time-box exceptions. Use config management and code review for firewall rules. Add an alert for broad accepts from VPN interfaces.
Mistake 3: Misreading “Connection refused” as “blocked”
Symptoms: You believe the firewall is working because tests fail, but they fail with “refused,” not “timeout.”
Root cause: The firewall is allowing the traffic; the service is rejecting it because it’s not listening or access-controlled.
Fix: For forbidden paths, you want timeouts/drops (or explicit rejects if that’s your policy). Check firewall counters and server ss -lntp.
Mistake 4: NAT hides client identity, breaking auditing and server-side controls
Symptoms: Target logs show all connections from the VPN gateway IP; per-user rate limits or allowlists fail.
Root cause: Masquerade applied broadly, often as a quick fix for return routing.
Fix: Prefer real routing. If you must NAT, scope it to one destination/port and compensate with gateway logs and strong app auth.
Mistake 5: Firewall rules on the wrong chain (INPUT vs FORWARD)
Symptoms: Gateway firewall looks strict, but VPN clients can still reach internal services.
Root cause: Rules were applied to INPUT (traffic to the gateway) instead of FORWARD (traffic through the gateway).
Fix: Put allow/deny rules in the forwarding path. Keep INPUT for protecting the gateway itself.
Mistake 6: IPv6 is quietly bypassing your IPv4-only policy
Symptoms: You blocked IPv4, yet users can still reach things. Packet captures show IPv6 flows.
Root cause: Dual-stack clients and networks, with firewall rules written only for IPv4.
Fix: Use table inet in nftables and explicitly control IPv6. Or disable IPv6 on the VPN interface if that’s acceptable.
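A sketch of the explicit-control option, extending the inet forward chain used earlier. If the one allowed service is IPv4-only, dropping all IPv6 from the VPN interface closes the bypass in one line:

```
# Sketch: in the same inet table as the IPv4 rules, handle IPv6 on purpose.
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
    iif "wg0" ip saddr 10.50.0.0/24 ip daddr 10.20.30.40 tcp dport 5432 accept
    # Dual-stack clients can't sidestep IPv4-only policy if v6 is dropped here.
    iif "wg0" meta nfproto ipv6 counter drop
  }
}
```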
Mistake 7: DNS leaks internal topology
Symptoms: Even when blocked, VPN users can query internal DNS and enumerate service names.
Root cause: DNS server reachable from VPN subnet, or DNS allowed “because it’s harmless.”
Fix: Don’t allow internal DNS unless required. If required, allow DNS only to a resolver that is filtered/split-horizon and doesn’t reveal everything.
Mistake 8: Logging every dropped packet without rate limiting
Symptoms: Kernel logs flood; disks fill; incident response becomes “why is syslog down?”
Root cause: Unthrottled log rules combined with scanning or misconfigured clients.
Fix: Rate-limit logs, aggregate counters, and sample intelligently. Your logging should survive bad days, not cause them.
Checklists / step-by-step plan
Step-by-step plan: from “open VPN” to “one server/port only”
- Pick the enforcement point. Use the VPN gateway as the primary choke point. Decide if you also enforce on the target server (you should).
- Define the exact tuple: source (VPN subnet), destination IP, destination port, protocol. Write it down. If it’s “whatever the app uses,” you’re not done.
- Create a dedicated VPN subnet. Don’t mix it with existing internal ranges. Segmentation starts with clean address boundaries.
- Set default deny in the forwarding path. On the gateway, the forward chain should be DROP by default.
- Add one allow rule. Fully qualified. No subnets unless the requirement is truly multiple targets.
- Handle return routing. Prefer a route on the target (or upstream router) back to the VPN subnet via the gateway. Use NAT only if you must.
- Lock down the target server. Allow only VPN subnet (or gateway/proxy IP) to the service port.
- Decide on DNS. If the client doesn’t need internal DNS, don’t provide it. If it does, filter it and scope it.
- Make it observable. Counters, rate-limited logs, and a habit of checking them after changes.
- Test denial, not just allowance. Try to connect to forbidden ports and hosts and make sure it fails the way you expect.
- Make it persistent. Systemd services enabled, configs in version control, and changes reviewed.
- Run a periodic access review. Requirements creep; your firewall should not silently “creep” with it.
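The “test denial” step in the plan above can be scripted. A minimal sketch using bash’s /dev/tcp; the hosts and ports are this article’s example values, and in practice you would run it from a VPN client after every firewall change. Note what it asserts: that forbidden destinations fail, because success of the allowed flow is not proof of restriction.

```shell
#!/usr/bin/env bash
# prove-denial.sh (sketch): assert that forbidden destinations FAIL.
# Hosts/ports below are the article's example values -- replace with yours.
check_denied() {
  local host="$1" port="$2"
  # Attempt a TCP connect with a short timeout; we WANT this to fail.
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "FAIL: ${host}:${port} is reachable (policy leak)"
    return 1
  else
    echo "DENIED: ${host}:${port} (as expected)"
    return 0
  fi
}

rc=0
# Forbidden ports on the allowed host, and a neighboring host entirely.
check_denied 10.20.30.40 22   || rc=1
check_denied 10.20.30.40 3389 || rc=1
check_denied 10.20.30.41 5432 || rc=1
echo "result: $([ "$rc" -eq 0 ] && echo all-denied || echo policy-leak)"
```

Run it after every change. If any line says FAIL, stop and audit the ruleset before anyone closes the ticket.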
Change control checklist (for the grown-ups in the room)
- Is there an explicit allowlist rule for the one server/port? (Not a vague subnet rule.)
- Is the default forward policy DROP?
- Do logs/counters confirm only that flow succeeds?
- Is there an expiry on any temporary exception?
- Can we prove return routing without broad NAT?
- Is IPv6 handled intentionally?
- Is the target server also restricted?
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
A mid-sized company had a “vendor VPN” for a payroll integration. The vendor needed HTTPS access to one internal endpoint.
The network team pushed a single route to that host and felt proud of their restraint. They even wrote it down in a change ticket.
Months later, an internal audit showed the vendor account could reach a file server. Nobody believed it. The route table on the vendor laptop looked clean.
But the VPN gateway was also doing NAT to “make things easier,” and the forward policy was permissive: it allowed the VPN subnet to reach internal networks because
“otherwise troubleshooting is hard.”
The vendor never intended to access the file server. That wasn’t the point. The point was that a compromised vendor endpoint now had a paved path to internal assets.
The assumption was “routing equals restriction.” The reality was “routing is suggestion; firewalls are enforcement.”
The fix was dull: default drop on forward, one allow rule, and a server-side firewall on the endpoint.
The hard part was organizational: admitting that “we pushed one route” was security theater.
Mini-story 2: The optimization that backfired
Another place wanted to reduce load on their VPN gateway. They enabled split tunneling and pushed internal DNS so laptops could resolve internal services.
The idea: only the needed app traffic goes through the tunnel; everything else stays local. Bandwidth graphs looked nicer. Everyone high-fived quietly.
Then came the weird bug: a subset of users intermittently accessed the “one allowed service” but sometimes hit timeouts.
The incident channel filled with confident guesses: “WireGuard is flaky,” “the database is overloaded,” “it’s probably MTU.”
The actual issue was operationally mundane. With split tunneling and internal DNS, clients were resolving internal names while off-network,
and some laptops had local network overlaps (home routers using the same private range as corporate subnets). The clients sometimes routed “internal” traffic locally,
never entering the VPN. So the gateway’s firewall rules weren’t even in the path.
The “optimization” saved bandwidth but created a routing lottery. The resolution was to stop relying on DNS+split tunneling for deterministic security.
They pinned the target by IP for VPN access, fixed address overlaps, and forced the one service’s traffic through the tunnel. Boring, predictable, correct.
Mini-story 3: The boring but correct practice that saved the day
A financial services team had a habit that looked unnecessary: every firewall change included a “prove denial” test.
Not just “does it work,” but “does everything else fail.” They kept a tiny script that attempted connections to a handful of forbidden internal ports.
One afternoon, a routine OS upgrade on the VPN gateway switched the firewall backend. The rules loaded, but one chain name changed,
and their “allow only one port” policy effectively became “allow anything established,” with too-broad new accept rules added by a legacy script.
Nobody noticed at first, because the allowed service still worked. That’s how these failures hide: success is not proof of restriction.
But their post-change test suite flagged it immediately. The forbidden connection attempts didn’t time out—they connected.
They rolled back in minutes and re-applied the rules correctly. No incident report. No breach narrative. Just a small, boring practice that prevented
a large, exciting problem. This is what reliability looks like when it’s working: nothing happens.
FAQ
1) Can I enforce “one server/port only” with WireGuard AllowedIPs alone?
No. AllowedIPs is routing and peer association. It helps, but it’s not the enforcement boundary you want.
Use firewall rules on the gateway (and ideally on the target).
2) Should I use NAT or proper routing?
Prefer proper routing so the target sees real client IPs and you can do auditing and server-side allowlists.
Use NAT only when you can’t fix return paths, and scope it to the one destination/port.
3) Is “split tunnel” compatible with least privilege?
Sometimes, but it’s easy to get wrong. Split tunnel increases reliance on correct client routing decisions and reduces your ability to centrally observe paths.
For “one server/port only,” forcing that service’s traffic through the tunnel is usually cleaner.
4) Where should I enforce restrictions: gateway, target, or both?
Both. Gateway enforcement reduces blast radius and gives you central logging. Target enforcement protects against bypass paths and misconfigurations.
Defense in depth, without the marketing voice.
5) How do I restrict by user, not just by VPN subnet?
The simplest path is per-user VPN IPs and rules matching ip saddr 10.50.0.10 rather than the whole subnet.
Better still: put the service behind an identity-aware proxy so access is tied to auth, not just source IP.
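The per-user-IP approach looks like this as nftables forward-chain rules. A sketch: “alice” reuses this article’s example client (10.50.0.10); the second user and the HTTPS port are hypothetical additions for illustration.

```
# Sketch: per-user rules keyed to per-user VPN IPs (assigned via
# WireGuard AllowedIPs or OpenVPN ifconfig-push).
# alice (10.50.0.10) -> Postgres only; a second user -> HTTPS only.
iif "wg0" ip saddr 10.50.0.10 ip daddr 10.20.30.40 tcp dport 5432 accept
iif "wg0" ip saddr 10.50.0.11 ip daddr 10.20.30.40 tcp dport 443 accept
```

The trade-off is operational: per-user IPs must be pinned (no dynamic pools), or the rules silently stop meaning what you think they mean.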
6) What about access to DNS and time servers?
Don’t automatically allow them. If your one service is specified by IP, you may not need internal DNS at all.
If you must provide DNS, allow it only to a controlled resolver and treat DNS as sensitive metadata.
7) Is dropping traffic better than rejecting it?
Dropping (timeout) leaks less about what exists, but can make troubleshooting slower.
Rejecting can make user experience clearer. For VPN segmentation, I usually drop by default and log rate-limited, then use explicit rejects only when needed.
8) How do I prove to auditors that access is restricted?
Show the firewall rules (default deny + single allow), show counters/logs of denied attempts, and show test evidence:
connection to the allowed port succeeds, connections to forbidden ports/hosts time out. If you can’t produce that quickly, you’re not really controlling it.
9) What about IPv6—do I need to care?
Yes, because your clients already do. If you firewall only IPv4, you can accidentally allow IPv6 paths.
Use nftables inet tables or explicitly disable IPv6 on the VPN interface if that fits your environment.
10) If the service is HTTPS, should I still do “one port only”?
Yes, but also consider publishing the service through a reverse proxy where you can enforce identity, mTLS, request logging,
and rate limits. Network restriction is necessary; it’s not sufficient.
Conclusion: next steps that stick
“Office VPN access” is not a binary choice between “no access” and “everything inside.” You can—and should—design for one server and one port.
That’s what least privilege looks like when you can measure it.
- Pick your enforcement point: implement default-deny forwarding on the VPN gateway.
- Write one precise allow rule: interface + source + destination IP + protocol + port.
- Validate return routing: fix routes first; use tightly scoped NAT only if unavoidable.
- Add server-side restrictions: the target should accept the service port only from the VPN subnet (or gateway/proxy).
- Prove denial after every change: test forbidden ports/hosts and watch drop counters/logs with rate limiting.
Do this, and your VPN stops being a hallway into the building. It becomes a locked door with a specific key.
That’s not paranoia; that’s professional hygiene.