The classic setup: an office, a warehouse, and a fleet of IP cameras that someone bought on a Friday because “they were on sale.”
Now you need remote access, stable video, and a network that doesn’t collapse the minute a warehouse barcode scanner gets chatty.
If you connect everything to one flat LAN and toss a VPN on top, you’ll get connectivity. You’ll also get surprises:
lateral movement, broadcast storms, mystery latency, and camera firmware that thinks “security” means Telnet.
Let’s build something you can operate at 2 a.m. without bargaining with the gods.
The mental model: VPN is transport, VLAN is containment
People mix up VPNs and VLANs because both “separate” things. They separate different layers of reality.
A VPN is a tunnel: it carries packets between sites or clients. A VLAN is a local segmentation tool: it splits a switching domain
so your cameras don’t share L2 space with laptops and random IoT.
If you remember one rule, make it this: VLANs are for limiting blast radius; firewall rules are for limiting intent; VPNs are for reach.
“Segmentation” without firewalling is just elaborate cable management.
In multi-site setups (office + warehouse), VLANs happen inside each site, and the VPN moves selected traffic between sites.
You generally do not want to stretch VLANs across the VPN unless you have a very specific, testable reason and an appetite for pain.
L2 over VPN is where hope goes to die.
Opinionated guidance: route between VLANs at Layer 3 (firewall or L3 switch), enforce policy there, and route between sites over VPN.
Keep it boring. Boring networks stay up.
One quote worth keeping on a sticky note: “Hope is not a strategy.”
— often attributed to operations culture (widely repeated; phrasing varies).
Treat it as a paraphrased idea if you’ve seen a different version. The point stands.
Interesting facts and historical context (the stuff that explains today’s mess)
- 802.1Q VLAN tagging became the standard in the late 1990s; before that, “VLANs” were often vendor-specific and happily incompatible.
- IPsec has been around since the 1990s too, designed for IPv6 originally, then retrofitted into everything else because reality happens.
- NAT became default not because it was elegant, but because IPv4 exhaustion was predictable and we ignored it anyway.
- Spanning Tree Protocol (STP) exists because humans are physically incapable of not making loops with cables.
- PoE (Power over Ethernet) made cameras explode in popularity: one cable for power and network means facilities teams suddenly “do networking.”
- ONVIF was created to standardize IP camera interoperability; it helped, but it didn’t magically make camera security good.
- “Air-gapped” is older than modern IT; it’s a concept from classified environments. In corporate networks, it usually means “connected to Wi‑Fi sometimes.”
- WPA2-Enterprise showed up in the mid-2000s and made per-user Wi‑Fi authentication practical; it’s still underused in warehouses.
- WireGuard is new compared to IPsec/OpenVPN (designed in the mid-2010s), with a smaller codebase and very sane defaults.
There’s a theme here: the standards are mature. The failures are usually human, organizational, or “we rushed procurement.”
A reference design that works in real buildings
Sites and roles
Assume two physical sites: Office and Warehouse. You may also have remote users (IT/admins) and maybe a managed security vendor
that needs limited access to cameras/NVR.
At each site, you want:
- A routing/firewall device (could be a dedicated firewall, or a router with firewall features).
- A managed switch capable of VLANs and trunks; ideally with PoE for cameras/APs.
- Wireless APs that support multiple SSIDs mapped to VLANs (guest vs corp vs scanners).
Segmentation pattern: “default deny, allow by service”
Create separate VLANs per trust zone. The usual minimum viable set:
- Corp: user devices, laptops, desktops.
- Servers: AD/DNS/NVR/ERP, whatever your org calls “critical.”
- Cameras: IP cameras and related door controllers.
- Warehouse/OT: scanners, label printers, handheld terminals, PLC-adjacent gear.
- Guest: internet-only. No access to anything internal.
- Management: switch/AP management interfaces, out-of-band if you can swing it.
Then enforce inter-VLAN access on your firewall/router. If your L3 switch can route, great—but still centralize policy
if you care about auditability. “Distributed ACLs everywhere” is how you end up with tribal knowledge and one person who can’t take vacations.
Where the VPN belongs
Put a site-to-site VPN between the edge firewalls/routers of Office and Warehouse. Route only the subnets you need.
If you’re tempted to route 0.0.0.0/0 across the tunnel for “simplicity,” stop. That’s not simplicity, it’s postponing complexity until it’s on fire.
Remote-access VPN for humans should terminate at the office (or a central hub) and then be subject to the same policy:
users land in a dedicated VPN subnet/VLAN, then are allowed to reach specific services.
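To make "route only the subnets you need" concrete, here is a minimal WireGuard sketch for the office side. The keys are placeholders; the tunnel addresses, endpoint, and warehouse subnets match the examples used in the tasks later in this article.

# /etc/wireguard/wg0.conf on the office edge (a sketch, not a drop-in config)
[Interface]
Address = 10.255.0.1/30              # tunnel transfer address; warehouse side uses 10.255.0.2
ListenPort = 51820
PrivateKey = <office-private-key>    # placeholder

[Peer]
# Warehouse edge router
PublicKey = <warehouse-public-key>   # placeholder
Endpoint = 198.51.100.20:51820
# Only the warehouse subnets you actually need cross the tunnel: not 10.20.0.0/16, never 0.0.0.0/0.
AllowedIPs = 10.255.0.2/32, 10.20.30.0/24, 10.20.40.0/24
PersistentKeepalive = 25

With wg-quick, each AllowedIPs entry also becomes a route, which is exactly the scoping you want; the firewall still decides which ports those subnets may reach.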
Joke #1: If your camera VLAN can reach the accounting VLAN, congratulations—you’ve invented an expensive way to store footage and ransomware in the same place.
IP plan and VLAN map you won’t hate later
Good IP plans are boring. Boring is reliable. Pick an RFC1918 block and subdivide consistently per site.
One pattern that scales without getting cute:
- Office: 10.10.0.0/16
- Warehouse: 10.20.0.0/16
Inside each site, use /24s per VLAN:
- VLAN 10 Corp: 10.10.10.0/24 (Office), 10.20.10.0/24 (Warehouse)
- VLAN 20 Servers: 10.10.20.0/24, 10.20.20.0/24
- VLAN 30 Cameras: 10.10.30.0/24, 10.20.30.0/24
- VLAN 40 Warehouse/OT: 10.10.40.0/24 (maybe unused), 10.20.40.0/24
- VLAN 50 Guest: 10.10.50.0/24, 10.20.50.0/24
- VLAN 99 Management: 10.10.99.0/24, 10.20.99.0/24
Why this pattern works
Humans debug networks far more often than they "optimize" them. With predictable addressing, you can glance at an IP and know
site + trust zone. That matters when you're staring at logs at night.
DNS and DHCP placement
Put authoritative DNS and DHCP in the server VLAN (or dedicated infra VLAN). Use DHCP relay (IP helper) on VLAN interfaces
if you centralize DHCP. Cameras often hate fancy DHCP options; keep it minimal unless you know the model quirks.
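As a sketch of "keep it minimal," here is what an ISC DHCP scope for the office camera VLAN could look like, plus one reservation. The subnet, resolver, and camera MAC reuse examples from elsewhere in this article; the host name is made up.

# /etc/dhcp/dhcpd.conf (excerpt) - camera VLAN scope, deliberately boring
subnet 10.10.30.0 netmask 255.255.255.0 {
  range 10.10.30.100 10.10.30.199;
  option routers 10.10.30.1;
  option domain-name-servers 10.10.20.53;   # internal resolver only, no public DNS
  default-lease-time 86400;
}

# Reservation so a camera keeps its address across reboots (back this file up)
host cam-dock-01 {
  hardware ethernet 3c:84:6a:11:22:33;
  fixed-address 10.10.30.51;
}

If the DHCP server lives in the server VLAN, the camera VLAN interface on the router needs a DHCP relay (IP helper) pointing at it; the scope itself stays the same.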
Static IPs for cameras and infrastructure are usually worth it. If you use DHCP reservations instead, export and back them up.
"We'll remember the IPs" is a story we tell junior admins, like Santa.
Firewall policy: what talks to what (and why)
Baseline rules (opinionated)
- Guest → any internal: deny. Always. No exceptions.
- Cameras → internet: deny by default. Allow NTP to your own NTP, and maybe vendor update endpoints only if you can’t do offline updates.
- Cameras → NVR: allow only required ports (often RTSP, ONVIF, vendor-specific). Prefer camera-initiated streams to NVR if feasible.
- Corp → Cameras: usually deny. If users need live view, route via the NVR/VMS web app, not direct camera access.
- Warehouse/OT → Servers: allow only the app ports required (ERP, printing, scanner services). No “any/any because scanners.”
- Management VLAN: only IT VPN subnet and a small set of admin workstations should reach it (SSH/HTTPS/SNMP).
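A minimal nftables sketch of that baseline, using the eth0.X subinterface names that appear in the tasks later in this article. The NVR address, resolver, and ports are examples; adjust them to your VMS and apps.

# /etc/nftables.conf (sketch) - default deny between zones, allow by service
table inet filter {
  chain forward {
    type filter hook forward priority filter; policy drop;
    ct state established,related accept

    # Cameras -> NVR only, on the ports the VMS actually uses (RTSP plus a vendor port here)
    iifname "eth0.30" ip daddr 10.10.20.20 tcp dport { 554, 8000 } accept

    # Cameras -> internal DNS/NTP only; no general internet egress
    iifname "eth0.30" ip daddr 10.10.20.53 udp dport { 53, 123 } accept

    # Warehouse/OT -> the ERP app port only (8443 is a placeholder)
    iifname "eth0.40" ip daddr 10.10.20.0/24 tcp dport 8443 accept

    # Guest -> internet only
    iifname "eth0.50" oifname "eth0" accept

    # Everything else hits the drop policy; log it during rollout so you find forgotten flows
    log prefix "FWD-DENY " counter drop
  }
}

Corp-to-server and corp-to-internet allows are omitted for brevity; the shape is always the same: name the source zone, the destination, and the ports.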
Between sites (over VPN)
You don’t need “site-to-site full trust.” You need specific services:
- Warehouse scanners to Office ERP app servers
- Warehouse NVR to Office storage (if you centralize footage)
- IT admins to manage warehouse switches/APs/firewall
Route and allow only those subnets/ports. If you later need more, add it intentionally. Every rule should have an owner and a reason.
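On the hub firewall, those cross-site flows are just more forward rules scoped to the tunnel interface. A sketch that would sit in the same chain as above; the admin VPN subnet on eth0.60 and the ERP port are assumptions for illustration:

# Cross-site policy (sketch) - only named flows cross the tunnel
# Warehouse scanners -> office ERP app servers
iifname "wg0" ip saddr 10.20.40.0/24 ip daddr 10.10.20.0/24 tcp dport 8443 accept
# IT/admin remote-access subnet -> warehouse management VLAN, out through the tunnel
iifname "eth0.60" oifname "wg0" ip daddr 10.20.99.0/24 tcp dport { 22, 443 } accept

The warehouse firewall needs the mirror-image rules; "the tunnel is up" never implies "the flow is allowed."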
Logging: what to log without drowning
Log denies between VLANs for a while after rollout; it’s how you discover the one forgotten service.
But don’t log every allow. You’ll pay for disk and learn nothing.
Log:
- Denied attempts from Cameras/Guest to internal resources
- New outbound destinations from the camera VLAN (should be rare)
- VPN tunnel up/down events
- Inter-VLAN denies affecting critical apps (Warehouse/OT to Servers)
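If your denies carry a log prefix (like FWD-DENY in the sketch above), two weeks of rollout logs condense into a review list with one pipeline. A sketch, assuming nft logs land in the kernel log:

cr0x@server:~$ sudo journalctl -k --since=-14d | grep "FWD-DENY " | awk '{for(i=1;i<=NF;i++) if($i ~ /^(SRC|DST|DPT)=/) printf "%s ", $i; print ""}' | sort | uniq -c | sort -rn | head

Each output line is a count plus source, destination, and destination port, which is exactly the shape of the decision you need to make: allow it explicitly, or confirm it should stay blocked.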
VPN topologies: hub-and-spoke, full mesh, and “please don’t”
Hub-and-spoke (recommended for most orgs)
Office is the hub; warehouse is a spoke. Remote-access users terminate at the hub.
Advantages: one place to manage policy, fewer tunnels, simpler troubleshooting.
Disadvantages: hairpin traffic if warehouse needs to reach a third site; hub bandwidth becomes critical.
Full mesh (only when you have multiple warehouses and strong automation)
Mesh is viable when you have solid config management and monitoring. If you’re doing it by hand in a web GUI,
you’re building a future outage.
L2 extension over VPN (avoid)
Stretching VLANs across sites makes broadcast and failure domains cross buildings. STP doesn’t magically become polite across a tunnel.
If you’re trying to make “one big LAN,” step back and ask what requirement you’re really solving—usually legacy discovery or a vendor that can’t route.
WireGuard vs IPsec (practical tradeoffs)
WireGuard is clean and fast, with fewer knobs to mis-set. IPsec is everywhere and often hardware-accelerated, but configuration varies wildly by vendor.
Either can be reliable. The reliability comes from: consistent routing, MTU sanity, and disciplined firewall policy.
Cameras and NVRs: stop treating them like printers
Threat model, in plain language
Cameras are computers with lenses. Many ship with weak default configs, long-lived firmware bugs, and a surprising amount of outbound “phone home.”
Your goal isn’t to make cameras perfect. Your goal is to make camera compromise boring: isolated, logged, and unable to pivot.
Best practices that actually work
- No direct inbound from the internet to cameras or NVR. If you need remote viewing, use VPN or a hardened relay.
- Block camera VLAN egress except NTP/DNS to internal resolvers and whatever else you explicitly approve.
- Use an NVR/VMS in the server VLAN or a dedicated security VLAN, not sitting inside the camera VLAN like a piñata.
- Disable UPnP everywhere. Especially at the edge. UPnP is how devices negotiate “surprise exposure.”
- Time sync matters: if logs and footage timestamps drift, investigations become fan fiction.
Joke #2: UPnP is like giving your toaster a master key to the building because it promised to “manage ports responsibly.”
Bandwidth planning for video
Cameras are steady-state bandwidth hogs. Do the math. A handful of 4K cameras at decent bitrate can saturate uplinks and Wi‑Fi.
Keep camera traffic local to the site when possible (warehouse cameras to warehouse NVR). If you must centralize footage,
consider scheduled replication or lower-bitrate substreams over the WAN.
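A quick worked example (bitrates are illustrative; check your encoder settings): a dozen cameras recording a 4K main stream at 8 Mbit/s each is 96 Mbit/s of continuous traffic. That is fine on a gigabit access switch, marginal on a 100 Mbit/s uplink, and hopeless on a typical asymmetric WAN if you try to haul it all to the office. Drop to 2 Mbit/s substreams for remote live view and the WAN math improves by roughly a factor of four per camera, which is why "record local, replicate selectively" keeps coming up.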
Multicast and discovery gotchas
Some camera deployments rely on multicast discovery. Multicast does not automatically traverse VLANs.
Avoid designs that require “let’s just allow multicast everywhere.” Instead, define where discovery is needed (often only during provisioning),
and then lock it down. If the vendor requires L2 adjacency for permanent operation, treat that as a product risk, not a “network issue.”
Practical tasks: commands, outputs, and decisions (12+)
The following tasks assume you have Linux boxes at each site (or on the firewall if it’s Linux-based), plus managed switches and a firewall.
Commands are intentionally generic and portable. The point is operational: what you run, what you expect, and what you decide.
Task 1: Confirm VLAN interfaces exist and are up
cr0x@server:~$ ip -br link
lo UNKNOWN 00:00:00:00:00:00
eth0 UP 52:54:00:aa:bb:cc
eth0.10 UP 52:54:00:aa:bb:cc
eth0.30 UP 52:54:00:aa:bb:cc
eth0.99 DOWN 52:54:00:aa:bb:cc
Meaning: VLAN subinterfaces exist; VLAN 99 is down.
Decision: if VLAN 99 is management and should be live, bring it up and verify switch trunk allows VLAN 99.
If it’s unused, remove it to reduce confusion.
Task 2: Verify IP addressing on each VLAN interface
cr0x@server:~$ ip -br addr show
lo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 203.0.113.10/24
eth0.10 UP 10.10.10.1/24
eth0.30 UP 10.10.30.1/24
Meaning: Router has gateway IPs for VLAN 10 and 30.
Decision: ensure each VLAN has exactly one gateway; if you see multiple devices advertising gateways (VRRP aside), you’re inviting asymmetric routing.
Task 3: Check routing table for expected site subnets and VPN routes
cr0x@server:~$ ip route
default via 203.0.113.1 dev eth0
10.10.10.0/24 dev eth0.10 proto kernel scope link src 10.10.10.1
10.10.30.0/24 dev eth0.30 proto kernel scope link src 10.10.30.1
10.20.40.0/24 via 10.255.0.2 dev wg0
10.20.30.0/24 via 10.255.0.2 dev wg0
Meaning: warehouse VLANs 30/40 are reachable over wg0.
Decision: if you see overly broad routes (like 10.20.0.0/16) but you intended only camera/NVR access, tighten AllowedIPs / phase2 selectors and firewall policy.
Task 4: Validate WireGuard tunnel state and last handshake
cr0x@server:~$ sudo wg show
interface: wg0
public key: rJc...redacted
listening port: 51820
peer: 9hS...redacted
endpoint: 198.51.100.20:51820
allowed ips: 10.20.30.0/24, 10.20.40.0/24, 10.255.0.2/32
latest handshake: 32 seconds ago
transfer: 1.23 GiB received, 2.01 GiB sent
Meaning: the tunnel is alive; traffic is flowing.
Decision: if handshake is stale and transfer counters don’t move, suspect NAT/port forwarding, firewall UDP blocks, or mismatched keys.
Task 5: Spot MTU trouble with a controlled ping
cr0x@server:~$ ping -M do -s 1420 10.20.30.10 -c 3
PING 10.20.30.10 (10.20.30.10) 1420(1448) bytes of data.
1428 bytes from 10.20.30.10: icmp_seq=1 ttl=63 time=18.2 ms
1428 bytes from 10.20.30.10: icmp_seq=2 ttl=63 time=18.5 ms
1428 bytes from 10.20.30.10: icmp_seq=3 ttl=63 time=18.4 ms
--- 10.20.30.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
Meaning: 1420-byte payload works; MTU is likely fine for typical WireGuard overhead.
Decision: if this fails but smaller sizes succeed, lower MTU on wg0 (or on the WAN interface) and re-test video streams.
Task 6: Confirm DHCP leases for a VLAN (and detect rogue DHCP)
cr0x@server:~$ sudo journalctl -u isc-dhcp-server -n 10 --no-pager
dhcpd[1123]: DHCPDISCOVER from 3c:84:6a:11:22:33 via eth0.30
dhcpd[1123]: DHCPOFFER on 10.10.30.51 to 3c:84:6a:11:22:33 via eth0.30
dhcpd[1123]: DHCPREQUEST for 10.10.30.51 from 3c:84:6a:11:22:33 via eth0.30
dhcpd[1123]: DHCPACK on 10.10.30.51 to 3c:84:6a:11:22:33 via eth0.30
Meaning: DHCP server is responding on camera VLAN.
Decision: if clients report weird addresses and you see no logs, suspect rogue DHCP on that VLAN; enforce DHCP snooping on switches or lock ports to camera MACs.
Task 7: Verify inter-VLAN firewall policy with counters
cr0x@server:~$ sudo nft list ruleset | sed -n '1,80p'
table inet filter {
chain forward {
type filter hook forward priority filter; policy drop;
ct state established,related accept
iifname "eth0.30" oifname "eth0.20" tcp dport { 554, 8000 } accept
iifname "eth0.10" oifname "eth0.30" drop
iifname "eth0.50" drop
}
}
Meaning: default drop; cameras can reach servers on RTSP/vendor ports; corp to cameras is blocked; guest is blocked.
Decision: if video doesn’t record, confirm the allow is in the correct direction (camera→NVR or NVR→camera) based on your architecture.
Task 8: Trace a packet path across VLAN and VPN
cr0x@server:~$ traceroute -n 10.20.30.10
traceroute to 10.20.30.10 (10.20.30.10), 30 hops max, 60 byte packets
1 10.10.30.1 0.401 ms 0.322 ms 0.310 ms
2 10.255.0.2 17.901 ms 17.872 ms 17.841 ms
3 10.20.30.10 18.309 ms 18.292 ms 18.266 ms
Meaning: hop 2 is the remote WireGuard peer; routing is sane.
Decision: if you see it go to the internet default gateway, you’re missing routes or AllowedIPs; fix routing before blaming the camera.
Task 9: Validate DNS behavior from a restricted VLAN
cr0x@server:~$ dig @10.10.20.53 nvr.office.local +short
10.10.20.20
Meaning: internal DNS resolves NVR hostname.
Decision: if cameras/NVR rely on hostnames and DNS fails across VLANs, allow DNS from that VLAN to your resolver—or stop using hostnames for cameras and go static IPs.
Task 10: Check for L2 loop symptoms (broadcast storms) via interface stats
cr0x@server:~$ ip -s link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
RX: bytes packets errors dropped missed mcast
987654321 1234567 0 12345 0 987654
TX: bytes packets errors dropped carrier collsns
876543210 1122334 0 0 0 0
Meaning: high multicast/broadcast (mcast) and drops could indicate a loop or noisy devices.
Decision: check STP status on switches, find the looping port, and shut it. If drops are on WAN, you may be saturating uplink with video.
Task 11: Capture traffic on the camera VLAN to confirm who initiates RTSP
cr0x@server:~$ sudo tcpdump -ni eth0.30 host 10.10.20.20 and port 554 -c 5
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
IP 10.10.30.51.49210 > 10.10.20.20.554: Flags [S], seq 120033, win 64240, options [mss 1460,sackOK,TS val 1200 ecr 0,nop,wscale 7], length 0
IP 10.10.20.20.554 > 10.10.30.51.49210: Flags [S.], seq 99321, ack 120034, win 65160, options [mss 1460,sackOK,TS val 2200 ecr 1200,nop,wscale 7], length 0
Meaning: camera initiates connection to NVR on 554; your firewall rule should allow camera→NVR.
Decision: if you see the opposite (NVR initiating), adjust firewall direction accordingly.
Task 12: Confirm NAT is not accidentally applied to inter-site traffic
cr0x@server:~$ sudo nft list table ip nat
table ip nat {
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifname "eth0" masquerade
}
}
Meaning: masquerade applies on WAN interface eth0 only, which is correct.
Decision: if NAT is applied to wg0 or internal VLANs, you’ll break address-based policy and create weird one-way traffic. Fix NAT scoping.
Task 13: Check if a camera is trying to reach the internet
cr0x@server:~$ sudo nft add rule inet filter forward iifname "eth0.30" oifname "eth0" log prefix "CAMERA-EGRESS " drop
cr0x@server:~$ sudo journalctl -k -n 5 --no-pager
kernel: CAMERA-EGRESS IN=eth0.30 OUT=eth0 SRC=10.10.30.51 DST=93.184.216.34 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=2244 DF PROTO=TCP SPT=51233 DPT=80 WINDOW=64240 SYN
Meaning: the camera attempted outbound HTTP to a public IP.
Decision: keep the drop rule (remove the log prefix once you're satisfied), then decide whether the camera needs any controlled egress for updates—prefer manual updates.
Task 14: Validate switch trunk tagging from the Linux side (VLAN sees traffic)
cr0x@server:~$ sudo tcpdump -eni eth0 vlan 30 -c 3
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
02:00:00:00:00:00 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 64: vlan 30, p 0, ethertype ARP (0x0806), Request who-has 10.10.30.1 tell 10.10.30.51, length 46
Meaning: VLAN 30 tags are present; trunking is working at least for that VLAN.
Decision: if you don’t see tags for expected VLANs, fix switchport mode (trunk vs access) and allowed VLAN list.
Fast diagnosis playbook
When something breaks, you don’t need genius. You need order. Here’s the sequence that finds the bottleneck quickly.
First: Is it local, inter-VLAN, or across the VPN?
- From a host in the same VLAN, ping the target IP. If that fails, it’s local (switching, DHCP, device down).
- From a host in a different VLAN at the same site, ping the gateway then the target. If gateway works but target fails, it’s firewall/routing/policy.
- From office to warehouse (or vice versa), ping across the VPN. If local works but cross-site fails, focus on VPN/routing/MTU.
Second: Confirm routing and policy before staring at packets
- Check route tables on the routers (local and remote). Missing route beats “mystery bug” almost every time.
- Check firewall counters/logs for denies. If you don’t log denies during rollout, you’re debugging blind.
- Confirm NAT is not being applied to internal/VPN traffic.
Third: Validate MTU and fragmentation across the VPN
- Use PMTU pings with -M do and adjust MTU on tunnel interfaces if needed.
- Video streams failing while pings succeed is a classic MTU symptom: small packets work; large ones get blackholed.
Fourth: Only then capture traffic
- Capture on the VLAN interface and on the tunnel interface. If you see it on VLAN but not on tunnel, routing/policy is dropping it.
- If you see it on tunnel but not at destination, it’s remote-side policy/routing—or the destination is lying about being up.
Fifth: Check for “network isn’t the network”
- Camera CPU pegged, NVR storage full, or DNS broken can look like network issues.
- Always confirm service health: is RTSP port open, is the NVR recording, is time correct?
Common mistakes: symptom → root cause → fix
1) “Cameras randomly go offline”
Symptom: intermittent camera loss, especially when the warehouse is busy.
Root cause: uplink saturation or bufferbloat; video + scanner traffic fighting on the same link/queue.
Fix: keep video local to site; apply QoS for OT/scanner traffic; upgrade uplinks; cap camera bitrates; avoid Wi‑Fi for fixed cameras.
2) “VPN is up but nothing routes”
Symptom: tunnel handshake ok, but subnets unreachable.
Root cause: missing routes, wrong AllowedIPs/phase2 selectors, or asymmetric routing.
Fix: ensure both sides have routes to each other’s VLAN subnets via the tunnel; verify firewall allows forward between wg0 and VLAN interfaces.
3) “Everything works except the NVR web UI from remote”
Symptom: ping works, but HTTPS/HTTP times out or loads partially.
Root cause: MTU blackhole over VPN or MSS clamping missing.
Fix: lower tunnel MTU; enable MSS clamping on firewall; retest with large-payload ping and real browser traffic.
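A minimal nftables sketch of that clamp, matching the wg0 interface used elsewhere in this article; clamping to the route MTU is the lazy-but-safe option:

# MSS clamping for traffic crossing the tunnel (sketch)
table inet mangle {
  chain forward {
    type filter hook forward priority mangle; policy accept;
    oifname "wg0" tcp flags syn tcp option maxseg size set rt mtu
    iifname "wg0" tcp flags syn tcp option maxseg size set rt mtu
  }
}

If clamping alone doesn't do it, set an explicit lower MTU on wg0 (WireGuard's usual 1420 can still be too big behind PPPoE or nested tunnels) and re-run the Task 5 ping test before blaming the NVR.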
4) “Guest Wi‑Fi can see printers/servers”
Symptom: guest users can reach internal IPs.
Root cause: guest SSID bridged to corp VLAN, or firewall rule order allows it.
Fix: map guest SSID to guest VLAN; enforce deny at gateway; verify AP trunk tagging and switchport VLAN config.
5) “New camera install breaks half the network”
Symptom: sudden broadcast storm, high CPU on switches, packet loss everywhere.
Root cause: loop introduced (two ports patched together), or a cheap switch doing something creative.
Fix: enable STP (and BPDU guard on access ports); shut flapping ports; disallow unmanaged switches on camera runs unless explicitly approved.
6) “Warehouse scanners are slow after segmentation”
Symptom: scanner app delays after moving scanners into OT VLAN.
Root cause: missing DNS access, blocked ports to app servers, or reliance on broadcast discovery that doesn’t cross VLANs.
Fix: explicitly allow DNS/NTP and required app ports; replace broadcast discovery with DNS records or a small helper service; document dependencies.
7) “We can’t update camera firmware anymore”
Symptom: firmware updates fail after blocking camera internet access.
Root cause: updates require cloud endpoints or the VMS is acting as a proxy you accidentally blocked.
Fix: update via NVR/VMS if supported; temporarily allow egress to specific destinations during maintenance windows; log and remove the rule afterward.
Three corporate mini-stories from the trenches
Incident caused by a wrong assumption: “The VPN is encrypted, so it’s safe”
A mid-sized company added a warehouse and rolled out a site-to-site VPN. The plan was simple: “connect the networks.”
They routed the entire office /16 to the warehouse and vice versa. No segmentation changes. No meaningful firewalling. The VPN was considered the security boundary.
It worked beautifully until a warehouse PC—used for printing labels and occasionally “checking email”—caught a commodity infostealer.
The malware did what malware does: scanned the network, found file shares and an old RDP endpoint, and walked right into the office environment.
The VPN didn’t cause the breach; it removed friction.
The most painful part wasn’t remediation. It was the post-incident meeting where everyone realized they had treated “encrypted transport”
as “segmentation.” Those are different problems. Encryption protects packets from eavesdropping; it doesn’t stop a compromised host from being rude.
The fix was unglamorous: split the warehouse into OT, cameras, and admin VLANs; locked down inter-VLAN flows; restricted the VPN to only necessary subnets;
and forced remote admin through a dedicated VPN subnet with MFA. Nothing fancy. Just boundaries that matched intent.
Optimization that backfired: “Let’s centralize all camera recording over the WAN”
Another org wanted a “single pane of glass” for video. They placed the NVR/VMS in the office data room and streamed all warehouse cameras across the VPN.
On paper: easier management, one storage system, one backup plan.
In practice: the WAN link became the critical path for physical security. When the warehouse got busy, camera streams degraded.
When the ISP had a brownout, the warehouse lost recording entirely. Nobody was happy, including the security team that had been promised “more reliability.”
They tried to “optimize” by increasing VPN compression and tweaking ciphers. CPU went up, latency got worse, and the video didn’t improve.
They were tuning the tunnel when the real constraint was physics: bandwidth and queueing on the uplink.
The eventual solution was predictable: a local warehouse recorder with local retention, plus scheduled replication of motion events and low-res streams to the office.
Central view remained possible, but it stopped being a single point of failure. The WAN became an enhancer, not a requirement.
Boring but correct practice that saved the day: “Document the VLANs and test like you mean it”
A retail distributor had an office, two warehouses, and a security vendor that occasionally needed access to camera views.
Their network lead insisted on a written matrix: VLANs, subnets, gateways, DHCP scopes, and a simple table of allowed flows.
No one was excited. It felt like bureaucracy with extra steps.
Months later, a switch at Warehouse A died and had to be replaced fast. A contractor installed the replacement and—because contractors are human—
mis-tagged the trunk: cameras landed in the corp VLAN. Everything “worked,” but it was wrong.
The reason it got caught quickly: the documented flow matrix and a basic validation script the team ran after any change.
The script checked that camera VLAN devices couldn’t reach the internet and that corp VLAN couldn’t hit camera IPs directly.
The test failed immediately. They fixed the trunk config before security footage quietly became an unmonitored exfiltration channel.
The lesson wasn’t “contractors bad.” The lesson was that boring controls—documentation, repeatable tests, and default-deny policies—turn mistakes into alerts,
not incidents.
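A sketch of what that kind of post-change check can look like, in plain shell (using nc), run from a test host sitting in the camera VLAN. The corp test address is a placeholder; the "must fail" expectations are the whole point.

#!/bin/sh
# post-change-check.sh (sketch) - run from a test host in the camera VLAN after switch/firewall changes.
# Expectations mirror the flow matrix: NVR reachable, corp and internet NOT reachable.
set -u
fail=0

must_work() {   # usage: must_work host port label
  if nc -z -w 3 "$1" "$2" >/dev/null 2>&1; then echo "OK   $3"; else echo "FAIL $3"; fail=1; fi
}

must_be_blocked() {   # usage: must_be_blocked host port label
  if nc -z -w 3 "$1" "$2" >/dev/null 2>&1; then echo "FAIL $3 (should be blocked)"; fail=1; else echo "OK   $3 blocked"; fi
}

must_work       10.10.20.20   554  "camera VLAN -> NVR RTSP"
must_be_blocked 10.10.10.10   445  "camera VLAN -> corp host (placeholder IP)"
must_be_blocked 93.184.216.34 80   "camera VLAN -> internet HTTP"

exit $fail

Wire it into the change process ("no ticket closes until the check passes") and a mis-tagged trunk becomes a failed test instead of a quiet exposure.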
Checklists / step-by-step plan
Phase 1: Design decisions (do this before touching cables)
- Define trust zones: Corp, Servers, Cameras, Warehouse/OT, Guest, Management.
- Pick a consistent IP plan per site (e.g., 10.10/16 office, 10.20/16 warehouse) and /24 per VLAN.
- Decide where the NVR/VMS lives (prefer local recording per site; central view is optional).
- Decide VPN model: hub-and-spoke unless you have strong reasons otherwise.
- Write an allowlist matrix: which VLAN can reach which service (ports) in which direction.
Phase 2: Switching (L2) build
- Create VLANs on switches; name them clearly (CAMERAS, GUEST, MGMT).
- Configure trunks between switch and firewall/router; allow only needed VLANs.
- Configure access ports: cameras as access VLAN 30, scanners as VLAN 40, etc.
- Enable STP and BPDU guard on edge ports to reduce loop blast radius.
- If supported, enable DHCP snooping and dynamic ARP inspection on user/camera VLANs.
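Vendor syntax varies, but the per-port intent looks roughly like this IOS-style sketch; port numbers and exact feature names will differ on your hardware:

! Trunk to the firewall/router: tag only the VLANs that exist on purpose, and trust it for DHCP snooping
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40,50,99
 ip dhcp snooping trust

! Camera access port: one VLAN, edge-port STP behavior, loop protection on
interface GigabitEthernet1/0/12
 switchport mode access
 switchport access vlan 30
 spanning-tree portfast
 spanning-tree bpduguard enable

! Ignore rogue DHCP servers on user/camera VLANs
ip dhcp snooping
ip dhcp snooping vlan 10,30,40,50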
Phase 3: Routing + firewalling (L3) build
- Assign gateway IPs to VLAN interfaces on the firewall/router.
- Set default forward policy to deny.
- Add explicit allows:
- Cameras → NVR ports
- Corp → Server apps (ERP, email, file services)
- Warehouse/OT → specific server apps + printing
- VPN users → management interfaces (limited)
- Block camera VLAN egress to internet; allow NTP/DNS to internal services.
- Enable logging on denies between key VLANs during rollout.
Phase 4: VPN integration
- Bring up site-to-site tunnel between site firewalls.
- Route only required subnets across the VPN.
- Confirm MTU with do-not-fragment pings; set tunnel MTU/MSS clamp if needed.
- Test business-critical flows: scanners to ERP, NVR to cameras, IT to management VLAN.
Phase 5: Operational hygiene (the part that keeps it working)
- Back up firewall and switch configurations after changes.
- Keep an IP/VLAN map in version control (even if it’s just a text file).
- Monitor VPN tunnel health and interface errors/drops.
- Run quarterly access review: remove old rules, vendors, and exceptions.
- Patch cameras/NVR on a schedule; rotate credentials; disable unused services.
FAQ
1) Do I really need a separate VLAN for cameras?
Yes. Cameras are high-risk and high-bandwidth. A dedicated camera VLAN reduces lateral movement risk and makes bandwidth controls feasible.
Put the NVR/VMS behind a firewall rule boundary, not in the same trust zone as everything else.
2) Can I just put cameras on the guest network?
No. Guest networks are usually internet-only and not designed for stable internal services. Also, you don’t want cameras talking to the internet.
Cameras belong in a restricted internal VLAN with explicit access to the NVR and time/DNS services.
3) Should I route between VLANs on an L3 switch or the firewall?
If you need high east-west throughput and simple policy, L3 switching is fine. If you need auditable, centralized security policy,
route on the firewall. Many orgs do both: L3 switch for core routing, firewall for inter-zone policy via VRFs or transit VLANs.
If you’re small-to-mid size, firewall inter-VLAN routing is usually easiest to operate.
4) Is WireGuard “more secure” than IPsec?
Security is more about configuration and key management than brand names. WireGuard’s smaller codebase and modern defaults help reduce foot-guns.
IPsec is proven and widely supported. Pick the one you can operate consistently and monitor properly.
5) Do I need to encrypt traffic inside the LAN if I already have VLANs?
VLANs don’t encrypt; they separate broadcast domains. If you have sensitive traffic (credentials, video access), use TLS/HTTPS and secure protocols.
For management access, use SSH/HTTPS and consider a management VPN subnet.
6) Why not extend the office VLAN to the warehouse so everything “just works”?
Because it makes failures travel. Broadcast, loops, and misconfigurations become cross-site outages. Routed designs are more predictable,
easier to secure, and easier to troubleshoot. If a vendor requires L2 adjacency, challenge the requirement or isolate it tightly.
7) How do I give a security vendor access to camera views without giving them the keys to everything?
Terminate vendor access on a dedicated VPN profile/subnet. Allow that subnet to reach only the VMS/NVR UI (and maybe a jump host),
not the camera VLAN. Log the access. Time-limit it if possible. Vendors don’t need your entire network; they need a narrow doorway.
8) What’s the simplest way to keep camera timestamps correct?
Provide internal NTP and allow camera VLAN to reach it. Block arbitrary internet NTP. If you have multiple sites,
sync site NTP sources and keep timezone configuration consistent across VMS and cameras.
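A minimal sketch with chrony as the internal time source; the upstream pool and allow ranges are assumptions, and the camera VLAN still needs a firewall allow for UDP/123 to this host:

# /etc/chrony/chrony.conf (excerpt) on the internal NTP server
pool pool.ntp.org iburst      # the NTP box itself may reach upstream time; cameras may not
allow 10.10.30.0/24           # office camera VLAN
allow 10.20.30.0/24           # warehouse camera VLAN (or run one NTP source per site)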
9) How do I prevent “someone plugged a random switch in the warehouse” incidents?
Use port security (MAC limits), DHCP snooping, and BPDU guard on access ports. Label ports. Educate facilities teams.
And accept that this will still happen—so monitor for it and make it a small incident, not a headline.
10) What if I need to view warehouse cameras from the office?
Prefer office users connecting to the warehouse VMS/NVR UI over the VPN, not direct camera access. If bandwidth is a concern,
use substreams for live view and keep full-resolution recording local.
Conclusion: practical next steps
If you want office + warehouse + cameras to coexist safely, do the boring things on purpose:
segment with VLANs, enforce intent with firewall policy, and use the VPN as transport—not as a magical security blanket.
Keep camera traffic local. Route between sites. Log denies during rollout. Shrink your trust zones until they match reality.
Next steps you can execute this week:
- Write your VLAN/subnet plan (even if it’s a one-page doc) and reserve VLAN IDs for growth.
- Create a camera VLAN and move two cameras as a pilot; block camera egress to the internet and confirm NVR recording still works.
- Restrict the site-to-site VPN to only required subnets; remove any “route everything” shortcuts.
- Run the MTU test across the VPN and fix fragmentation issues before users discover them through broken video.
- Turn on deny logging between zones for two weeks, then clean up rules based on what you learn.
The goal isn’t perfection. It’s a network where failures are contained, changes are predictable, and the cameras don’t get to have opinions about your security posture.