OpenVPN: Set It Up Correctly (and Why It’s Often Slower Than WireGuard)

You deploy a VPN expecting “secure tunnel” and get “remote employees buffering their own spreadsheets.”
Tickets arrive with the same three words: “VPN is slow.” Somewhere between crypto, MTU, queues, and an innocent checkbox that turned UDP into TCP,
your tunnel became a performance art piece.

OpenVPN can be solid, stable, and operationally boring—in the best way. But if you’re comparing raw throughput and latency,
WireGuard often wins for structural reasons. This isn’t fanboy stuff. It’s architecture, context switches, and where the packets get handled.

A mental model of “VPN speed” that actually helps

“VPN speed” is not one thing. It’s a stack of bottlenecks waiting for the right workload to expose them.
If you don’t name the layers, you’ll optimize the wrong one and declare victory while users keep suffering.

Layer 1: Path quality (loss, jitter, and bufferbloat)

VPNs amplify mediocre networks. If the last mile drops packets, encryption doesn’t fix it; it makes loss harder to diagnose.
A tunnel also changes packet size and pacing, which interacts with buffers in consumer routers and LTE networks.
Latency spikes feel like “slow VPN” even when throughput tests look fine.

Layer 2: Encapsulation overhead (MTU/MSS)

Your applications think they’re sending 1500-byte Ethernet frames. Your VPN adds headers.
If the effective MTU is smaller and you don’t clamp MSS or adjust MTU, you get fragmentation or blackholed packets.
Fragmentation looks like random stalls. Blackholing looks like “some sites load, some don’t.”
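
A rough byte budget makes the layer concrete. The numbers are illustrative, not exact; real overhead depends on cipher, framing options, and whether the outer transport is IPv4 or IPv6:

  1500 bytes      assumed path MTU
  -  28 bytes     outer IPv4 + UDP headers for the tunnel transport
  - ~25-45 bytes  OpenVPN data-channel framing, packet-id, and auth tag (varies by cipher/options)
  ≈ 1430-1450 bytes  largest inner packet that fits without fragmentation

Anything bigger than that budget either fragments (stalls) or gets dropped by a hop that refuses to fragment (blackholes).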

Layer 3: Crypto and packet processing

This is where OpenVPN vs WireGuard becomes visible. WireGuard lives in the kernel on Linux and keeps the hot path lean.
Classic OpenVPN is mostly userspace. Userspace isn’t inherently slow, but it pays extra in context switches, copies,
and scheduling, especially at high packet rates.

Layer 4: Transport choices (UDP vs TCP, and the TCP-over-TCP trap)

UDP is usually the right choice for a VPN data plane. TCP inside TCP can create a feedback loop where both layers retransmit,
both layers back off, and performance collapses under loss. It’s not theoretical; it’s how you get 2 Mbps on a gig link.

Interesting facts and history (the stuff that explains today’s tradeoffs)

  • OpenVPN dates to 2001, built around TLS and a flexible userland design. That flexibility is why it runs almost everywhere—and why it’s heavier.
  • WireGuard started in the mid-2010s with a deliberately small codebase and modern crypto primitives, optimized for simplicity and auditability.
  • OpenVPN became the “enterprise default” partly because it was early and worked through NAT/firewalls well, long before “zero trust” was a slide deck.
  • OpenVPN’s cipher story evolved from older CBC modes and HMAC combos toward AES-GCM and modern defaults, but legacy configs linger for years.
  • For a long time, OpenVPN performance tuning meant “pick UDP and pray”; later improvements like ovpn-dco (data channel offload) arrived to reduce userspace overhead.
  • WireGuard uses Noise-based handshakes and a concept of “cryptokey routing,” which reduces control-plane complexity and avoids runtime renegotiation drama.
  • OpenVPN’s feature set is huge (auth plugins, pushed routes, scripting hooks, multiple auth methods). Each feature is a lever, and levers invite accidents.
  • On Linux, kernel fast paths matter: getting packets processed without bouncing between kernel and userspace is often the difference between “fine” and “why is one core at 100%?”

One operational idea from the reliability world applies here, usually credited to Edsger Dijkstra: simplicity is a prerequisite for reliability.

Set up OpenVPN correctly: the baseline that stops self-inflicted pain

Pick a sane topology: routed (tun) unless you have a very specific reason

Use dev tun and route IP subnets. Bridging (tap) drags L2 broadcast into the tunnel, increases chatter,
and makes troubleshooting feel like chasing raccoons in a drop ceiling.

Use UDP for the data plane

Unless your environment absolutely blocks UDP, choose it. TCP mode is for “I’m stuck in a hotel Wi‑Fi portal from 2009.”
If you control both ends, don’t do TCP as a default. It will betray you under loss.
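
As a sketch, the routed-TUN-over-UDP baseline is only a handful of server directives. The subnet, port, and pushed route below are placeholders, not recommendations for your network:

# server.conf excerpt: routed tun over UDP (values are illustrative)
port 1194
proto udp
dev tun
topology subnet
server 10.8.0.0 255.255.255.0        # hands clients addresses from a routed pool
push "route 10.0.0.0 255.255.255.0"  # push only the internal subnets clients actually need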

Use modern, hardware-accelerated crypto

On typical x86 servers, AES-GCM with AES-NI is fast. ChaCha20-Poly1305 is great too, especially on devices without AES acceleration.
Don’t get clever with exotic ciphers “for security.” Security includes availability, and a VPN that crawls encourages users to bypass it.
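
On OpenVPN 2.5+ the negotiable data-channel ciphers are listed with data-ciphers. A common modern set looks like the sketch below; trim it to what your client fleet actually supports:

# prefer AES-GCM where AES-NI exists; ChaCha20 helps clients without it
data-ciphers AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305
data-ciphers-fallback AES-256-GCM    # only for peers too old to negotiate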

Keep the config boring

“Boring” means: minimal plugins, minimal scripts, minimal custom pushes, minimal per-client overrides.
Every extra knob is a new failure mode, and OpenVPN has enough knobs to qualify as a pipe organ.

Plan MTU and MSS up front

You want the tunnel to avoid fragmentation across the real internet path. In practice you clamp MSS for TCP sessions
so endpoints don’t try to send too-large segments. This is not optional if you want “works everywhere.”
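
A hedged example of what "plan MTU up front" looks like in config, assuming a measured effective path MTU around 1420 (the exact numbers are placeholders; measure your own path first):

# derived from a measured path MTU of ~1420, not from hope
tun-mtu 1400
mssfix 1360

Pair this with the iptables TCPMSS clamp shown in Task 12 below for TCP flows the server forwards on behalf of clients.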

Logging and observability are part of setup

If you can’t answer “is it CPU, MTU, loss, or congestion?” in five minutes, you didn’t set up a production VPN.
You set up a mystery novel.
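
At minimum, make OpenVPN tell you what it is doing. These directives are standard; the paths and interval are placeholders:

status /run/openvpn-server/status.log 10   # live client table, refreshed every 10 seconds
log-append /var/log/openvpn/server.log     # persistent daemon log for negotiation and MTU warnings
verb 3                                     # detailed enough to debug, quiet enough to keep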

Joke #1: OpenVPN configs are like kitchen junk drawers—everything works until you need one thing quickly.

Why OpenVPN is often slower than WireGuard (without religion)

Userspace vs kernel datapath

WireGuard on Linux runs in-kernel, so packets enter the kernel, get encrypted/decrypted, and move on with fewer transitions.
Classic OpenVPN typically reads packets from a TUN device in userspace, encrypts/decrypts, then writes back.
That means more context switches and memory copies. At low throughput you won’t care. At high packet rates you will.

Packet rate matters more than bandwidth for CPU. Lots of small packets (VoIP, gaming, RPC, chatty microservices)
can max a CPU core long before you saturate a link. OpenVPN’s overhead shows up here first.

Protocol complexity and control-plane chatter

OpenVPN is built atop TLS, supports renegotiation, multiple auth mechanisms, and a rich set of extensions.
That’s powerful. It’s also more moving parts: more state, more code paths, and more ways to land on a slow path.
WireGuard’s model is intentionally narrower.

Crypto choices and implementation details

OpenVPN can be very fast with the right cipher, but it can also be painfully slow with the wrong one.
AES-CBC + HMAC might be okay, but AES-GCM is typically better, and outdated defaults can still lurk in older deployments.
WireGuard standardizes modern primitives; you don’t “pick” from a menu, so you don’t pick the wrong thing.

TCP-over-TCP meltdown is real

If you run OpenVPN over TCP and carry TCP application traffic inside it, you’ve created two congestion control loops.
Loss triggers retransmissions at both layers; latency rises; windows shrink; throughput collapses.
WireGuard is UDP-only; it avoids this entire category by design.

MTU handling and fragmentation penalties

Both OpenVPN and WireGuard can suffer if MTU is wrong. But OpenVPN deployments more often inherit legacy settings like
fixed MTUs, extra compression (don’t), and hand-me-down scripts. WireGuard configs tend to be shorter and newer,
which means fewer historical landmines.

DCO changes the comparison

OpenVPN with Data Channel Offload (ovpn-dco) moves the data plane closer to the kernel, cutting overhead substantially.
If you need OpenVPN’s enterprise features but want WireGuard-ish performance, DCO is the first thing to evaluate.
Not every platform supports it equally, and operational maturity varies by distro.

Joke #2: Running TCP over TCP for “reliability” is like wearing two raincoats and complaining you can’t move your arms.

Fast diagnosis playbook: find the bottleneck in minutes

When someone says “OpenVPN is slow,” you need a repeatable triage order. Don’t start by changing ciphers.
Start by proving where the time and drops live.

First: confirm the obvious (transport, path, MTU)

  1. Check whether you’re using UDP. If TCP: suspect TCP-over-TCP meltdown immediately.
  2. Check MTU/MSS symptoms. If “some sites hang” or “large downloads stall,” suspect MTU.
  3. Check loss and jitter. If loss exists, throughput will fall even with perfect crypto.

Second: confirm whether the server is CPU or packet-rate bound

  1. Look for a single core pinned. OpenVPN often bottlenecks on one thread handling crypto or I/O.
  2. Measure softirq pressure and NIC drops. If you’re dropping before OpenVPN sees packets, tuning OpenVPN is cosmetic.
  3. Check cipher and kernel offload options. If you’re running a slow cipher or missing DCO, you’ll feel it.

Third: validate routing/NAT and queueing

  1. Check qdisc and bufferbloat. Wrong queue discipline can add huge latency under load.
  2. Confirm correct routing. If traffic hairpins through a firewall or crosses AZs needlessly, your “VPN problem” is topology.
  3. Measure throughput with and without the tunnel. You need a baseline to avoid chasing ghosts.

Practical tasks with commands: what to run, what it means, what you decide

These tasks assume a Linux OpenVPN server. Adjust paths for your distro. Each task includes: command, example output, meaning, and decision.
Run them in roughly this order when you’re in production fire-drill mode.

Task 1: Confirm OpenVPN is using UDP (not TCP)

cr0x@server:~$ sudo ss -lunpt | grep openvpn
udp   UNCONN 0      0            0.0.0.0:1194       0.0.0.0:*    users:(("openvpn",pid=1324,fd=6))

What it means: The server is listening on UDP/1194. If you see tcp instead, expect worse performance under loss.
Decision: If TCP is present by accident, switch to UDP unless you have a strict firewall constraint.

Task 2: Check negotiated cipher and data channel parameters

cr0x@server:~$ sudo journalctl -u openvpn-server@server --since "1 hour ago" | egrep -i "Data Channel|Cipher|peer info" | tail -n 8
Dec 27 09:20:11 server openvpn[1324]: Control Channel: TLSv1.3, cipher TLS_AES_256_GCM_SHA384
Dec 27 09:20:11 server openvpn[1324]: [client1] Peer Connection Initiated with [AF_INET]203.0.113.10:51234
Dec 27 09:20:11 server openvpn[1324]: [client1] Data Channel: cipher 'AES-256-GCM', peer-id: 0

What it means: You’re on AES-256-GCM for the data channel. Good default on AES-NI hardware.
Decision: If you see CBC-only ciphers or weird legacy settings, modernize. If clients are weak ARM devices, consider ChaCha20 if supported by your deployment.

Task 3: Check whether DCO is active (if you expect it)

cr0x@server:~$ sudo modprobe -v ovpn-dco && lsmod | grep ovpn
insmod /lib/modules/6.1.0-18-amd64/kernel/drivers/net/ovpn/ovpn-dco.ko
ovpn_dco               94208  0

What it means: The DCO module exists and is loaded.
Decision: If you expected DCO but it’s missing, stop tuning userland first. Fix the platform capability and OpenVPN build/package choices.

Task 4: Measure server CPU saturation and single-thread pinning

cr0x@server:~$ top -b -n 1 | head -n 12
top - 09:31:02 up 32 days,  3:11,  2 users,  load average: 3.12, 2.98, 2.71
Tasks: 189 total,   1 running, 188 sleeping,   0 stopped,   0 zombie
%Cpu(s): 12.3 us,  2.1 sy,  0.0 ni, 85.2 id,  0.0 wa,  0.0 hi,  0.4 si,  0.0 st
MiB Mem :  32118.5 total,  11234.7 free,   4231.9 used,  16651.9 buff/cache
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 1324 nobody    20   0  143832  21572   9856 R  98.7   0.1  10:44.02 openvpn

What it means: OpenVPN is burning one CPU core (~99%). This is classic throughput ceiling behavior for userland OpenVPN.
Decision: If this correlates with “slow VPN,” you’re CPU/packet-rate bound. Consider DCO, scaling out, or switching workloads to WireGuard.

Task 5: Confirm AES-NI / crypto acceleration is available

cr0x@server:~$ grep -m1 -o 'aes\|avx\|pclmulqdq' /proc/cpuinfo | sort -u
aes
pclmulqdq

What it means: CPU has AES and PCLMULQDQ flags—good signs for fast AES-GCM.
Decision: If AES flags are absent (common on some small VMs or old hardware), expect AES-GCM to be slower. Consider ChaCha20 if your OpenVPN stack supports it cleanly.

Task 6: Check NIC errors and drops (don’t blame OpenVPN for a bad NIC day)

cr0x@server:~$ ip -s link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    RX:  bytes packets errors dropped  missed   mcast
    9876543210 8432190      0    1274       0  10233
    TX:  bytes packets errors dropped carrier collsns
    8765432109 7321987      0       9       0      0

What it means: RX drops exist. Could be bursts, ring buffer limits, or upstream congestion.
Decision: If drops climb during complaints, investigate NIC queues, driver, and host congestion before editing OpenVPN configs.
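
If the drop counters climb while users complain, look one layer lower before editing OpenVPN. Two standard checks (the interface name is an assumption):

cr0x@server:~$ ethtool -g eth0                               # current vs. maximum RX/TX ring sizes
cr0x@server:~$ ethtool -S eth0 | grep -iE 'drop|miss|fifo'   # driver-level drop and overflow counters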

Task 7: Look for UDP receive buffer pressure

cr0x@server:~$ netstat -su | egrep -i "receive errors|packet receive errors|RcvbufErrors|InErrors"
    204 packet receive errors
    198 RcvbufErrors

What it means: Kernel is dropping UDP packets due to receive buffer limits.
Decision: Increase socket buffers and review sysctl settings; otherwise you’ll chase “VPN slowness” that’s actually kernel drops.

Task 8: Inspect key sysctls for UDP buffers and forwarding

cr0x@server:~$ sudo sysctl net.core.rmem_max net.core.wmem_max net.ipv4.ip_forward net.ipv4.udp_rmem_min
net.core.rmem_max = 212992
net.core.wmem_max = 212992
net.ipv4.ip_forward = 1
net.ipv4.udp_rmem_min = 4096

What it means: Buffers are small for a busy VPN server.
Decision: If you saw RcvbufErrors, raise these carefully and monitor. If ip_forward is off, routing will break and you’ll get “VPN connects but nothing works.”
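
A hedged sketch of "raise these carefully": the values below are a common starting point for a busy concentrator, not measured truth, so re-check RcvbufErrors and latency after applying them:

cr0x@server:~$ sudo tee /etc/sysctl.d/90-vpn-buffers.conf <<'EOF'
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
EOF
cr0x@server:~$ sudo sysctl --system

OpenVPN's own sndbuf/rcvbuf directives size the per-socket buffers within these kernel limits; if you set them, keep the two consistent.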

Task 9: Confirm routing and that return paths exist

cr0x@server:~$ ip route show table main | head
default via 198.51.100.1 dev eth0
10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.1
198.51.100.0/24 dev eth0 proto kernel scope link src 198.51.100.20

What it means: The VPN subnet exists and is attached to tun0.
Decision: If the route is missing, your OpenVPN instance isn’t creating the tunnel device or is misconfigured. Fix that before performance work.

Task 10: Check NAT rules if clients need internet access

cr0x@server:~$ sudo iptables -t nat -S | egrep 'POSTROUTING|MASQUERADE'
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

What it means: VPN client traffic is NATed out eth0.
Decision: If clients complain “connected but no internet,” missing NAT is a usual suspect (assuming you’re doing full-tunnel).

Task 11: Detect MTU blackholes with ping + DF (don’t fragment)

cr0x@server:~$ ping -c 3 -M do -s 1472 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 1472(1500) bytes of data.
ping: local error: message too long, mtu=1420
ping: local error: message too long, mtu=1420
ping: local error: message too long, mtu=1420

--- 1.1.1.1 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2042ms

What it means: Effective MTU along the path (from this host/interface context) is around 1420. Your 1500-byte assumption fails.
Decision: Set an appropriate tunnel MTU and/or clamp MSS. If you ignore this, you’ll get intermittent stalls and “works on Wi‑Fi, fails on LTE.”

Task 12: Check MSS clamping is in place (when routing/NAT requires it)

cr0x@server:~$ sudo iptables -t mangle -S | grep TCPMSS
-A FORWARD -o tun0 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
-A FORWARD -i tun0 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

What it means: MSS is being clamped based on PMTU, reducing fragmentation issues for TCP flows.
Decision: If missing and you have MTU issues, add clamping and retest the “some sites hang” symptom.

Task 13: Measure baseline throughput outside the VPN

cr0x@server:~$ iperf3 -c 198.51.100.50 -P 4 -t 10
[SUM]   0.00-10.00  sec  4.82 GBytes  4.14 Gbits/sec  0             sender
[SUM]   0.00-10.00  sec  4.78 GBytes  4.10 Gbits/sec  12            receiver

What it means: The raw network path can do ~4.1 Gbit/s.
Decision: If VPN throughput is far below this, it’s not the uplink. It’s your tunnel processing, MTU, or endpoint constraints.

Task 14: Measure throughput through the VPN (server-to-client or client-to-server)

cr0x@server:~$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------

What it means: You’re ready to test from a connected VPN client to the server’s VPN IP.
Decision: Compare VPN vs non-VPN throughput. If the gap is huge and server CPU is pinned, you’ve found your limit.
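
Then, from a connected client, point iperf3 at the server's tunnel address (10.8.0.1 in the Task 9 routing table; substitute yours) and test both directions:

cr0x@client:~$ iperf3 -c 10.8.0.1 -P 4 -t 10        # client-to-server through the tunnel
cr0x@client:~$ iperf3 -c 10.8.0.1 -P 4 -t 10 -R     # reverse mode: server-to-client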

Task 15: Inspect qdisc (queue discipline) for latency under load

cr0x@server:~$ tc -s qdisc show dev eth0
qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
 Sent 8765432109 bytes 7321987 pkt (dropped 9, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

What it means: fq_codel is active, drops are low, backlog is empty. This is usually good for latency.
Decision: If you see a dumb FIFO qdisc and big backlog, expect bufferbloat. Fix the queueing before blaming the VPN.
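
If you do find a plain FIFO with a growing backlog, replacing the root qdisc is a one-line change. Treat it as a sketch to validate in a change window rather than a blanket default:

cr0x@server:~$ sudo tc qdisc replace dev eth0 root fq_codel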

Task 16: Check per-process packet rate and context switches

cr0x@server:~$ pidstat -p $(pgrep -n openvpn) -w 1 3
Linux 6.1.0-18-amd64 (server)  12/27/2025  _x86_64_  (8 CPU)

09:44:01      UID       PID   cswch/s nvcswch/s  Command
09:44:02      65534    1324    120.00   8400.00  openvpn
09:44:02      65534    1324    115.00   8600.00  openvpn
09:44:03      65534    1324    118.00   8550.00  openvpn

What it means: Many non-voluntary context switches can indicate contention and scheduler pressure, common when a userspace datapath is busy.
Decision: If this is high during throughput tests, you’ll benefit from DCO, fewer packets (MTU/aggregation), or moving heavy flows to a more efficient tunnel.

Three corporate mini-stories (realistic, anonymized, mildly painful)

Mini-story 1: The incident caused by a wrong assumption

A mid-sized company rolled out OpenVPN to connect retail sites back to headquarters. The rollout looked clean: same appliance model, same config template,
same ISP tier on paper. They validated one pilot location and then cloned it to forty more.

Two weeks later, support calls started: “inventory tool times out,” “credit card batching fails,” “VPN is flaky.” The VPN stayed connected.
Pings worked. But larger transfers would stall unpredictably. Engineers did what engineers do under pressure: they changed ciphers and bumped keepalives.
Nothing changed.

The wrong assumption was simple: “MTU is 1500 everywhere.” Several ISPs were delivering PPPoE or otherwise constrained links.
Inside the tunnel, packets grew, got fragmented, and some paths dropped fragments. Classic PMTU blackholing.
The stalls were most visible on larger HTTPS responses and database queries with big result sets.

The fix was boring: enforce MSS clamping, set a conservative tunnel MTU, and re-test from the worst sites.
The incident closed without heroics, just humility and a permanent note in the deployment checklist:
never assume MTU, always measure it.

Mini-story 2: The optimization that backfired

An internal platform team wanted more throughput from their OpenVPN concentrator. They found an old blog post suggesting larger socket buffers
and cranked net.core.rmem_max and wmem_max dramatically. They also increased OpenVPN’s internal buffers.
Synthetic tests improved on a clean lab network. They shipped it.

A month later, a different complaint arrived: “VPN feels laggy during calls.” Not “slow downloads”—lag. Video meetings stuttered.
SSH sessions felt sticky. The throughput graphs looked fine. The team was confused, which is a popular corporate emotion.

The backfire was queueing. Huge buffers plus a busy link created bufferbloat: packets sat in queues longer, and interactive traffic suffered.
Voice and video are latency-sensitive. They don’t care that your tunnel can push more bits per second if those bits arrive late.

They rolled back to moderate buffers, ensured fq_codel on the egress interface, and shaped traffic slightly below the physical uplink
to control queue location. Throughput dropped a bit. User experience improved a lot. The real lesson: “faster” is not a scalar.
It depends on what your users are doing and what your network is hiding.

Mini-story 3: The boring but correct practice that saved the day

A security-conscious org ran both WireGuard and OpenVPN: WireGuard for managed laptops, OpenVPN for contractors and “weird devices”
that couldn’t run the standard client. The OpenVPN service wasn’t glamorous, so it was treated like any other production service:
pinned package versions, a change window, and a canary.

One day, they needed to rotate certificates and adjust cipher preferences for compliance. The change was scheduled and tested.
The canary group was small but diverse: one macOS client, one Windows client, one Linux client, one mobile client.
They watched logs for negotiation failures and measured throughput and reconnect rates.

The canary caught a surprise: an older contractor client didn’t support the new cipher suite combination the team had chosen.
In production-at-scale, that would have become a Monday morning incident with an executive audience.
Instead, they adjusted the cipher list to keep a compatible fallback and pushed a client upgrade recommendation.

Nothing dramatic happened. That’s the point. The boring practice—canary plus a small compatibility matrix—prevented a compatibility outage.
You don’t get a trophy for this. You get to eat lunch.

Common mistakes: symptom → root cause → fix

1) “VPN is connected, but some websites never finish loading”

Root cause: MTU/PMTU blackholing; fragmentation dropped on some paths; missing MSS clamping.
Fix: Measure effective MTU with DF pings; clamp MSS (TCPMSS --clamp-mss-to-pmtu); set conservative tun-mtu if needed.

2) “It’s fast on office Wi‑Fi but terrible on LTE or hotel networks”

Root cause: Path loss/jitter plus UDP buffer pressure; sometimes UDP is throttled by captive networks; MTU variability.
Fix: Ensure UDP buffers aren’t dropping; consider a secondary TCP listener only for hostile networks; tune MTU; validate with real-world tests.

3) “Throughput tops out at ~100–300 Mbps on a big server”

Root cause: OpenVPN userspace is CPU/packet-rate bound; one core pinned; slow cipher or no AES acceleration.
Fix: Move to AES-GCM with hardware support; enable DCO where possible; scale out horizontally; consider WireGuard for bulk throughput workloads.

4) “Latency explodes during large downloads”

Root cause: Bufferbloat from oversized buffers or poor qdisc; congested uplink; queues living in the wrong place.
Fix: Use fq_codel or CAKE; shape slightly below uplink; don’t blindly inflate socket buffers.
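
A sketch of "shape slightly below uplink" using CAKE, assuming the sch_cake module is available and a roughly 1 Gbit/s uplink; set the bandwidth to ~90-95% of what you actually measured:

cr0x@server:~$ sudo tc qdisc replace dev eth0 root cake bandwidth 900mbit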

5) “Clients connect, but can’t reach internal subnets”

Root cause: Missing routes on the server or downstream routers; ip_forward disabled; policy routing/NAT conflicts.
Fix: Confirm routes exist, enable forwarding, push correct routes, ensure return path routing is correct (don’t rely on NAT accidentally).

6) “Random reconnects or stalls under load”

Root cause: Packet loss, UDP drops, receive buffer errors, or overloaded server causing delayed keepalives.
Fix: Check netstat -su errors, NIC drops, CPU pinning; reduce packet rate; increase buffers carefully; scale out.

7) “It got slower after we enabled compression”

Root cause: Compression adds CPU overhead and can interact badly with modern encrypted traffic (already compressed) and security constraints.
Fix: Disable compression for the data channel; focus on MTU, crypto, and routing. Compression is not a free lunch; it’s usually not lunch at all.

8) “We switched to TCP to make it more reliable, and now it’s worse”

Root cause: TCP-over-TCP meltdown under loss and reordering.
Fix: Use UDP. If blocked, use TCP as a fallback profile only, and set expectations: it’s a compatibility mode, not a performance mode.
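
One hedged way to express "fallback profile only" is in the client config itself: list the UDP remote first and a TCP remote last, so TCP is tried only when UDP fails. Hostname and ports are placeholders, and the server (or a second instance) must actually listen on the TCP port:

# client profile excerpt: UDP first, TCP 443 strictly as a fallback
remote vpn.example.com 1194 udp
remote vpn.example.com 443 tcp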

Checklists / step-by-step plan

Step-by-step: a correct OpenVPN baseline for production

  1. Choose routed TUN mode unless you truly need L2 bridging.
  2. Use UDP as the default transport.
  3. Pick modern ciphers (AES-GCM on AES-NI servers; avoid legacy CBC-only setups).
  4. Disable compression for the data channel in modern environments.
  5. Plan MTU: test DF pings; clamp MSS for TCP flows.
  6. Set up routing properly: push routes deliberately; ensure return paths exist.
  7. Decide on NAT vs routed for client internet access and internal reachability. Document it.
  8. Observability: log negotiation, client connects, disconnect reasons; collect CPU, drops, and UDP error counters.
  9. Capacity test using iperf3 through the tunnel with realistic concurrency and packet sizes.
  10. Have a fallback profile (TCP or alternate port) for hostile networks—clearly labeled as slower.
  11. Automate client config distribution and keep a small compatibility matrix.
  12. Patch cadence: treat OpenVPN like any other edge service. Change windows and canaries.

Step-by-step: decide whether to keep OpenVPN or move traffic to WireGuard

  1. Measure the current ceiling: VPN throughput vs CPU utilization and packet drops.
  2. If one core is pinned, evaluate DCO first if you need OpenVPN features.
  3. If you mainly need site-to-site or “managed clients”, WireGuard is often operationally simpler and faster.
  4. Keep OpenVPN for environments needing mature enterprise auth flows and broad client compatibility.
  5. Do a pilot: a small group on WireGuard; compare latency under load, reconnect behavior, and support burden.

FAQ

Is OpenVPN inherently slow?

No. It’s often slower than WireGuard at high packet rates because classic OpenVPN runs the data path in userspace and pays overhead.
With good configs and DCO support, OpenVPN can be plenty fast for many orgs.

Why does OpenVPN sometimes max out one CPU core?

Encryption/decryption and moving packets between kernel and userspace can become single-thread hot paths.
Small packets make it worse because packet rate increases. The symptom is one core pegged while others nap.

Should I run OpenVPN over TCP for reliability?

Not as a default. TCP-over-TCP meltdown is a classic failure mode under loss. Use UDP unless your clients are in networks that block it.
If you must offer TCP, treat it as a fallback profile.

What’s the single most common OpenVPN performance bug?

MTU issues. They masquerade as “random slowness” and “some sites don’t work.” Measure MTU and clamp MSS.
It’s not glamorous, which is why it keeps happening.

Does AES-256-GCM make OpenVPN slower than AES-128-GCM?

Sometimes a little, depending on CPU and implementation, but the bigger wins usually come from architecture and packet rate.
Don’t chase micro-optimizations until you’ve confirmed you’re CPU-bound and using hardware acceleration.

How do I know if I’m packet-loss limited vs CPU limited?

If the server shows UDP receive buffer errors, NIC drops, or path loss (and CPU is not pegged), you’re likely loss/congestion limited.
If OpenVPN is pinned at ~100% on one core during tests, you’re likely CPU/packet-rate limited.

Is WireGuard always faster?

Often, but not always. If your bottleneck is the ISP, Wi‑Fi, LTE loss, or a remote client CPU, switching VPNs won’t create bandwidth.
WireGuard’s edge is strongest on Linux where the in-kernel datapath and lean protocol reduce overhead.

Can I just increase buffers to make OpenVPN faster?

Increasing buffers can reduce drops, but it can also increase latency and make interactive workloads miserable.
Tune buffers based on measured drops and latency under load, and keep sane queue disciplines on egress.

What’s OpenVPN DCO and should I use it?

DCO (Data Channel Offload) moves more packet processing into the kernel, reducing userspace overhead.
If you need OpenVPN features but want better throughput/latency, it’s worth evaluating—carefully, with canaries and client compatibility tests.

When should I keep OpenVPN instead of migrating?

If you rely on mature enterprise auth integrations, complex policy pushes, or broad third-party client compatibility, OpenVPN remains practical.
You can also run both: WireGuard for managed endpoints, OpenVPN for “everything else.”

Conclusion: what to do next

If you want OpenVPN to behave, make it boring: UDP, routed TUN, modern ciphers, no compression, and MTU handled deliberately.
Then instrument it like you mean it—CPU, drops, UDP buffer errors, and queueing.

If you need maximum throughput and low latency at scale on Linux, WireGuard often wins because it was designed to avoid the userspace tax.
If you need OpenVPN’s ecosystem and controls, consider DCO and scale-out designs. Either way: measure first, change second, and write down what you learned so you don’t relearn it at 2 a.m.
