You flip on the Proxmox firewall, add a clean “deny all except SSH” rule, and… nothing. The VM still answers on ports you just blocked, or worse, your changes seem to work for a minute and then evaporate. Meanwhile, the GUI looks smugly confident, like it’s already solved the problem.
Nine times out of ten, this isn’t Proxmox “ignoring” you. It’s Linux doing exactly what you asked—just not in the firewall language you think you’re speaking. iptables and nftables can coexist in a way that looks like cooperation but behaves like a cold war.
What you’re really debugging (Netfilter plumbing, not the GUI)
Proxmox firewall rules ultimately land in the Linux kernel’s packet filter (Netfilter). The Proxmox service (pve-firewall) generates rules and applies them using tooling available on the host. If the host is using nftables and your troubleshooting muscle memory is iptables (or vice versa), you can stare at empty chains for an hour while traffic happily flows.
Here’s the key: “iptables” is both (a) a command-line interface and (b) a legacy rule format. On modern Debian/Proxmox, the iptables command may be a compatibility frontend that writes nftables rules (“iptables-nft”). So when you say “show me iptables rules,” you might be looking at a different view than what’s actually active.
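A quick way to see which personality your iptables binary has before trusting any of its output (the version string here is illustrative and will differ on your host; the suffix in parentheses is the part that matters):

cr0x@server:~$ iptables --version
iptables v1.8.9 (nf_tables)
cr0x@server:~$ iptables-legacy --version
iptables v1.8.9 (legacy)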
Production takeaway: when firewall rules “don’t apply,” you are usually dealing with one of these:
- Rules are being installed into nftables, but you’re inspecting iptables-legacy (or the other way around).
- Rules are installed, but applied in a hook/chain you didn’t expect (bridge vs routed path; raw/mangle/filter priorities).
- Conntrack is letting existing flows continue, making your new rule look useless.
- Another firewall manager (ufw, firewalld, docker, kube-proxy, a “helpful” script) is rewriting tables after Proxmox applies its rules.
- Cluster config differences: the GUI edits one node, another node applies different local settings.
A firewall is like a meeting: half the drama is who got to speak first. (Joke 1/2.)
Interesting facts and historical context (so the weirdness makes sense)
- Netfilter is in-kernel; iptables and nftables are user-space control planes. Both ultimately program the same kernel hooks, but with different rule representations and semantics.
- nftables was designed to replace iptables and reduce duplication across IPv4/IPv6/ARP tables, adding sets, maps, and a cleaner syntax.
- Debian switched the default iptables implementation to “iptables-nft”. That means iptables manipulates nftables rules unless you explicitly select iptables-legacy.
- iptables-legacy and nftables can run at the same time without obvious errors. That’s not a feature; it’s a debugging tax.
- Bridge filtering is special. Packets traversing Linux bridges can be filtered via br_netfilter and sysctls like net.bridge.bridge-nf-call-iptables, which changes what “INPUT” and “FORWARD” mean in practice.
- Conntrack default behavior surprises people. New drops don’t necessarily kill established connections unless you explicitly block or reset them; you often need to flush conntrack for a clean test.
- Docker historically used iptables heavily and can insert NAT/FORWARD rules that trump local expectations—especially when mixed with nftables frontends.
- Rule order and chain priority matter more in nftables because multiple base chains can hook the same stage with explicit priorities (see the sketch right after this list).
- Proxmox firewall is host-centric but also manages VM-level rules; those may be implemented via iptables/nftables on the host, not inside the guest, which surprises people coming from “firewall runs where the app runs.”
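That priority point deserves a concrete picture. Here is a minimal sketch, with made-up table names, of two base chains hooking the same stage: the packet traverses both in priority order, and a drop anywhere on the hook is final even if another chain already said accept.

table inet earlybird {
    chain input {
        # lower priority number = evaluated earlier on the input hook
        type filter hook input priority -10; policy accept;
        tcp dport 22 accept    # ends THIS chain only; the packet still visits other base chains
    }
}

table inet latecomer {
    chain input {
        type filter hook input priority 0; policy accept;
        tcp dport 22 drop      # final verdict: SSH is dropped despite the earlier accept
    }
}

This is why “my accept rule is there, I can see it” proves very little until you know every base chain registered on that hook.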
One reliability idea I still like, paraphrased as “hope is not a strategy,” is widely attributed to operations thinking. Treat firewall state as something you measure, not something you feel.
Fast diagnosis playbook
When Proxmox firewall rules don’t apply, you want the shortest path to the first real clue. Here’s the order that wins in real incidents.
1) Confirm what backend you’re actually using (nft vs legacy)
- If the host uses nftables (nft list ruleset shows active chains), stop staring at iptables -S output from the wrong backend.
- If both are present, decide which one is authoritative and remove the ambiguity.
2) Confirm Proxmox firewall service state and last apply
- Is pve-firewall running and applying rules without errors?
- Are you editing the right scope (Datacenter vs node vs VM)?
3) Confirm packets traverse the path you think they do
- Bridge path vs routed path changes chains and hooks.
- Check the br_netfilter sysctls and confirm the right interface is being filtered.
4) Check for rule churn by other actors
- ufw/firewalld, docker, kube-proxy, custom scripts.
- Look for timestamps in logs and rule deltas.
5) Make your test valid
- Flush conntrack (carefully) or test with new connections.
- Use counters/logging to prove the rule is hit.
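If you want the whole playbook as one paste-able block, this is roughly it; a sketch, not a canonical script, and the exact output formats will vary by version:

pve-firewall status                                   # is Proxmox trying to enforce anything?
update-alternatives --display iptables | grep points  # nft frontend or legacy?
nft list ruleset | head -n 40                         # what the kernel actually enforces
iptables-legacy-save 2>/dev/null | head -n 20         # anything lingering in the legacy backend?
sysctl net.bridge.bridge-nf-call-iptables             # does bridged traffic hit netfilter at all?
journalctl -u pve-firewall -n 20 --no-pager           # apply errors or rule churn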
How Proxmox firewall works under the hood
Proxmox’s firewall is not a magic packet shield. It’s a rule generator and lifecycle manager. You declare intent in the GUI or in config files under /etc/pve, and pve-firewall translates that into kernel rules.
The rules are typically applied on the host, affecting:
- Host traffic (management plane): SSH, GUI, cluster traffic, storage traffic.
- VM and container traffic: depending on bridge/routing and firewall settings, the host filters traffic crossing bridges or tap interfaces.
Proxmox also has multiple scopes:
- Datacenter firewall: global baseline rules; good for “deny by default” patterns.
- Node firewall: host-specific rules (think IPMI access, local admin networks).
- VM/CT firewall: per-guest allow/deny, useful when teams share the same L2 domain and you want segmentation without re-architecting the network.
If rules “don’t apply,” one common reason is simply that the firewall is enabled in one scope but not the others, or enabled but not on the actual interface used by the traffic (a classic with multiple bridges).
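On disk, the scopes roughly map to separate files, each with its own enable flag; a sketch of the layout (the subnet, node name, and VMID are placeholders):

# Datacenter scope: /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 10.0.0.0/24    # allow SSH from the admin network

# Node scope:  /etc/pve/nodes/<nodename>/host.fw
# VM/CT scope: /etc/pve/firewall/<vmid>.fw (e.g. 100.fw)
# Each scope carries its own [OPTIONS] enable flag, and VM NICs additionally need
# firewall=1 set on the network device: a classic "enabled everywhere except where
# the packets actually are" miss.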
iptables vs nftables: the conflict patterns that break Proxmox rules
Conflict pattern 1: “iptables shows nothing, so there are no rules”
If iptables is the nft frontend, it will show you an iptables-compatible view of rules stored in nftables. But if you’re running iptables-legacy (or your scripts are), you can end up with two different realities:
- Proxmox applies rules via nftables (directly or via iptables-nft)
- Your debug command reads legacy tables
- You conclude “no rules,” and start “fixing” the wrong thing
Conflict pattern 2: Both nftables and legacy iptables are active
This is the most expensive failure mode because it can be half-working. Some traffic is filtered; other traffic is not. NAT behaves inconsistently. A reboot changes behavior depending on which service starts first.
If you want a stable system, pick one. In modern Proxmox on Debian, that usually means nftables as the kernel mechanism, with iptables-nft compatibility if needed. But don’t let iptables-legacy rules linger like old change requests no one wants to own.
Conflict pattern 3: Bridge traffic doesn’t hit the chains you expect
In Proxmox, VM traffic often crosses Linux bridges (vmbr0) and tap devices (tapX) or veth interfaces (containers). Whether bridge traffic gets evaluated by iptables/nftables depends on br_netfilter and sysctls.
If those sysctls are off, your “FORWARD drop” rule might not see bridged packets. You’ll swear the firewall is broken; Linux will politely continue bridging at L2 like you asked it to.
Conflict pattern 4: Conntrack makes you think your new rule didn’t work
You block port 443, but your existing curl session continues. That’s conntrack. Many rules match ct state established,related early, which is correct for performance and for not breaking long-lived sessions—but it confuses testing.
Conflict pattern 5: Another component rewrites rules after Proxmox
The usual suspects: docker, kube-proxy, ufw, firewalld, custom scripts in cron or systemd timers, and some network automation agents. They don’t care about your GUI. They care about their own desired state.
If rules “apply” and then revert, this is your first suspect. Firewall tools are like toddlers: if two of them share the same toys, one will cry. (Joke 2/2.)
Hands-on tasks: commands, outputs, and decisions (12+)
These are the checks I run when someone says “Proxmox firewall rules don’t apply.” Each one includes a realistic output and what decision you make from it.
Task 1: Check if Proxmox firewall is enabled at all scopes
cr0x@server:~$ pve-firewall status
Status: enabled
Running: yes
Log level: info
What it means: The service is running and the firewall is enabled. If you see Status: disabled, stop here—enable it in the GUI or config before chasing iptables ghosts.
Decision: If disabled, enable and re-test. If enabled, continue to rule backend checks.
Task 2: Confirm the pve-firewall service health and recent errors
cr0x@server:~$ systemctl status pve-firewall --no-pager
● pve-firewall.service - PVE Firewall
Loaded: loaded (/lib/systemd/system/pve-firewall.service; enabled)
Active: active (running) since Thu 2025-12-26 10:12:33 UTC; 2h 8min ago
Main PID: 1321 (pve-firewall)
Tasks: 1 (limit: 154850)
Memory: 21.4M
CPU: 4.102s
What it means: If this shows “failed” or rapid restarts, your rules may not be applied or may be applied partially.
Decision: If failing, inspect logs next and fix service/runtime errors before diving into nftables.
Task 3: Read pve-firewall logs for rule-apply failures
cr0x@server:~$ journalctl -u pve-firewall -n 60 --no-pager
Dec 26 10:12:33 server pve-firewall[1321]: starting PVE firewall
Dec 26 10:12:34 server pve-firewall[1321]: status update OK
Dec 26 10:12:35 server pve-firewall[1321]: ruleset applied
What it means: “ruleset applied” is your baseline. Errors about “iptables-restore failed” or “nft: syntax error” immediately narrow the search to tooling/backends.
Decision: If you see restore/apply errors, check the iptables/nftables backend and rule syntax next.
Task 4: Determine whether iptables is using nft backend or legacy backend
cr0x@server:~$ update-alternatives --display iptables
iptables - auto mode
link best version is /usr/sbin/iptables-nft
link currently points to /usr/sbin/iptables-nft
link iptables is /usr/sbin/iptables
/usr/sbin/iptables-legacy - priority 10
/usr/sbin/iptables-nft - priority 20
What it means: This host’s iptables command programs nftables. If your scripts call iptables-legacy explicitly, you may have a split-brain firewall.
Decision: Standardize: pick iptables-nft (common on Debian/Proxmox) unless you have a strong reason otherwise.
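Standardizing takes one update-alternatives call per tool; a sketch, assuming you are settling on the nft-backed frontends and want pve-firewall to re-apply immediately:

update-alternatives --set iptables /usr/sbin/iptables-nft
update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
update-alternatives --set ebtables /usr/sbin/ebtables-nft
update-alternatives --set arptables /usr/sbin/arptables-nft
systemctl restart pve-firewall     # force a clean re-apply against the chosen backend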
Task 5: Inspect nftables ruleset (the real source of truth on nft hosts)
cr0x@server:~$ nft list ruleset | sed -n '1,80p'
table inet filter {
chain input {
type filter hook input priority 0; policy accept;
ct state established,related accept
iifname "lo" accept
}
chain forward {
type filter hook forward priority 0; policy accept;
}
}
What it means: You have a minimal inet filter table with default accept. If Proxmox firewall is “enabled” but you see no Proxmox-generated chains/tables, either it’s not applying to nftables or another tool wiped it.
Decision: If Proxmox rules are missing, confirm whether Proxmox is configured to use nftables/iptables properly and whether a competing service is replacing rulesets.
Task 6: Inspect iptables rules (but ensure you know which backend you’re seeing)
cr0x@server:~$ iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
What it means: This tells you almost nothing alone. On iptables-nft, it’s a compatibility view. If it’s empty while nft shows rules, you’re probably mixing tools or looking at the wrong table family.
Decision: Use nft for authoritative inspection if your iptables is nft-backed. Use iptables-legacy only if you’ve deliberately standardized on legacy (rare and increasingly painful).
Task 7: Check whether iptables-legacy is also populated (split brain detection)
cr0x@server:~$ iptables-legacy -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N PVEFW-INPUT
-A INPUT -j PVEFW-INPUT
What it means: If legacy shows Proxmox chains but nft doesn’t, or vice versa, you have a backend mismatch. Traffic will follow the active kernel rules, not your preferred storytelling.
Decision: Align alternatives and re-apply Proxmox rules. Remove the other backend’s rules to avoid confusion.
Task 8: Confirm kernel modules and sysctls for bridge filtering
cr0x@server:~$ lsmod | grep br_netfilter || true
br_netfilter 32768 0
bridge 311296 1 br_netfilter
cr0x@server:~$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
What it means: Bridged packets will traverse netfilter hooks, allowing your host firewall to filter VM traffic on bridges.
Decision: If these are 0, bridged traffic may bypass expected hooks. Decide whether you want bridge filtering on (common for Proxmox VM firewalling) and set it persistently if needed.
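If you decide bridge filtering should be on, making it survive a reboot is two small files; a sketch (the file names are conventions, not requirements):

# load the module at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# keep the sysctls across reboots
cat > /etc/sysctl.d/99-bridge-nf.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system    # apply now without rebooting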
Task 9: Confirm which interfaces and bridges carry the traffic
cr0x@server:~$ ip -br link
lo UNKNOWN 00:00:00:00:00:00
eno1 UP 3c:ec:ef:11:22:33
vmbr0 UP 3c:ec:ef:11:22:33
tap100i0 UP fe:1a:2b:3c:4d:5e
What it means: If your rule is applied to vmbr1 but traffic flows via vmbr0, your rule “does nothing” because it never sees packets.
Decision: Map the service path: client → physical NIC → bridge → tap/veth → guest. Filter at the right boundary.
Task 10: Use counters to prove a rule is (or isn’t) being hit (nft)
cr0x@server:~$ nft -a list chain inet filter input
table inet filter {
chain input { # handle 1
type filter hook input priority 0; policy accept;
ct state established,related accept # handle 2
iifname "lo" accept # handle 3
}
}
What it means: If Proxmox-generated rules existed, you’d see them here, ideally with counter statements whose packet counts move while you test. If the rules are absent, you’re either looking at the wrong chain/table or they were never installed.
Decision: If you can’t find the rule, stop testing traffic and start finding who owns the ruleset.
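A temporary counting rule is the cheapest way to prove a flow is evaluated where you think it is. A sketch using the table/chain names from the example ruleset above and a made-up port; remove it once the counter has told you what you need:

# insert a counting rule at the top of the chain (8443 is just an example port)
nft insert rule inet filter input tcp dport 8443 counter

# open a fresh connection to 8443, then see whether packets/bytes moved
nft -a list chain inet filter input | grep 'dport 8443'

# clean up by handle once done (the handle number comes from the -a listing)
nft delete rule inet filter input handle <N>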
Task 11: Monitor rule churn in real time (catch the culprit)
cr0x@server:~$ watch -n 1 'nft list ruleset | sha256sum | cut -d" " -f1'
3b26b25bbf6b613c6b6be8e5d2f1c8d0f50e3b0c6c1b86e3e9c6a2d4b9c7e012
What it means: If the hash changes every few seconds or right after you apply Proxmox rules, something else is rewriting the ruleset.
Decision: Identify and disable the competing manager, or integrate it properly. “Two controllers” is not a design pattern.
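If the hash does change, the next question is what changed. A rough diff loop, sketched with temporary files and one-second resolution, answers that:

# capture a baseline, then print a timestamped diff whenever the ruleset changes
nft list ruleset > /tmp/ruleset.prev
while sleep 1; do
  nft list ruleset > /tmp/ruleset.now
  if ! diff -u /tmp/ruleset.prev /tmp/ruleset.now; then
    date                                   # mark when the rewrite landed
    mv /tmp/ruleset.now /tmp/ruleset.prev  # keep the newest state as the baseline
  fi
done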
Task 12: Identify other firewall managers and rule writers
cr0x@server:~$ systemctl list-unit-files | egrep 'ufw|firewalld|nftables|docker|kube|netfilter' || true
docker.service enabled
nftables.service enabled
pve-firewall.service enabled
What it means: If nftables.service is enabled and loads its own /etc/nftables.conf, it may override Proxmox’s desired state, depending on ordering and configuration.
Decision: Either let Proxmox be the owner (common on PVE hosts) or ensure nftables service is compatible and does not flush/replace tables Proxmox relies on.
Task 13: Check whether nftables service flushes rules on reload
cr0x@server:~$ sed -n '1,120p' /etc/nftables.conf
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
}
}
What it means: flush ruleset is a full wipe. If this runs after pve-firewall, your Proxmox rules will “not apply” because they’re being deleted.
Decision: Remove full flush, or disable nftables.service, or integrate Proxmox-managed tables explicitly. Don’t run two competing “authoritative” configs.
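The least surprising fix is usually to take nftables.service out of the picture and let pve-firewall own the ruleset; a sketch, assuming Proxmox is your chosen owner:

systemctl disable --now nftables.service
systemctl restart pve-firewall        # re-apply immediately instead of waiting for the next change
nft list ruleset | head -n 40         # confirm the Proxmox-generated tables are back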
Task 14: Confirm Proxmox cluster filesystem config matches expectation
cr0x@server:~$ grep -R "enable:" -n /etc/pve/firewall | head
/etc/pve/firewall/cluster.fw:2:enable: 1
/etc/pve/firewall/server.fw:2:enable: 1
What it means: The firewall enable flags are stored in cluster config. But local node quirks (alternatives, services, sysctls) can still differ and cause inconsistent enforcement.
Decision: If only one node misbehaves, compare backend selection and other services on that node first, not the GUI config.
Task 15: Check conntrack for “rule seems ignored” situations
cr0x@server:~$ conntrack -L 2>/dev/null | head
tcp 6 431999 ESTABLISHED src=10.10.10.50 dst=10.10.10.21 sport=51522 dport=443 src=10.10.10.21 dst=10.10.10.50 sport=443 dport=51522 [ASSURED] mark=0 use=1
What it means: Established flows may continue to pass if your rules accept established connections early (common and correct).
Decision: For testing, open a brand-new connection from a different source port or flush conntrack for specific tuples (more surgical than nuking everything).
Task 16: Flush a specific conntrack flow (surgical test)
cr0x@server:~$ conntrack -D -p tcp --orig-src 10.10.10.50 --orig-dst 10.10.10.21 --dport 443
1 flow entries have been deleted.
What it means: Now a new SYN must traverse your current rules. If the block works only after this, your rules were fine; your test was not.
Decision: Adjust your validation method and consider whether you need explicit resets/drops for established flows during security events.
Task 17: Verify policy routing / VRFs aren’t bypassing your expected interface
cr0x@server:~$ ip rule show
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
What it means: If you have additional rules here, traffic may take a different egress path than you expect, which changes which firewall rules are relevant (especially for OUTPUT and FORWARD).
Decision: If policy routing is present, ensure firewall rules match the actual path, not the diagram in someone’s slide deck.
Three corporate-world mini-stories (anonymized, accurate, mildly painful)
Mini-story 1: The incident caused by a wrong assumption
A mid-size company ran a Proxmox cluster supporting internal tooling: CI runners, artifact storage, and a handful of “temporary” VMs that had been temporary since the previous fiscal year. A security review flagged that several VMs were reachable on ports they shouldn’t be. The infra team responded fast: enable Proxmox firewall at the datacenter level, apply a default deny inbound to the VM subnet, then poke holes for SSH from the jump box.
They tested by running iptables -S and saw the expected chains. Great. They ran a port scan and still saw open ports. Not great. The conclusion, delivered with full confidence, was that Proxmox firewall “doesn’t work reliably” and the change was rolled back.
The wrong assumption: they were reading iptables-legacy rules, while actual filtering lived in nftables. The host had been upgraded in-place. Some early scripts pinned to legacy tools, and over time the node had drifted into split-brain: legacy tables contained the “right” rules, nftables was basically permissive, and live traffic was following nftables hooks. Their “verification” only verified a dead language.
Once they standardized alternatives to iptables-nft, disabled any legacy scripts, and forced pve-firewall to re-apply, the firewall behaved perfectly—because it had been fine all along. The incident wasn’t “Proxmox firewall is flaky.” The incident was “we were looking at the wrong control plane and then we trusted ourselves.”
The durable fix was boring: a node bootstrap check that fails the build if iptables alternatives differ across the cluster. That single guardrail prevented the exact same drama during the next upgrade window.
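That guardrail can be tiny. A hypothetical version of the check, suitable for a bootstrap pipeline or node health script (the expected path is the team’s agreed standard, not something Proxmox mandates):

#!/bin/bash
# Fail loudly if this node's iptables alternative drifted from the agreed nft frontend.
expected="/usr/sbin/iptables-nft"
actual="$(update-alternatives --query iptables | awk '/^Value:/ {print $2}')"
if [ "$actual" != "$expected" ]; then
  echo "iptables alternative drift: $actual (expected $expected)" >&2
  exit 1
fi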
Mini-story 2: The optimization that backfired
Another org ran multi-tenant dev environments on Proxmox. They wanted “faster firewall apply” because developers complained that provisioning a VM took too long. Someone noticed pve-firewall re-applied a lot of rules, especially under heavy config churn. The “optimization”: introduce an nftables base config with flush ruleset and a minimal allowlist, then let Proxmox add its own tables afterwards.
In a lab, it looked clean. In production, it created a race: nftables.service would reload during network events and wipe the ruleset. Proxmox would eventually re-apply, but “eventually” is not a security control. Between the flush and the re-apply, VMs were briefly exposed. Not for hours, but long enough that the logs were ugly and the auditors suddenly had opinions.
The backfire wasn’t just the flush. It was the existence of two competing authorities over the same kernel object. nftables.service believed it owned the truth. Proxmox believed it owned the truth. The kernel executed whichever truth was last written.
The team reverted to a single owner model: Proxmox generates firewall state; nftables.service is disabled; any baseline host hardening is expressed as Proxmox datacenter/node rules. “Fast apply” was achieved by reducing needless rule churn (fewer per-VM micro-rules, more use of sets and groups) rather than building a second control plane.
The lesson: if your optimization relies on flushing a live ruleset, you didn’t optimize; you introduced gambling with extra steps.
Mini-story 3: The boring but correct practice that saved the day
A regulated business ran three Proxmox clusters with identical roles. They had a change policy: before enabling firewall globally, they stage in “log-only” mode, validate counters, then enforce. People grumbled that it was slow and bureaucratic. It was also exactly why the rollout didn’t become a late-night incident.
In staging, they noticed something subtle: VM-to-VM traffic on the same bridge wasn’t hitting the expected filter path on one cluster. Counters didn’t move. Logs didn’t show drops. The rules were there—but irrelevant.
Because they were measuring before enforcing, they found that br_netfilter sysctls were off on that cluster. A well-meaning kernel tuning profile had disabled bridge netfilter years earlier to “reduce overhead.” The overhead reduction was real. So was the security bypass.
They fixed sysctls, confirmed counters incremented for test traffic, then proceeded with enforcement. No surprises, no “why is everything down,” and no emergency exceptions that live forever. The boring practice—instrument first, enforce second—saved them from shipping a placebo firewall.
Common mistakes: symptom → root cause → fix
- Symptom: Proxmox GUI shows firewall enabled; traffic is not blocked.
  Root cause: Rules applied to nftables, but you’re inspecting iptables-legacy (or vice versa).
  Fix: Standardize alternatives (iptables-nft recommended), then restart pve-firewall and validate with nft list ruleset.
- Symptom: Rules apply, then disappear after minutes or after a network reload.
  Root cause: nftables.service loads /etc/nftables.conf with flush ruleset, or another manager rewrites tables.
  Fix: Disable the competing service or remove the flush/replace behavior; choose a single owner for the ruleset.
- Symptom: VM firewall rules don’t affect VM-to-VM traffic on the same bridge.
  Root cause: Bridge netfilter sysctls disabled (net.bridge.bridge-nf-call-iptables=0).
  Fix: Enable br_netfilter and the sysctls persistently; re-test with counters/logging.
- Symptom: “I blocked port X, but my session is still open.”
  Root cause: Conntrack allows established flows; your rule affects new connections only.
  Fix: Test with new connections, or flush specific conntrack entries, or adjust rule order if you truly must kill established flows.
- Symptom: Only one node in the cluster enforces rules; others don’t.
  Root cause: Node drift: different alternatives (legacy vs nft), different sysctls, different enabled services.
  Fix: Compare nodes: alternatives, sysctls, loaded modules, enabled services; standardize via config management.
- Symptom: NAT/port forwards behave differently after enabling the firewall.
  Root cause: Mixed iptables/nft NAT tables; docker or other components inserting NAT rules; ordering differences.
  Fix: Consolidate NAT ownership; inspect the nft nat table and ensure Proxmox rules don’t conflict with container runtimes.
- Symptom: Host management access gets blocked unexpectedly.
  Root cause: Applying a “deny inbound” at datacenter level without explicit allows for SSH/GUI/cluster/storage networks, or applying to the wrong interface.
  Fix: Always stage log-only; add explicit allows for the management plane first; validate interface bindings.
- Symptom: Firewall logs show drops, but traffic still passes.
  Root cause: You’re logging one path (e.g., INPUT) while traffic flows through FORWARD/the bridge path, or the IPv6 path is different.
  Fix: Identify the actual hook by interface and direction; check inet family rules; ensure IPv6 is included if used (a quick IPv6 sanity check follows this list).
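A quick way to check whether IPv6 is quietly in play while you only test IPv4; illustrative commands, nothing Proxmox-specific:

ss -6 -tlnp | head                 # anything listening on IPv6?
ip -6 addr show | grep -v fe80     # global IPv6 addresses beyond link-local?
nft list ruleset | grep -c ip6     # rules that explicitly mention IPv6? (inet-family rules cover both)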
Checklists / step-by-step plan
Checklist A: Standardize a Proxmox node to one firewall backend
- Pick your backend: in modern Proxmox/Debian, choose iptables-nft/nftables unless a vendor requirement forces legacy.
- Check alternatives for iptables/ip6tables/ebtables/arptables.
- Remove or disable legacy-only scripts that call iptables-legacy.
- Disable competing firewall managers (ufw/firewalld) unless you integrate them deliberately.
- Restart pve-firewall and verify the active rules via nft.
Checklist B: Validate that VM traffic actually hits netfilter
- Confirm the bridge path: traffic uses the vmbrX and tap/veth devices you expect.
- Ensure the br_netfilter module is loaded.
- Enable bridge sysctls for iptables/ip6tables if you rely on host-level filtering for bridged traffic.
- Use counters/log rules to prove packets match.
Checklist C: Test firewall changes without fooling yourself
- Use a new TCP connection (new source port) or a different client (see the sketch after this checklist).
- Check counters for the specific rule/chain.
- If needed, delete specific conntrack entries for clean re-tests.
- Validate both IPv4 and IPv6 behavior; “works on v4” isn’t the same as “secure.”
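One concrete way to run that kind of honest test; the target address and ports are placeholders, and --local-port simply forces a brand-new conntrack tuple:

# open a genuinely new connection from an explicit, previously unused source port
curl --local-port 42000 --max-time 3 http://10.10.10.21:8443/ ; echo "exit=$?"

# then confirm the counter on the matching rule actually moved
nft -a list chain inet filter input | grep 'dport 8443'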
Step-by-step remediation plan (the one I’d run in production)
- Freeze rule writers: temporarily stop/disable non-Proxmox firewall managers (nftables.service, ufw, firewalld) to prevent churn while diagnosing.
- Confirm the backend: check update-alternatives --display iptables and agree on nft vs legacy across all nodes.
- Inspect the active ruleset: use nft list ruleset and ensure Proxmox-generated structures appear and counters increment under test.
- Fix bridging visibility: ensure br_netfilter and its sysctls align with your filtering model.
- Re-enable only what you need: let Proxmox own firewall state; reintroduce other tooling only if it’s strictly necessary and non-overlapping.
- Lock in consistency: codify alternatives/sysctls/service enablement in config management; add a drift check in your node health validation.
FAQ
1) Why do Proxmox firewall rules “not apply” even though the GUI says enabled?
Because the GUI is a configuration interface, not the kernel. The most common cause is backend mismatch: rules are applied to nftables while you inspect legacy iptables, or another service flushes/overwrites rules after Proxmox applies them.
2) Is nftables better than iptables for Proxmox?
In 2025: yes, operationally, because it’s the modern default on Debian-based systems and avoids legacy tooling drift. The wrong answer is “both,” unless you enjoy debugging split-brain rule stacks.
3) Can iptables and nftables run at the same time?
Unfortunately, yes. You can have iptables-legacy rules and nftables rules simultaneously. That’s how you get “I see the rule but it doesn’t work.” Pick one backend and standardize.
4) How do I know whether iptables is using nftables under the hood?
Use update-alternatives --display iptables. If it points to /usr/sbin/iptables-nft, your iptables commands are programming nftables-compatible rules.
5) My VM-to-VM traffic isn’t filtered. Is Proxmox firewall broken?
Usually not. VM-to-VM on the same bridge is L2 switching behavior. If bridge netfilter sysctls are disabled, netfilter won’t see those packets. Enable br_netfilter and the relevant sysctls if your security model relies on host filtering.
6) Why did blocking a port not disconnect existing sessions?
Conntrack. Many rules accept established,related early for performance and stability. Your new block affects new connections. Test with a new connection or delete the specific conntrack entry if you need a clean validation.
7) Should I run nftables.service on a Proxmox host?
If Proxmox firewall is your authoritative manager, typically no. Running nftables.service with a config that flushes or replaces rules will conflict. If you must run it, ensure it does not wipe or override Proxmox-managed tables and that startup ordering is deterministic.
8) Why does it work on one node but not another in the same cluster?
Cluster config may be identical while local node state differs: alternatives selection, enabled services, sysctls, kernel modules, or third-party agents. Compare node-local settings; don’t assume the cluster makes everything uniform.
9) Do I need to care about IPv6 if my network is “IPv4 only”?
If IPv6 is enabled on interfaces, services may bind on IPv6 and become reachable. Either manage IPv6 rules explicitly (preferred) or disable IPv6 deliberately and consistently. “We don’t use IPv6” is often just “we don’t monitor IPv6.”
Conclusion: next steps that actually stick
When Proxmox firewall rules don’t apply, treat it like any other production reliability problem: establish the actual control plane, confirm ownership, and prove packet path with counters—not vibes.
Practical next steps:
- Decide your firewall backend (nftables/iptables-nft is the sane default) and standardize update-alternatives across all nodes.
- Make Proxmox the single source of truth for host/VM filtering, unless you have a carefully engineered reason to split ownership.
- Audit and remove rule churn: disable nftables.service/ufw/firewalld on PVE hosts unless integrated; watch for docker/kube rule insertion.
- Validate bridging behavior with br_netfilter sysctls and real counters, especially for VM-to-VM traffic.
- Improve your testing: new connections, conntrack awareness, and explicit verification of IPv4/IPv6.
Do this once, write it down, and your future self won’t have to “rediscover” the difference between a rule existing and a rule being enforced.