Debian 13: iptables vs nftables conflict — stop the silent firewall war

You deploy a clean Debian 13 host, open the one port you need, and everything works. Then a week later,
an innocuous package upgrade lands, a container runtime updates, or someone “just adds a quick iptables rule,”
and suddenly your firewall becomes a haunted house. Traffic drops. Logs lie. Half the commands show nothing.

What’s happening is not supernatural. It’s two front-ends (iptables and nftables) trying to manage the same
Netfilter hooks, often through compatibility shims, sometimes directly, and occasionally with a third party
(Docker, Kubernetes, UFW, firewalld) randomly picking a side. The result is a silent firewall war: rules exist,
but not where you’re looking.

The actual problem: one kernel, multiple control planes

On Debian 13, you can have:

  • nftables managing rules directly via nft and nftables.service
  • iptables commands that might be:
    • iptables-nft: the iptables CLI translating into nft rules
    • iptables-legacy: the old iptables backend using legacy kernel interfaces
  • One or more higher-level managers: UFW, firewalld, Docker, Kubernetes, Podman, fail2ban

None of these are inherently “bad.” The failure mode happens when you assume they’re all looking at the same
rule database. They’re not. Sometimes they’re looking at different backends. Sometimes they’re writing to the same
backend but into different tables/chains with priorities you didn’t anticipate. Sometimes your rules are fine and the
packets never reach them because offload, bridging, policy routing, or conntrack state changes the game.

The operational sin is letting two authorities manage firewall policy. Firewalls are like change control: you only
get one boss.

Facts and history: why we ended up here

A few concrete facts help you debug faster, because they explain why “iptables says X” can be meaningless on a modern system.

  1. Netfilter is the kernel framework; iptables and nftables are user-space front-ends. The fight is mostly in user space, but the pain is in the packet path.
  2. nftables was introduced to replace iptables. It consolidates IPv4/IPv6, supports sets/maps more naturally, and reduces rule explosion in common cases.
  3. iptables-nft exists so legacy tooling still works. It translates iptables commands into nftables rules, but the translated rules don’t always look the way you expect when you inspect them with nft.
  4. iptables-legacy can still be installed and selected. On a host where some components use legacy and others use nft, you’re guaranteed confusion and occasional rule divergence.
  5. UFW historically spoke iptables. Some environments bolt UFW onto an nftables system and assume it becomes “the firewall.” Sometimes it is. Sometimes it is a second firewall.
  6. firewalld moved toward nftables backends. But not every version and not every distribution defaults the same way, and “what backend is active” matters.
  7. Docker popularized automatic iptables programming. It will install NAT and filter rules unless told not to, and it expects iptables semantics even when the backend is nft.
  8. Kubernetes traditionally assumes iptables availability. Many CNI plugins still manipulate iptables rules; on nft backends it’s usually fine—until it isn’t.
  9. nftables chain priority is a first-class concept. In iptables, the built-in chain order is mostly fixed. In nftables, you can create multiple chains on the same hook with priorities; that’s powerful and a great way to shoot yourself in the foot.

Here’s the one reliability quote worth keeping on a sticky note:

“Hope is not a strategy.” — a traditional SRE saying

Fast diagnosis playbook (do this first)

When packets are dropping or ports look “open but not reachable,” don’t start by editing rules. Start by proving who owns the firewall and where packets are being evaluated.

1) Identify the active firewall control plane

  • Check whether nftables is loaded and whether the service is enabled/active.
  • Check which iptables backend your system is using (nft vs legacy).
  • Check whether Docker/Kubernetes/UFW/firewalld are installed and manipulating rules.

2) Inspect the kernel’s effective rules, not your assumptions

  • Dump the full nft ruleset.
  • List iptables rules with counters.
  • Look for duplicate or conflicting chains (especially NAT) and unexpected default policies.

3) Validate packet flow with counters and targeted captures

  • Use counters (iptables -v, nft counter).
  • Use conntrack to check state.
  • Use tcpdump at ingress and egress to see where it dies.

4) Only then change policy

  • Pick one manager and one backend.
  • Remove or disable the other managers.
  • Make the policy persistent, and test reboot behavior.

Joke #1: Firewalls are like office kitchens—everyone thinks they’re “just helping,” and then nothing works and it smells weird.

A practical mental model: iptables, nftables, and the kernel

The kernel has a packet path with well-defined hooks: ingress, prerouting, input, forward, output, postrouting (simplifying slightly).
Netfilter provides these hooks. User-space programs create rules that attach to those hooks. The important part: there can be multiple
rule sets and multiple chains attached to the same hook, and they run in order.

iptables (classic) is organized into tables (filter, nat, mangle, raw, security) and built-in chains (INPUT, OUTPUT, FORWARD, PREROUTING, POSTROUTING).
nftables is more flexible: you create tables and chains, then declare which hook and priority each chain uses.
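
A minimal sketch of that flexibility (the table and chain names here are illustrative, not a recommended policy):

table inet example {
  chain early {
    # a second chain on the same hook; priority -10 runs before priority 0
    type filter hook input priority -10; policy accept;
  }
  chain inbound {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
  }
}

Both chains see inbound packets, evaluated in ascending priority order; a drop verdict in either one is final.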

On Debian 13, the most common operational reality is:

  • Your iptables command is actually iptables-nft by default, writing to nftables behind the scenes.
  • nft list ruleset is the ground truth for the rules in effect (if you are on nft backend).
  • If you accidentally flip to iptables-legacy for some component, you can have “two different firewalls,” both reporting success.

The conflict pattern looks like this:

  • You run iptables -S and see “nothing scary.”
  • You run nft list ruleset and see a novel-length policy you didn’t write.
  • Or worse: iptables shows a policy, nft shows a different one, and packets obey whichever backend is actually hooked.

Hands-on tasks: commands, outputs, decisions

The tasks below are designed to be run on Debian 13 hosts during an incident, during a migration, or during a sanity check.
Each one includes: a command, what typical output means, and the decision you make next. Don’t skim the decisions; that’s where outages stop.

Task 1 — Identify which iptables backend is selected

cr0x@server:~$ update-alternatives --display iptables
iptables - auto mode
  link best version is /usr/sbin/iptables-nft
  link currently points to /usr/sbin/iptables-nft
  link iptables is /usr/sbin/iptables
  slave iptables-restore is /usr/sbin/iptables-restore
  slave iptables-save is /usr/sbin/iptables-save
/usr/sbin/iptables-legacy - priority 10
/usr/sbin/iptables-nft - priority 20

What it means: If “currently points to iptables-nft,” then iptables commands are translating to nftables.
If it points to iptables-legacy, you’re using the legacy backend.

Decision: Pick one backend across the host. For Debian 13, prefer iptables-nft unless you have a hard requirement for legacy.
Mixed backends are how you get phantom rules.

Task 2 — Check if nftables service is enabled and active

cr0x@server:~$ systemctl status nftables --no-pager
● nftables.service - nftables
     Loaded: loaded (/lib/systemd/system/nftables.service; enabled; preset: enabled)
     Active: active (exited) since Sun 2025-12-28 10:41:12 UTC; 2h 3min ago
       Docs: man:nft(8)
             man:nftables(8)
    Process: 512 ExecStart=/usr/sbin/nft -f /etc/nftables.conf (code=exited, status=0/SUCCESS)
   Main PID: 512 (code=exited, status=0/SUCCESS)

What it means: “active (exited)” is normal; it loads rules then exits. “enabled” means rules will load on boot from /etc/nftables.conf.

Decision: If nftables is enabled, treat /etc/nftables.conf as the desired source of truth—or disable nftables if you’re delegating to another manager.

Task 3 — Dump the effective nft ruleset (ground truth)

cr0x@server:~$ nft list ruleset
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport { 22, 443 } accept
    counter packets 10234 bytes 811234 drop
  }
}

What it means: The effective policy for inet filter input is drop, with explicit accept rules.
The counter on the drop rule tells you packets are hitting it.

Decision: If your expected port is not present here, stop editing iptables scripts and fix the nft ruleset or the manager that generates it.

Task 4 — See iptables rules with counters (even on nft backend)

cr0x@server:~$ iptables -L -n -v
Chain INPUT (policy DROP 120 packets, 9600 bytes)
 pkts bytes target     prot opt in  out source      destination
 8123  640K ACCEPT     all  --  lo  *   0.0.0.0/0   0.0.0.0/0
  553 42012 ACCEPT     tcp  --  *   *   0.0.0.0/0   0.0.0.0/0   tcp dpt:22
  102  6120 ACCEPT     tcp  --  *   *   0.0.0.0/0   0.0.0.0/0   tcp dpt:443

What it means: If this output matches the nft ruleset, you’re likely on iptables-nft and seeing translated views.
If it does not match nft output, you may have a split-brain backend situation.

Decision: If nft and iptables disagree, immediately check update-alternatives for both iptables and ip6tables, and audit who’s installing rules.

Task 5 — Verify ip6tables backend too (IPv6 is where surprises hide)

cr0x@server:~$ update-alternatives --display ip6tables
ip6tables - auto mode
  link best version is /usr/sbin/ip6tables-nft
  link currently points to /usr/sbin/ip6tables-nft
  link ip6tables is /usr/sbin/ip6tables
  slave ip6tables-restore is /usr/sbin/ip6tables-restore
  slave ip6tables-save is /usr/sbin/ip6tables-save
/usr/sbin/ip6tables-legacy - priority 10
/usr/sbin/ip6tables-nft - priority 20

What it means: You can be nft for IPv4 and legacy for IPv6 if someone “fixed one thing.” That’s not a fix; it’s a time bomb.

Decision: Align IPv4 and IPv6 backends. If you’re not ready for IPv6 policy, explicitly drop or explicitly allow—don’t “forget it exists.”

Task 6 — Find who is managing firewall rules (package-level)

cr0x@server:~$ dpkg -l | egrep 'nftables|iptables|ufw|firewalld|fail2ban|docker|kube|podman' | awk '{print $1,$2,$3}'
ii nftables 1.0.9-1
ii iptables 1.8.10-2
ii ufw 0.36.2-2
ii docker.io 26.0.2+dfsg1-1
ii fail2ban 1.1.0-1

What it means: The host has multiple potential writers: UFW, Docker, fail2ban. Any of them can add or reload rules.

Decision: Choose one “policy author.” If you want nftables-native policy, configure Docker/fail2ban accordingly or constrain them. Otherwise accept that they own parts of the ruleset.

Task 7 — Check if Docker is programming iptables

cr0x@server:~$ grep -R "iptables" /etc/docker/daemon.json 2>/dev/null || echo "no daemon.json iptables setting"
no daemon.json iptables setting

What it means: Default behavior is typically “Docker manipulates iptables.” Whether that’s acceptable depends on your model.

Decision: If you need strict nftables control, consider setting Docker "iptables": false and implementing your own NAT/forwarding rules. If you don’t have time to do that safely, don’t half-do it.
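
For context, the switch itself is tiny; a sketch of /etc/docker/daemon.json with Docker’s rule programming disabled (restart the daemon after editing):

{
  "iptables": false
}

After this, published ports and container NAT stop working until you supply equivalent forwarding and masquerade rules yourself. That is the whole point, and the whole risk.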

Task 8 — Inspect nftables for Docker-created tables/chains

cr0x@server:~$ nft list tables
table inet filter
table ip nat
table ip filter

What it means: Seeing table ip nat and table ip filter is normal, but if you didn’t create them, something else did.
Docker commonly injects NAT rules.

Decision: If external tooling injects rules, document it and monitor it. The worst state is “unknown automation.”

Task 9 — Check chain priorities and hooks (nftables ordering issues)

cr0x@server:~$ nft -a list chain inet filter input
table inet filter {
  chain input { # handle 5
    type filter hook input priority 0; policy drop;
    ct state established,related accept # handle 12
    iif "lo" accept # handle 13
    tcp dport 22 accept # handle 14
    counter packets 10234 bytes 811234 drop # handle 20
  }
}

What it means: This chain attaches to the input hook at priority 0. If multiple chains attach to the same hook with different
priorities, you need to know the order: chains with lower priority numbers run first, and a drop there prevents later accepts from ever running.

Decision: If multiple chains share a hook, consolidate or document priorities. “Two chains on input hook” is not automatically wrong; it is automatically suspicious.
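
Here’s a sketch of the suspicious case (table names hypothetical): the earlier chain’s drop wins and the later accept never runs:

table inet mgmt {
  chain input_early {
    type filter hook input priority -50; policy accept;
    tcp dport 443 drop     # evaluated first: -50 runs before 0
  }
}
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    tcp dport 443 accept   # never reached for port 443
  }
}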

Task 10 — Prove packets hit the expected rule (counters)

cr0x@server:~$ nft list ruleset | sed -n '1,120p'
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport 443 counter packets 0 bytes 0 accept
    counter packets 10234 bytes 811234 drop
  }
}

What it means: If your 443 accept counter stays at 0 while clients fail to connect, then either traffic isn’t arriving,
it’s arriving on a different interface or address family, it’s being dropped earlier (an earlier-priority chain, the raw table, the ingress hook), or routing is steering replies somewhere else.

Decision: Move to packet capture and routing checks; stop editing allow rules blindly.

Task 11 — Confirm routing and local listening (not a firewall problem)

cr0x@server:~$ ss -lntp | egrep ':(22|443)\s'
LISTEN 0      4096         0.0.0.0:22        0.0.0.0:*    users:(("sshd",pid=710,fd=3))
LISTEN 0      4096         0.0.0.0:443       0.0.0.0:*    users:(("nginx",pid=1188,fd=7))

What it means: Services are actually listening. If this is empty, the firewall is innocent and you’re hunting the wrong animal.

Decision: If not listening, fix the service. If listening, continue with firewall/routing checks.

Task 12 — Packet capture: does traffic reach the interface?

cr0x@server:~$ sudo tcpdump -ni eth0 tcp port 443 -c 5
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:55:12.112233 IP 203.0.113.10.52344 > 192.0.2.20.443: Flags [S], seq 123456789, win 64240, options [mss 1460,sackOK,TS val 1 ecr 0,nop,wscale 7], length 0
10:55:13.112244 IP 203.0.113.10.52344 > 192.0.2.20.443: Flags [S], seq 123456789, win 64240, options [mss 1460,sackOK,TS val 2 ecr 0,nop,wscale 7], length 0

What it means: SYNs arrive. If you don’t see SYN-ACKs leaving, it’s likely local drop, local routing, or service not responding.

Decision: If SYN arrives, check nft/iptables counters and policy again, plus conntrack. If SYN does not arrive, you’re in network land (upstream ACLs, routing, security groups, etc.).
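
A quick sketch for watching the return direction; the filter matches packets with both SYN and ACK flags set:

cr0x@server:~$ sudo tcpdump -ni eth0 'tcp port 443 and tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)' -c 5

SYNs arriving with no SYN-ACKs leaving means the drop or the failure is local to this host.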

Task 13 — Inspect conntrack state for a stuck flow

cr0x@server:~$ sudo conntrack -L | head
tcp      6 431999 ESTABLISHED src=203.0.113.10 dst=192.0.2.20 sport=52344 dport=443 src=192.0.2.20 dst=203.0.113.10 sport=443 dport=52344 [ASSURED] mark=0 use=1

What it means: Conntrack sees established flows. If you see lots of SYN_SENT or TIME_WAIT churn,
it hints at drops, asymmetric routing, or application refusal.

Decision: If conntrack shows established but app still fails, suspect application or MTU issues. If conntrack shows SYNs without completion, suspect firewall or upstream.

Task 14 — Check for a policy drop you didn’t mean (nftables default policy)

cr0x@server:~$ nft list chain inet filter input | sed -n '1,60p'
chain input {
  type filter hook input priority 0; policy drop;
  ct state established,related accept
  iif "lo" accept
  tcp dport { 22 } accept
}

What it means: Default drop and no accept for your port. That’s not a bug; it’s exactly what you asked for, or what someone asked for on your behalf.

Decision: Add the port in the correct control plane. If a manager generates this file, do not edit it by hand unless you like surprises at reboot.
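
If /etc/nftables.conf is that control plane, a sketch of the safe edit loop (Debian’s default file starts with flush ruleset, so loading it replaces the whole policy atomically):

cr0x@server:~$ sudo nft -c -f /etc/nftables.conf   # -c checks syntax without applying anything
cr0x@server:~$ sudo nft -f /etc/nftables.conf      # apply the full file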

Task 15 — Detect legacy rules still present (split-brain suspicion)

cr0x@server:~$ iptables-legacy -L -n -v 2>/dev/null | head
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in  out source      destination

What it means: If legacy shows a different policy than your active nft-based system, you likely have tooling writing to different backends,
or a leftover configuration that will bite during a failover or reboot.

Decision: If you are standardizing on nft, purge or neutralize legacy usage (alternatives and services). If you must stay legacy, stop using nftables service.

Task 16 — Audit who last touched firewall state (logs)

cr0x@server:~$ journalctl -u nftables -u docker -u ufw --since "today" --no-pager | tail -n 30
Dec 28 09:01:10 server systemd[1]: Starting nftables...
Dec 28 09:01:10 server nft[512]: /etc/nftables.conf:23:3-45: Warning: deprecated syntax
Dec 28 09:01:10 server systemd[1]: Finished nftables.
Dec 28 09:14:22 server dockerd[844]: time="2025-12-28T09:14:22.012345678Z" level=info msg="Loading containers: done."
Dec 28 09:14:22 server dockerd[844]: time="2025-12-28T09:14:22.112345678Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16."

What it means: nftables loaded at boot; Docker later initialized and likely installed rules too. That ordering can matter.

Decision: If a post-boot service overwrites or appends firewall rules, you must either control its behavior or bake its expectations into your policy.

Three corporate mini-stories from the trenches

Mini-story #1: The incident caused by a wrong assumption

A mid-sized company standardized on Debian for edge reverse proxies. They had a simple rule: “Everything is nftables now.”
Someone built a clean nft policy, put it in /etc/nftables.conf, and enabled the service. It passed staging.
The rollout was calm enough that people started using the word “boring” as a compliment.

A few months later, a new team added an internal scanning agent that “requires iptables.” They installed a package that pulled in iptables tooling
and a helper script that ran iptables -I INPUT -p tcp --dport 9100 -j ACCEPT on boot, because that’s what it always did on older images.
Nobody thought twice. They saw the rule in iptables -L. They even documented it in the runbook.

Then the first real incident: some nodes randomly refused connections to the exporter port, but only after reboots.
During the incident call, people kept repeating: “But iptables shows the accept rule.” That was true—on those nodes, iptables had flipped to legacy due to an alternatives change.
Meanwhile nftables had loaded a default-drop input chain and never saw the iptables rule.

The outage wasn’t dramatic. No customer-facing errors. Just missing metrics, then bad autoscaling decisions, then capacity alarms.
The punchline was organizational: the platform team thought “iptables is a legacy view into nftables.” The app team thought “iptables is the firewall.”
Both were wrong, just in different ways.

The fix was blunt: they banned ad-hoc iptables usage on those hosts, pinned alternatives to iptables-nft, and wrote one documented method to open ports:
a change to the nftables repo, reviewed like code. They also added a CI check that compares nft list ruleset against expected chain hooks and default policies.

Mini-story #2: The optimization that backfired

Another shop ran a busy container platform on Debian hosts. They were chasing latency, and one engineer suggested “simplifying firewall layers”
by turning off Docker’s iptables programming and managing everything in nftables directly. On paper, it looked clean: one policy file, no surprises,
no random chains appearing.

They flipped Docker’s "iptables": false setting and rolled it gradually. It worked in development. It worked in staging.
Then production started reporting intermittent “can’t reach external service” errors for some containers, especially after node rotation.
The first instinct was DNS. The second was routing. The third was blaming the app.

The issue was NAT and forwarding. Docker had been quietly doing a lot of necessary plumbing: MASQUERADE rules, FORWARD accept rules for established flows,
and some bridge isolation behavior. Their handcrafted nftables policy covered the happy path but missed a few edge cases, including traffic from certain bridges
and hairpin connections. Conntrack made it worse by making failures look inconsistent, depending on state.

They recovered by re-enabling Docker’s iptables integration on production while they rebuilt the nft policy with explicit container networking requirements.
The “optimization” itself wasn’t wrong; the rollout plan was. They treated network policy like a config toggle instead of a system design change.

Lesson: if you disable a tool’s firewall automation, you inherit its responsibilities. That’s not inherently bad, but it’s never free.

Mini-story #3: The boring but correct practice that saved the day

A regulated enterprise ran Debian-based bastion hosts for admin access. The rules were strict: default drop, explicit allow lists, and a requirement that
every firewall change is traceable. Their security team insisted that the firewall policy be declarative, stored in a repo, and deployed by automation.
Everyone complained. Then they got used to it.

One day a maintenance reboot storm hit after a kernel update. Several teams reported “SSH is down” on a subset of bastions.
It smelled like a firewall issue, because everything else was healthy and the hypervisor networking looked fine.

The on-call pulled the host console, ran nft list ruleset, and compared it to the last known good policy from the repo.
It matched. Counters showed incoming SYNs were accepted. The firewall was not the culprit.
The issue ended up being a mis-ordered network interface rename that changed which interface matched the allow list rules.

Because the policy was declarative, versioned, and identical across hosts, they could rule out “random drift” in minutes.
That shortened the incident and prevented the classic mistake: people adding emergency allow rules that would later become permanent, undocumented holes.

The boring practice wasn’t the repo. It was the discipline to keep one source of truth and test it on reboot.
That’s how you keep a firewall from becoming folklore.

Stop the war: pick a control plane and migrate cleanly

You have three viable end states on Debian 13. Two are sane. One is a trap.

  • Sane end state A: nftables-native policy, managed by /etc/nftables.conf (or a generated file), with nftables service enabled. iptables used only as a view/compat (iptables-nft), not as an author.
  • Sane end state B: a higher-level manager (firewalld or UFW) is the single authority, and everything else is configured to cooperate. You still must know which backend it uses.
  • The trap: “A bit of nftables, plus some iptables rules, plus Docker does its thing, plus fail2ban adds a sprinkle.” That is not defense-in-depth. That is operational roulette.

What I recommend for most Debian 13 servers

If this is a server you operate as infrastructure (VMs, bare metal, edge, databases, load balancers), prefer nftables-native.
It’s consistent, inspectable, and scriptable. Then decide how you handle container tooling:

  • If the host runs containers and you want fast safety: allow Docker to manage its rules (with iptables-nft), and treat Docker as part of the firewall system. Document what it owns.
  • If the host is security-sensitive and you need strict policy: disable Docker iptables only if you have the time and expertise to recreate required NAT/forwarding safely.

How to “choose one backend” without drama

Your goal is: one backend (nft) and one author (nftables service or a single manager).
The most common cause of conflict is not that nftables exists—it’s that something still writes legacy rules or reloads them at odd times.

Switch iptables alternatives to nft (if you want nft backend)

cr0x@server:~$ sudo update-alternatives --set iptables /usr/sbin/iptables-nft
update-alternatives: using /usr/sbin/iptables-nft to provide /usr/sbin/iptables (iptables) in manual mode
cr0x@server:~$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
update-alternatives: using /usr/sbin/ip6tables-nft to provide /usr/sbin/ip6tables (ip6tables) in manual mode

Decision: After this, any iptables-based tooling should be writing nft rules through the compat layer, not legacy.
If you still see divergence, something is explicitly calling iptables-legacy.

Make nftables policy persistent (and explicit)

A minimal, explicit nftables file is better than a clever one. Clever firewalls are how people end up with “temporary” rules that last two years.
Debian typically loads /etc/nftables.conf if nftables service is enabled.

Also: if you’re migrating from iptables scripts, do not try to translate everything mechanically on day one. Start with:
established/related accept, loopback accept, required inbound ports, and required outbound policy. Then expand.
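
A minimal starter sketch in that spirit; the ports are placeholders, and the forward drop is deliberate (it will break routing and container duties until you extend it):

#!/usr/sbin/nft -f
flush ruleset

table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    ct state invalid drop
    iif "lo" accept
    tcp dport { 22, 443 } accept   # your required inbound ports
    icmp type echo-request accept
    icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert } accept
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
  }
  chain output {
    type filter hook output priority 0; policy accept;
  }
}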

Joke #2: Mixing iptables-legacy and nftables is like running two payroll systems: you will pay everyone, just not always the right amount.

Common mistakes: symptoms → root cause → fix

1) “iptables -L looks fine, but traffic is still blocked”

Symptoms: Port appears allowed in iptables output; clients still time out; nft counters show drops or rules you didn’t expect.

Root cause: iptables is using one backend (often legacy), while the effective rules are in nftables (or vice versa). Or you’re inspecting the wrong family (ip vs inet).

Fix: Check alternatives for iptables/ip6tables. Dump nft list ruleset. Standardize on iptables-nft or nft-only. Remove legacy usage and stop duplicate managers.

2) “nft list ruleset is empty after reboot”

Symptoms: Rules exist right after you apply them, then vanish after reboot; nftables service inactive/disabled.

Root cause: You loaded rules manually (interactive nft) but didn’t persist them, or nftables.service is disabled, or a manager overwrote rules after boot.

Fix: Enable nftables service and ensure /etc/nftables.conf is correct. Audit post-boot services like Docker/UFW/firewalld for reload behavior.
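
A sketch of the fix, assuming nftables-native ownership:

cr0x@server:~$ sudo systemctl enable --now nftables
cr0x@server:~$ sudo nft -c -f /etc/nftables.conf && echo "config parses"

Then reboot a canary and confirm nft list ruleset matches the file.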

3) “Docker networking broke when we ‘simplified’ the firewall”

Symptoms: Containers can’t reach outside networks or can’t accept inbound connections; random connectivity depending on node lifecycle.

Root cause: Disabled Docker’s iptables integration without replacing NAT/forwarding/bridge rules equivalently in nftables.

Fix: Re-enable Docker iptables until you have a tested nftables policy for container networking. Validate FORWARD chain behavior, NAT masquerade, and conntrack state.

4) “IPv4 works, IPv6 is wide open (or completely dead)”

Symptoms: Security scans show unexpected IPv6 exposure; or IPv6 clients fail while IPv4 works.

Root cause: Only configured iptables (v4) and forgot ip6tables/nft inet family; or backends differ between IPv4 and IPv6.

Fix: Use nft table inet for unified policy. Align alternatives for both iptables and ip6tables. Decide explicit IPv6 stance.

5) “Rules exist but never match”

Symptoms: Accept rules show zero counters; tcpdump sees traffic; clients fail anyway.

Root cause: Chain priority/hook mismatch; traffic is being handled in a different hook (ingress vs input), or dropped earlier in a chain with a lower priority number (which runs first), or you’re filtering on the wrong interface (renames, bonds, VLANs).

Fix: List chains with hooks and priorities. Consolidate chains for the same hook. Confirm interface names and consider matching by address/subnet when appropriate.

6) “Everything breaks only after fail2ban starts”

Symptoms: SSH brute force protection works, but legitimate users get blocked; firewall rules get huge over time.

Root cause: fail2ban writing to iptables while you manage nftables (or vice versa); bans persist inconsistently; backend mismatch causes stale or ineffective bans.

Fix: Configure fail2ban to use nftables actions when nftables is the chosen control plane, or isolate fail2ban to the same backend consistently.
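
A sketch for /etc/fail2ban/jail.local, assuming a fail2ban version that ships the nftables actions (1.x on Debian does):

[DEFAULT]
banaction = nftables-multiport
banaction_allports = nftables-allports

After a test ban, check nft list ruleset: fail2ban’s chains and sets should appear in the same backend as the rest of your policy.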

Checklists / step-by-step plan

Checklist A: Standardize a Debian 13 host on nftables without surprises

  1. Inventory writers: Docker, Kubernetes, UFW, firewalld, fail2ban.
  2. Pick the owner: nftables native (recommended) or one manager.
  3. Align iptables/ip6tables alternatives to nft:
    • update-alternatives --set iptables /usr/sbin/iptables-nft
    • update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
  4. Enable nftables service and ensure the config file is the single source of truth.
  5. Apply policy in a maintenance window; keep an out-of-band console ready.
  6. Validate: inbound ports, outbound connectivity, container flows (if applicable), IPv6 stance.
  7. Reboot test. Always. If you don’t reboot test, you’re testing your luck.
  8. Monitor counters and log drops (sparingly) for a few days.

Checklist B: If you must keep iptables-style tooling

  1. Still standardize on iptables-nft unless you have a proven blocker.
  2. Don’t run nftables.service with a separate policy file that conflicts with iptables-managed rules.
  3. Make sure your inspection commands match your backend:
    • iptables-nft backend: inspect with both iptables -L -v and nft list ruleset.
    • legacy backend: inspect with iptables-legacy explicitly to avoid self-deception.
  4. Document which service loads rules on boot (iptables-persistent, scripts, systemd units).
  5. Prove compatibility with Docker/Kubernetes before rollout.

Checklist C: Incident response when firewall ownership is unclear

  1. Confirm the service is listening (ss -lntp).
  2. Confirm traffic arrives (tcpdump on the interface).
  3. Dump nft ruleset and iptables rules with counters.
  4. Check alternatives and look for legacy rules divergence.
  5. Check logs for services reloading rules after boot.
  6. Stop the bleeding: temporarily allow only what you need in the active backend (see the sketch after this checklist), then schedule a cleanup.
  7. After incident: remove duplicate managers and codify policy ownership.
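
For step 6, a minimal sketch of a temporary allow on the nft backend (adjust the family, table, and chain names to match your ruleset):

cr0x@server:~$ sudo nft add rule inet filter input tcp dport 22 accept

This rule lives only in the kernel: a reboot or a reload from /etc/nftables.conf erases it, which is exactly why step 7 exists.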

FAQ (real questions from real outages)

1) Is nftables “better” than iptables on Debian 13?

For most server use, yes: nftables is the modern interface, supports unified inet rules, and is the direction the ecosystem is moving.
But “better” doesn’t matter if your tooling expects iptables semantics; then the best choice is the one you can operate consistently.

2) Why does iptables -L show rules that I can’t find in my nft config file?

Because those rules may be generated by another service (Docker, UFW, fail2ban), or they may be the iptables-nft translated view of nftables state.
The config file is not necessarily the origin; it’s just one possible source.

3) Can iptables and nftables coexist safely?

They can coexist if iptables is the nft backend (iptables-nft) and you treat it as a compatibility interface, not a second author.
They cannot coexist safely if you mix iptables-legacy with nftables policy on the same host. That’s split-brain by design.

4) What’s the fastest way to tell if I’m in split-brain mode?

Compare nft list ruleset with iptables -L -v, then explicitly check iptables-legacy -L.
If legacy shows a different world than nft, you’ve found your ghost.

5) Why do my accept rules have zero counters even though clients are connecting?

Either you’re looking at the wrong place (wrong table/family/hook), or the traffic is being accepted earlier in another chain,
or it’s never reaching that hook (for example, traffic is local output, forwarded, or handled at ingress).
Counters are truth, but only for the rule you’re actually matching.

6) Should I use table inet or separate ip/ip6 tables in nftables?

Use table inet for most policy so you don’t accidentally diverge IPv4 and IPv6 behavior.
Separate tables are fine for advanced cases, but they increase the chance that IPv6 becomes “somebody else’s problem.”

7) What about UFW on Debian 13?

UFW can be operationally fine if it is the single authority and you accept its model.
The problem is layering it on top of an nftables policy or on top of container tooling that also writes rules.
Pick one: UFW-managed firewall, or nftables-managed firewall, not both.

8) How do I migrate from iptables rules to nftables?

Don’t do a 1:1 translation first. Start with a minimal nftables policy that reproduces your security intent,
then add complexity. Validate with counters and traffic tests. If you need to keep iptables tooling temporarily,
standardize on iptables-nft and treat nft output as the canonical state.

9) Why did a reboot “change” the firewall?

Because a different service loaded rules at boot, or a manager applied its own policy after your rules,
or interface names changed and your match criteria no longer matched. Reboot is the great config truth serum.

10) Is logging drops a good idea while debugging?

Yes, selectively. Log everything and you’ll DoS your own disk and your own brain. Add temporary log rules with rate limits,
capture enough to identify the path, then remove them. Keep counters long-term; they’re cheaper and often more useful.
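
A sketch of such a temporary, rate-limited log rule, assuming the final drop rule has handle 20 as in Task 9 (confirm the real handle first):

cr0x@server:~$ sudo nft -a list chain inet filter input   # find the drop rule's handle
cr0x@server:~$ sudo nft insert rule inet filter input handle 20 limit rate 10/minute counter log prefix \"input-drop: \"

insert places the new rule before the drop; delete it by its own handle when you’re done.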

Next steps you can do today

If you’re running Debian 13 in production and you want this problem to stop showing up at 3 a.m., do these three things:

  1. Decide ownership. nftables-native, or one manager. No shared custody.
  2. Prove backend consistency. Align iptables and ip6tables alternatives; confirm you’re not accidentally using legacy on one side.
  3. Operationalize inspection. Add a standard “firewall truth” command set to your runbooks:
    nft list ruleset, iptables -L -v, update-alternatives --display iptables, and targeted tcpdump.
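
A sketch of that runbook block as a script; the name firewall-truth.sh is hypothetical, and it assumes root:

#!/bin/sh
# firewall-truth.sh - print the effective firewall state from every angle
set -u
echo "== backend selection =="
update-alternatives --display iptables | head -n 3
update-alternatives --display ip6tables | head -n 3
echo "== nftables ground truth =="
nft list ruleset
echo "== iptables compat view =="
iptables -L -n -v
ip6tables -L -n -v
echo "== legacy split-brain check =="
iptables-legacy -L -n -v 2>/dev/null || echo "(no legacy backend output)"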

Then do the boring part: reboot a canary host and confirm the rules you think you have are the rules you actually get. That’s the whole game.
