Debian 13: nftables rules “don’t work” — load order and conflicts, fixed for good

You push an nftables change, you run nft list ruleset and see it, traffic behaves for five minutes… then reality returns. After a reboot it’s worse: your “allow SSH” is gone, Docker is doing interpretive dance with your NAT, and the business is asking why the “simple firewall tweak” turned into a stakeholder safari.

This is usually not “nftables being flaky.” It’s almost always load order, conflicting managers, or a ruleset that’s technically loaded but functionally bypassed. Let’s fix it in a way that stays fixed.

What “rules don’t work” actually means

When someone says “nftables doesn’t work,” it usually maps to one of these real failure modes:

  • Ruleset not loaded at all after boot or after a service restart. You’re looking at an empty ruleset or a default placeholder.
  • Ruleset loaded, but overwritten later by another actor (Docker, firewalld, a cloud agent, an iptables restore unit, your own CI job).
  • Ruleset loaded, but not matched because it’s in the wrong hook, wrong family (ip vs inet), wrong chain type, or wrong priority.
  • Rules match, but your expectation is wrong (conntrack state, NAT path, routing policy, bridge vs routed path, local vs forwarded traffic).
  • Kernel path bypasses you due to offloads, XDP programs, VRF/policy routing assumptions, or a different netns than you think you’re filtering.

The trick is to stop arguing with your mental model and interrogate the running system. Debian 13 is a clean platform for nftables—until you accidentally run two firewall orchestrators at once, or you rely on “it loaded once in a shell session so it must persist.”

Joke #1: Firewalls are like meetings—if you don’t control who’s invited, someone will rewrite the agenda right after you leave.

Facts and context worth knowing (so you stop guessing)

  1. nftables replaced iptables as the long-term netfilter frontend in Linux years ago; iptables can be a compatibility wrapper over nft (iptables-nft) or the legacy backend (iptables-legacy).
  2. Debian historically supported both backends because production fleets don’t migrate in a weekend. That means you can accidentally have a “working” iptables view that doesn’t reflect what nft sees (a quick self-check is sketched after this list).
  3. The inet family is a practical gift: one ruleset can cover IPv4 and IPv6. The catch is that some older examples still use ip and ip6 separately, and mixing them without intent can create gaps.
  4. Chain priorities matter because multiple base chains can attach to the same hook (input/forward/output/prerouting/postrouting) and run in priority order. “My rule exists” doesn’t imply “my rule runs first.”
  5. Docker historically programs firewall/NAT rules automatically and can do it via iptables interfaces. With iptables-nft, that still ends up in nftables—but not in the place you expect.
  6. firewalld is not “just a UI”; it’s a stateful manager that will re-assert its desired rules. If you hand-edit nft rules on a firewalld host, it will eventually disagree with you and win.
  7. nftables rulesets are atomic when loaded from a file: parse errors usually result in “nothing changed,” which is good—unless you’re watching the wrong file and think it loaded.
  8. Netfilter is per network namespace. Containers and some service managers can run in their own netns. You might be loading rules in the root namespace while the traffic you care about lives elsewhere.
  9. Debian’s systemd-based boot is parallel. Ordering is explicit, not magical. If your firewall depends on interfaces, modules, or sysctls, you must encode that ordering or you get “works on my reboot.”
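
A fast way to apply facts 1 and 2 on a specific host is to ask the binary itself. This is a sketch; the exact version string will differ on your machines:

cr0x@server:~$ iptables -V
iptables v1.8.11 (nf_tables)

If it says (nf_tables), iptables commands are writing into nftables; if it says (legacy), you have two separate rule worlds and should assume they disagree.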

Fast diagnosis playbook

This is the “stop-the-bleeding” sequence I use when someone pings “firewall change didn’t apply” and the clock is loud.

First: prove what rules are actually active

  • Run nft list ruleset and save it (a snapshot sketch follows this list). If it’s empty or missing your tables, you’re debugging the wrong thing.
  • Check whether you’re looking at the root namespace. If you have containers or network namespaces, verify where the traffic lives.
  • Confirm which framework is in charge: nftables service, firewalld, Docker, an iptables restore unit, or a config management agent.
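
A minimal snapshot habit for this step; the paths are just a convention:

cr0x@server:~$ sudo nft list ruleset > /tmp/ruleset.before.txt
cr0x@server:~$ sudo nft list tables
cr0x@server:~$ ip netns list

If the rules get overwritten later, diffing against this snapshot is your fastest evidence of what changed.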

Second: find who last wrote the rules

  • Use systemd journals: look for nftables reloads, firewalld starts, docker restarts, and iptables-restore services.
  • Check timestamps on your ruleset files. If they’re correct but the live rules differ, someone overwrote them after load.

Third: validate packet path and hook/priority

  • Is it input or forward? Local traffic vs routed traffic confuses everyone once.
  • Is NAT happening earlier than you assume? Are you matching on original or translated addresses?
  • Are there multiple base chains at the same hook? Inspect priorities and policies.

Fourth: decide on a single source of truth

  • Either: plain nftables with a single ruleset file and systemd unit.
  • Or: firewalld as the manager.
  • Or: a higher-level orchestrator (Kubernetes CNI, etc.).

Mixing them is how you get “rules don’t work,” except the rules are working—just not yours.

Load order: systemd, network, and why timing matters

On Debian 13, the most common “it worked manually but not on boot” is a simple race: the nftables service loads before the environment it assumes exists.

What does “environment” mean here?

  • Interfaces: if your rules refer to iifname "ens192" but the interface is renamed by udev later, you may have a mismatch at boot.
  • Kernel modules: some match/conntrack/NAT behavior depends on modules being available. Usually autoloaded, sometimes not, especially with unusual protocols.
  • Sysctls: forwarding, rp_filter, bridge-nf-call-iptables, etc. These can change what traffic even reaches netfilter hooks.
  • Network managers: ifupdown, systemd-networkd, NetworkManager—each has its own timing and hook points.

Systemd ordering is explicit. If you don’t specify it, you get “best effort.” In production, best effort is how you grow gray hair.
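
If your rules genuinely depend on the network being configured, encode that in systemd instead of hoping. A minimal sketch using a drop-in (systemctl edit writes it to /etc/systemd/system/nftables.service.d/ and reloads units for you):

cr0x@server:~$ sudo systemctl edit nftables

[Unit]
Wants=network-online.target
After=network-online.target

The trade-off is now explicit: the firewall loads later in boot, which is exactly the tension the guidance below wrestles with.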

Opinionated guidance

  • Keep nftables as early as possible for default-drop input policies, but don’t write interface-specific rules that depend on late renames.
  • If you must depend on interfaces, order nftables after the network is up (or at least after udev has settled) and accept that early-boot traffic is less controlled.
  • Prefer matching on addresses/subnets over interface names when possible, especially on hosts with predictable addressing (a set-based sketch follows this list).
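
A minimal sketch of the address-first style, assuming hypothetical management subnets; a named set keeps the SSH allow stable even if udev renames the NIC late in boot:

table inet filter {
	set mgmt_nets {
		type ipv4_addr
		flags interval
		elements = { 192.0.2.0/24, 203.0.113.0/26 }
	}
	chain input {
		type filter hook input priority 0; policy drop;
		ct state established,related accept
		iifname "lo" accept
		ip saddr @mgmt_nets tcp dport 22 accept
	}
}

Whatever the interface is called this boot, SSH from the management networks still matches.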

Conflicts: iptables, firewalld, Docker, and friends

Conflicts are not philosophical. They are literally multiple processes writing to the same netfilter ruleset, sometimes through different APIs, with different assumptions and reload behavior.

iptables vs nftables: the two-faced mirror

On Debian, iptables can operate in two modes:

  • iptables-nft: iptables commands program nftables rules behind the scenes.
  • iptables-legacy: iptables commands program the legacy xtables backend, separate from nftables.

If you don’t know which one is active, you can easily “fix” the firewall using the wrong tool and then wonder why nftables didn’t change. Or the reverse: nftables looks right, but iptables-legacy is the one actually filtering.

firewalld: the polite dictator

firewalld’s job is to maintain a desired state. If you edit nft rules directly on a firewalld-managed system, you’re writing graffiti on a whiteboard that gets erased on the next refresh.
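
Two quick checks before hand-editing anything on a suspected firewalld host; if firewall-cmd is not installed at all, that is also evidence:

cr0x@server:~$ firewall-cmd --state
running
cr0x@server:~$ systemctl is-active firewalld
active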

Docker: helpful until it’s not

Docker typically ensures container connectivity by injecting NAT and filter rules. Depending on settings and versions, it may:

  • create its own chains and jump to them
  • change forwarding defaults
  • reapply rules on daemon restart (which can happen during upgrades, log rotations, or resource pressure)

If you run strict nftables policies on a Docker host, you must plan for Docker’s rule programming. The correct approach is usually to integrate with it (allow its chains but constrain them), not to pretend it doesn’t exist.
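
Docker’s documented integration point is the DOCKER-USER chain: Docker jumps to it ahead of its own forwarding rules and leaves its contents alone. A hedged sketch, assuming iptables-nft and a hypothetical external interface ens192:

cr0x@server:~$ sudo iptables -I DOCKER-USER -i ens192 ! -s 203.0.113.0/24 -j DROP
cr0x@server:~$ sudo iptables -L DOCKER-USER -n

This constrains what can reach published containers without fighting Docker’s own rule programming.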

Joke #2: Docker and firewalls have a relationship status: “It’s complicated,” and your change window is the couples therapist.

Hooking, chain priority, and the illusion of “my rule is there”

nftables is not “one big list.” It’s tables, chains, hooks, and priorities. If you came from iptables muscle memory, the structure can feel like someone reorganized your toolbox into labeled drawers. That’s good—until you put the wrench in the “spoons” drawer and wonder why nothing tightens.

Three gotchas that look like broken rules

  • Wrong hook: you block in input but the traffic is forwarded through the host, so it hits forward instead.
  • Wrong family: you wrote table ip filter but the traffic is IPv6, so it’s untouched. table inet filter is often the pragmatic default.
  • Cross-chain verdicts: every base chain on a hook runs in priority order. An accept ends only the chain that issued it, while a drop anywhere is final, so your accept can be overruled by another chain’s drop, and an earlier-priority chain can eat packets before your counters ever increment.

Chain priority in practice

Priorities are integers. Lower runs earlier. NAT has conventional priorities (dstnat at -100 in prerouting, srcnat at 100 in postrouting; filter sits at 0), but you can still create competing base chains. Order matters most where chains interact: a NAT chain that runs before your filter changes the addresses your rules match on, and a vendor agent’s early base chain can drop packets before your careful policy ever counts them. Keep the verdict semantics straight: accept is local to the chain that issued it, drop is final for the whole hook.

When diagnosing, don’t just read your own config file. Inspect the live ruleset and identify all base chains attached to each hook. Then reason about order.
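
A worked illustration with hypothetical table names. Both base chains hook input; "early" runs first because -10 sorts before 0. An accept in early ends only that chain; a drop in either chain is final:

table inet early {
	chain input {
		type filter hook input priority -10; policy accept;
		tcp dport 8080 accept comment "does not skip the main table"
		tcp dport 23 drop comment "final verdict, main never sees it"
	}
}
table inet main {
	chain input {
		type filter hook input priority 0; policy drop;
		ct state established,related accept
		tcp dport 22 accept
	}
}

With this pair loaded, traffic to port 8080 is still dropped by main’s policy, which surprises people carrying iptables intuition where an earlier ACCEPT usually ended the discussion.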

One quote worth keeping on your monitor

Paraphrased idea from John Allspaw: reliability is a property of the system, not of individual components behaving as designed.

Practical tasks: commands, outputs, decisions (the production checklist)

You wanted commands. Here are commands. Each task includes what the output means and what decision you make next.

Task 1: Confirm nftables service status and last action

cr0x@server:~$ systemctl status nftables --no-pager
● nftables.service - nftables
     Loaded: loaded (/lib/systemd/system/nftables.service; enabled; preset: enabled)
     Active: active (exited) since Sun 2025-12-28 09:11:08 UTC; 2h 3min ago
       Docs: man:nft(8)
    Process: 412 ExecStart=/usr/sbin/nft -f /etc/nftables.conf (code=exited, status=0/SUCCESS)
   Main PID: 412 (code=exited, status=0/SUCCESS)

Meaning: The service ran and exited successfully (normal for oneshot). If it’s inactive/failed, your rules may never load.

Decision: If failed, inspect logs and run a manual load with check mode (Task 4). If active, move on: something else may be overwriting rules.

Task 2: Inspect journal for reloads and competing services

cr0x@server:~$ journalctl -u nftables -u firewalld -u docker --since "today" --no-pager
Dec 28 09:11:08 server systemd[1]: Starting nftables...
Dec 28 09:11:08 server nft[412]: /etc/nftables.conf:52:16-16: Error: Could not process rule: No such file or directory
Dec 28 09:11:08 server systemd[1]: nftables.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 09:11:08 server systemd[1]: Failed to start nftables.
Dec 28 09:19:43 server systemd[1]: Starting Docker Application Container Engine...
Dec 28 09:19:44 server dockerd[733]: time="2025-12-28T09:19:44.101" level=info msg="Loading containers: done."

Meaning: Here nftables actually failed while loading its config (an ENOENT-style error usually means a rule references something that doesn’t exist; a missing include file fails similarly). Docker started later and will program rules regardless.

Decision: Fix the nftables config parse error before chasing “conflicts.” If nft fails at boot, you’re fighting with an unloaded policy.

Task 3: Verify which iptables backend is selected

cr0x@server:~$ update-alternatives --display iptables
iptables - auto mode
  link best version is /usr/sbin/iptables-nft
  link currently points to /usr/sbin/iptables-nft
  link iptables is /usr/sbin/iptables
  slave iptables-restore is /usr/sbin/iptables-restore
  slave iptables-save is /usr/sbin/iptables-save
/usr/sbin/iptables-legacy - priority 10
/usr/sbin/iptables-nft - priority 20

Meaning: iptables commands will manipulate nftables. That’s good for a single unified ruleset, but it also means Docker/other iptables users are writing into nft.

Decision: If you see legacy selected on a host you manage with nftables, stop and align tooling. Mixed backends = debugging theatre.

Task 4: Validate your config without applying it

cr0x@server:~$ nft -c -f /etc/nftables.conf
/etc/nftables.conf:12:1-14: Warning: table ip filter is managed by iptables-nft, do not touch!

Meaning: Config parses, but you’re writing to a table that another tool (iptables-nft) also manages. Warning text varies, but the situation is real.

Decision: Don’t fight over the same table names. Use your own tables/chains and integrate with jumps if needed, or stop using iptables tooling on that host.

Task 5: Dump the live ruleset and look for your fingerprints

cr0x@server:~$ nft list ruleset
table inet filter {
	chain input {
		type filter hook input priority 0; policy drop;
		ct state established,related accept
		iifname "lo" accept
		tcp dport 22 accept
		counter drop
	}
}
table ip nat {
	chain PREROUTING {
		type nat hook prerouting priority -100; policy accept;
	}
	chain POSTROUTING {
		type nat hook postrouting priority 100; policy accept;
	}
}

Meaning: Your inet filter table exists, input policy is drop, SSH allowed. NAT table exists too.

Decision: If your rules are absent, you’re not loading the right file or you’re being overwritten. If they are present but traffic still flows incorrectly, move to hook/path validation and tracing.

Task 6: Identify all base chains and priorities at each hook

cr0x@server:~$ nft list chains
table inet filter {
	chain input { type filter hook input priority 0; policy drop; }
	chain forward { type filter hook forward priority 0; policy drop; }
}
table inet docker {
	chain input { type filter hook input priority -10; policy accept; }
}

Meaning: There are two base chains on hook input, evaluated in priority order (-10 before 0). An accept in Docker’s chain does not skip yours (a drop anywhere still wins), but two chains with conflicting policies are hard to reason about, and anything the earlier chain drops never reaches your counters.

Decision: Either remove the competing base chain, consolidate policy into one base chain per hook, or dispatch to regular chains via jumps within a single table (jumps cannot cross tables). Leaving two base chains with conflicting policies is asking for surprises.

Task 7: Confirm whether firewalld is managing nftables

cr0x@server:~$ systemctl is-enabled firewalld
enabled

Meaning: firewalld will likely assert its own state at boot and on reload, potentially overwriting parts of the ruleset.

Decision: Choose: firewalld-managed or nftables-managed. If firewalld stays, stop editing nft directly and use firewalld primitives. If nftables stays, disable firewalld.

Task 8: Check whether Docker is programming iptables rules

cr0x@server:~$ grep -nE 'iptables|ip6tables' /etc/docker/daemon.json
12:  "iptables": true,
13:  "ip6tables": true,

Meaning: Docker will program rules (via iptables interface), which lands in nft if iptables-nft is selected.

Decision: If you need tight control, consider setting "iptables": false only if you are prepared to manage container connectivity yourself. Most teams are not, and they discover that at 2 a.m.

Task 9: Confirm forwarding and bridge netfilter sysctls

cr0x@server:~$ sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding net.bridge.bridge-nf-call-iptables
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 0
net.bridge.bridge-nf-call-iptables = 0

Meaning: Forwarding is off; bridged traffic may bypass iptables-style hooks. If you expected the host to route or filter bridged container traffic, your expectation is wrong.

Decision: If this host is a router or does container forwarding, enable appropriate sysctls and encode them persistently. If it shouldn’t forward, keep them off and adjust your design.

Task 10: Confirm you’re filtering the correct namespace

cr0x@server:~$ ip netns list
cni-3f2a9c7b-1

Meaning: There is at least one non-root netns. Traffic may be handled there, with its own rules.

Decision: If the problematic traffic is inside that namespace, inspect it with ip netns exec and apply rules there (or adjust the host policy path).

Task 11: Trace packet evaluation to see which rule matches

cr0x@server:~$ nft monitor trace
trace id 9b7e7f05 inet filter input packet: iif "ens192" ip saddr 203.0.113.50 ip daddr 192.0.2.10 tcp sport 51544 tcp dport 22
trace id 9b7e7f05 rule inet filter input handle 7: tcp dport 22 accept

Meaning: The packet hit your input chain and matched the SSH accept rule (handle 7). Tracing removes guesswork.

Decision: If the packet matches an unexpected accept/drop rule, fix ordering and predicates. If it never appears, you’re tracing the wrong hook/path or traffic is bypassing the host.

Task 12: Check for silent overwrites by periodic jobs or config management

cr0x@server:~$ systemctl list-timers --all --no-pager | grep -E 'nft|iptables|firewall'
Sun 2025-12-28 12:00:00 UTC  46min left  Sun 2025-12-28 11:00:00 UTC  14min ago  firewall-sync.timer  firewall-sync.service

Meaning: A timer exists that likely re-applies firewall state. Your manual change will be reverted on schedule.

Decision: Either integrate your change into that managed pipeline or disable the competing job. “I’ll just edit it live” is not a strategy, it’s a prank on Future You.

Task 13: Confirm your include files exist and load order in config

cr0x@server:~$ sed -n '1,120p' /etc/nftables.conf
#!/usr/sbin/nft -f

flush ruleset

include "/etc/nftables.d/base.nft"
include "/etc/nftables.d/nat.nft"
include "/etc/nftables.d/docker-overrides.nft"

Meaning: You have a modular ruleset with a hard flush at the top. That’s sane as long as nothing else expects to keep its own rules.

Decision: If Docker/firewalld must coexist, don’t flush ruleset unconditionally. Instead, flush only your tables, or define a clear integration contract.
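
If coexistence is required, the common idiom is to flush only what you own: declare the table first so the flush cannot fail on a clean boot, then flush it, then define it. A sketch of the config shape:

#!/usr/sbin/nft -f

table inet filter
flush table inet filter

table inet filter {
	chain input {
		type filter hook input priority 0; policy drop;
		ct state established,related accept
		iifname "lo" accept
		tcp dport 22 accept
	}
}

A reload of this file leaves Docker’s and firewalld’s tables untouched.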

Task 14: Verify that a reload does not drop existing SSH sessions

cr0x@server:~$ sudo nft -f /etc/nftables.conf && echo "reload ok"
reload ok

Meaning: Ruleset loaded successfully. If your SSH stayed up, your established/related rule is doing its job.

Decision: If reload kills sessions, your stateful accept rule is missing or you flushed conntrack expectations. Fix before doing this remotely again.

Three corporate mini-stories (and what to steal from them)

Mini-story #1: The incident caused by a wrong assumption

One team inherited a Debian host that acted as a jump box and a lightweight router for a lab segment. They wanted to “tighten ingress” and wrote an nftables input chain with a default drop and a neat allowlist. Looked perfect in review. Tests from their workstation passed.

Two hours later, a different group reported that the lab segment couldn’t reach the artifact cache. The cache lived on the other side of the jump box. Nobody connected that to the firewall change because “we only touched input.”

The reality: almost all the relevant traffic was forwarded through the box, never hitting input. The default forward chain policy was still whatever Docker and historical iptables baggage left behind. In other words, they secured the wrong door.

The fix wasn’t heroic. They added a deliberate forward base chain in table inet filter with explicit allows between segments and a default drop, then validated with nft monitor trace from both sides. They also wrote down the packet path assumptions in the change request, which sounds boring until it prevents the next person from repeating the same mistake.

Mini-story #2: The optimization that backfired

A platform team decided to “standardize and speed up boot” by making nftables load extremely early, before network initialization, across a fleet. Their reasoning was reasonable: earlier firewall, smaller risk window. So they set aggressive ordering and removed network dependencies from the unit.

It worked in staging, mostly. In production, a subset of machines had predictable interface renames that completed later in boot. The rules referenced iifname matches for a management interface and a storage interface. Early load meant those names weren’t stable yet. Rules compiled, but they matched nothing. The hosts booted into a permissive reality they did not intend.

They caught it only because an engineer noticed that nft list ruleset showed the expected rules, yet traffic was clearly not filtered. Tracing revealed packets hitting the default accept in a different chain that was meant to be unreachable.

The backfire lesson: “load early” is not automatically “secure.” It’s secure only if what you match on is stable early. They ended up moving interface-dependent rules into sets keyed by subnets and using inet tables, and they ordered nftables after udev settle on that hardware class. Slightly later, actually correct.

Mini-story #3: The boring but correct practice that saved the day

At a different company, an SRE team had a dull habit: every firewall change included a before/after snapshot of nft list ruleset, plus a simple connectivity test script run from two vantage points. They stored these artifacts with the deployment record. Nobody bragged about it at parties.

During a routine OS upgrade on Debian hosts, a new package pulled in firewalld as a dependency for a “convenience” tool. It wasn’t malicious; it was just a default. firewalld enabled itself on boot for a small slice of the fleet due to a post-install behavior and an automation quirk.

Within an hour, someone noticed that the live ruleset didn’t match the expected snapshot after reboot. Not because traffic had already broken—because they had a habit of comparing state after maintenance. They disabled firewalld on those nodes, reasserted nftables, and added an explicit package pin plus a CI check that fails if firewalld is enabled on that role.

The saved-the-day lesson: reliable operations are mostly habits and guardrails. Fancy debugging is for when the guardrails fail.

Common mistakes: symptoms → root cause → fix

1) “Rules are present, but packets ignore them”

Symptom: nft list ruleset shows your drop rules, but connections still succeed.

Root cause: Your rule is in the wrong hook (input vs forward) or wrong family (ip vs inet), its predicates never match (NAT rewrote the address first, the interface has a different name, the traffic lives in another netns), or the path bypasses the hook entirely (bridged traffic with br_netfilter off).

Fix: Use nft -a list chains to enumerate base chains and priorities; use nft monitor trace to verify evaluation; adjust hook/priority and remove competing base chains.

2) “Works until reboot, then gone”

Symptom: After reboot, ruleset is empty or default.

Root cause: nftables service disabled, config file not referenced, parse error at boot, or another manager overwrote state after nftables loaded.

Fix: Ensure systemctl enable nftables; validate with nft -c -f; check journal for failures; audit firewalld/Docker/iptables restore units and timers.

3) “My include file loads manually but fails in service”

Symptom: Manual nft -f works; boot-time load fails.

Root cause: Relative include paths, missing file at boot, different working directory, or ordering issue where the file is generated later.

Fix: Use absolute paths in include; ensure the generating unit runs before nftables; add Requires= and After= dependencies if needed.

4) “Docker networking breaks when I enable default drop”

Symptom: Containers lose outbound access or published ports fail.

Root cause: Docker expects certain forward/NAT behavior; your rules drop forwarded traffic or block inter-bridge flows; Docker chains may run earlier than yours.

Fix: Explicitly allow established/related in forward; allow necessary bridge interfaces and Docker chains; avoid flushing the whole ruleset if Docker must manage its own pieces.
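
A hedged sketch of a strict forward chain that leaves room for Docker, assuming the default docker0 bridge; the published ports are hypothetical, and custom networks use other bridge names:

table inet filter {
	chain forward {
		type filter hook forward priority 0; policy drop;
		ct state established,related accept
		ct state invalid drop
		iifname "docker0" accept comment "container-originated traffic"
		oifname "docker0" tcp dport { 80, 443 } ct state new accept comment "hypothetical published ports"
	}
}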

5) “iptables shows one thing, nft shows another”

Symptom: iptables -S doesn’t match nft list ruleset.

Root cause: iptables-legacy backend is active, or you’re mixing tools across namespaces.

Fix: Align alternatives to iptables-nft if you want unified state; stop using legacy tools; verify namespace context.

6) “A vendor agent keeps reverting my rules”

Symptom: Your rules apply, then revert at odd intervals.

Root cause: Periodic enforcement (systemd timer, config management, cloud security agent).

Fix: Locate the timer/service; modify the source-of-truth config; add monitoring on ruleset drift (hash the output of nft list ruleset).

Checklists / step-by-step plan

Step-by-step: make nftables the single source of truth (recommended for servers)

  1. Pick the manager. If you want plain nftables, disable firewalld and stop using iptables scripts as the canonical config.
  2. Align iptables backend. Use iptables-nft so Docker and any iptables consumers land in the nft world, unless you have a strong reason otherwise.
  3. Design the ruleset structure. Prefer table inet filter for input/forward/output. Keep NAT in table ip nat (and ip6 nat only if you actually do IPv6 NAT).
  4. Decide whether you can safely flush. If nothing else should manage rules, flush ruleset is clean. If Docker must manage, flush only your tables.
  5. Encode ordering. If you depend on sysctls or generated files, systemd must know it.
  6. Verify with trace and counters. Add counters to key rules and watch increments during tests.
  7. Make it persistent. Enable the nftables unit and keep config in a managed file path.
  8. Add drift detection. Alert if the live ruleset hash changes outside change windows.

Checklist: safe remote change procedure

  • Confirm you have console access or out-of-band (BMC, hypervisor console).
  • Ensure ct state established,related accept exists before you default-drop input.
  • Use nft -c -f before nft -f.
  • Apply changes during a session with a second connection as a canary (a timed-rollback sketch follows this list).
  • Immediately run nft monitor trace while testing a representative flow.
  • Record nft list ruleset before/after for later blame-free debugging.
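
One more safety net worth thirty seconds: a timed rollback you cancel only after confirming access. A sketch using systemd-run; /root/rollback.nft is a hypothetical pre-saved known-good snapshot:

cr0x@server:~$ sudo sh -c 'nft list ruleset > /root/rollback.nft'
cr0x@server:~$ sudo systemd-run --on-active=5min --unit=nft-rollback /usr/sbin/nft -f /root/rollback.nft
cr0x@server:~$ sudo nft -f /etc/nftables.conf
cr0x@server:~$ sudo systemctl stop nft-rollback.timer

If the new ruleset locks you out, the transient timer restores the snapshot after five minutes; if you can still log in, stopping the timer cancels the rollback.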

Checklist: coexistence rules (when you can’t avoid Docker/firewalld)

  • If firewalld is enabled, treat it as the source of truth and configure it directly.
  • If Docker is enabled and you keep "iptables": true, do not flush the entire ruleset on reload.
  • Explicitly inspect base chains, priorities, and policies at each hook so you know the evaluation order and which chain drops first.
  • Prefer integrating with Docker chains via jumps rather than rewriting them.

FAQ

Why do my rules work when I run nft -f manually but not after reboot?

Because boot is a race. Your service may fail due to an include path, load before files exist, or get overwritten after load by Docker/firewalld/timers. Check systemctl status nftables and the journal first.

Should I use table inet or separate table ip/table ip6?

Use inet for filter rules unless you have a specific reason not to. It reduces drift between v4/v6 policy and avoids “IPv6 accidentally open” incidents.

Is it safe to put flush ruleset at the top of /etc/nftables.conf?

Safe only if nftables is the sole manager. If Docker or firewalld programs rules, flushing everything will remove their chains and you’ll spend the afternoon explaining why containers can’t reach DNS.

How do I tell if Docker is the thing changing my firewall?

Look for Docker-created tables/chains in nft list ruleset and correlate with journalctl -u docker. Also check /etc/docker/daemon.json for "iptables": true.

Can I run firewalld and a hand-written nftables ruleset together?

You can, but you shouldn’t. firewalld will assert state; your hand-written changes become non-deterministic. Pick one manager per host role.

Why does iptables -S not match nft list ruleset?

You may be using iptables-legacy. Check update-alternatives --display iptables. Also verify network namespaces—iptables in a different namespace will show different state.

My SSH got cut during a reload. What did I do wrong?

You likely default-dropped input without allowing ct state established,related early enough, or you replaced the ruleset without preserving stateful acceptance. Fix the order: established/related first, then loopback, then allowed services, then drop.

How do I prove which rule drops a packet?

Use nft monitor trace during a test flow. It shows the chain and rule handle that matched. Counters also help, but trace is the clearest.

What’s the cleanest way to prevent rule drift?

Make one system the source of truth (nftables file, firewalld, or config management), then add monitoring that hashes nft list ruleset and alerts on unexpected changes.
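
A minimal drift check, assuming a baseline captured at deploy time and something (cron, a systemd timer) to run it; the paths and alert mechanism are placeholders:

#!/bin/sh
# Compare the live nftables state against a stored known-good snapshot.
BASELINE=/var/lib/firewall/ruleset.baseline

LIVE=$(nft list ruleset | sha256sum | cut -d' ' -f1)
WANT=$(sha256sum "$BASELINE" | cut -d' ' -f1)

if [ "$LIVE" != "$WANT" ]; then
    # Wire this to real alerting; logger is the lowest common denominator.
    logger -t fw-drift "nftables ruleset differs from baseline"
    exit 1
fi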

Conclusion: next steps that prevent reoccurrence

If nftables “doesn’t work” on Debian 13, it’s almost never a kernel mystery. It’s control-plane ambiguity: load order, multiple managers, or chains that run in an order you didn’t intend.

Do these next:

  1. Run the fast diagnosis playbook and capture the live ruleset, chain priorities, and the journal timeline.
  2. Pick a single firewall manager for the host role. Disable the others or integrate explicitly.
  3. Remove accidental races: validate config, use absolute includes, and encode systemd ordering where dependencies exist.
  4. Use trace once per incident. It’s the fastest way to stop debating theories.
  5. Add drift detection so you find out about overwrites before users do.

The end-state you want is boring: deterministic boot behavior, one source of truth, and firewall changes that behave the same on Tuesday as they did during your test on Friday.
