You set up a clean nftables firewall on Debian 13. You test it. You feel good. Then you install Docker and—mysteriously—ports appear open, forwarding behaves differently, and packets start taking routes you didn’t authorize.
This is one of those operational annoyances that can become a security incident if you keep treating it like “just networking stuff.” Docker has opinions. nftables has opinions. Your compliance team has opinions. Only one of these pays your salary.
The mental model: who owns the firewall?
On Debian 13, nftables is the first-class firewall interface, but the kernel still exposes netfilter hooks that multiple tools can program. Docker, by default, programs those hooks too. Depending on packaging and configuration, it may do so through iptables-nft (iptables syntax, nft backend) while you’re writing native nft rules. Same hooks, different tooling, shared state. This is how you end up with a “working” firewall that behaves like an improv show.
The biggest mistake is thinking in terms of “my firewall rules file.” The system does not care about your file. The system cares about the active ruleset in the kernel. Docker modifies that active ruleset dynamically. If you want predictability, you must decide:
- Either let Docker manage NAT and basic forwarding, and you control it via the correct choke points (especially the DOCKER-USER chain),
- Or disable Docker’s firewall meddling and take full responsibility for NAT/forwarding/published ports yourself.
Half-and-half is where security teams go to cry.
One quote that’s been taped to more than one on-call laptop:
Hope is not a strategy.
— General Gordon R. Sullivan
Nine facts that will stop you from guessing
- nftables replaced iptables as the “successor” years ago, but iptables is still widely used as a compatibility interface, often mapping into nft behind the scenes.
- Docker historically used iptables directly to set up NAT and port publishing. That legacy still shows up today, even when nftables is your chosen interface.
- The iptables-nft backend means iptables commands can create nftables rules. That’s convenient, but it also means two different syntaxes are writing into one ruleset.
- Docker’s “published ports” are not just listening sockets; they are typically DNAT rules plus filter changes. A container can be reachable even if nothing is bound on the host in the way you expect.
- The DOCKER-USER chain exists specifically to let operators enforce policy before Docker’s own accept rules. If you’re not using it, you’re leaving money (and control) on the table.
- NAT and filtering are different tables. Blocking in filter without understanding nat can lead to “it’s blocked but somehow still works” or “it’s open but not reachable” confusion.
- Forwarding depends on net.ipv4.ip_forward and related sysctls; Docker may enable behaviors that your baseline hardening disabled, and vice versa.
- Firewalls are evaluated in hook order. If Docker inserts rules earlier than yours (or at higher priority), your carefully crafted drops may never run.
- Debian defaults can be deceptively quiet: you can have “nftables installed” but “nftables service inactive,” and still have active rules inserted by other components.
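The last fact is the one worth automating: check the live ruleset, not the service state. A minimal sketch that scans a saved ruleset dump for Docker-managed chains (the inline sample stands in for real output; in practice, feed it `nft list ruleset`):

```shell
#!/bin/sh
# Detect Docker-managed chains in an nftables ruleset dump.
# The sample input is embedded for illustration; replace it with
# the actual output of `nft list ruleset` on the host.
dump='
table inet filter {
  chain forward {
    type filter hook forward priority filter; policy drop;
    jump DOCKER-USER
  }
  chain DOCKER-USER {
    return
  }
}
'
# List chain definitions whose names follow Docker conventions.
found=$(printf '%s\n' "$dump" | grep -oE 'chain DOCKER[A-Z-]*' | sort -u)
printf '%s\n' "$found"
```

If this prints anything on a host where you believed "nftables is inactive, so the firewall is empty," you have just learned who else is programming your kernel.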
Joke #1: Firewalls are like office politics—everything is fine until someone quietly changes the chain of command.
What actually happens on Debian 13 when Docker starts
1) Docker creates bridges and namespaces
Docker typically creates a Linux bridge (commonly docker0) and one or more user-defined bridges. Containers sit in network namespaces with veth pairs plugged into those bridges. This part is straightforward.
2) Docker programs netfilter to make it “just work”
“Just work” means:
- Containers can reach the internet through masquerading (SNAT) on egress.
- Published ports on the host are forwarded (DNAT) to container IPs.
- Forwarding between host interfaces and the container bridge is accepted.
The exact implementation depends on whether Docker uses iptables compatibility and whether the system is running legacy iptables or nft-backed iptables. On Debian 13, you should assume nft backend unless you explicitly forced legacy.
3) You see nftables chains you didn’t write
Typical artifacts include chains like DOCKER, DOCKER-USER, and sometimes rules in nat for DNAT/MASQUERADE. The exact names can vary; the pattern doesn’t. If you’re surprised by these, you’re not alone. But “surprised” is not an acceptable steady state in production.
4) The “surprise” is usually not Docker being evil
It’s Docker optimizing for developer experience, while you’re optimizing for predictable security boundaries. Those goals are not enemies, but they require explicit design. You don’t win by hoping Docker stops.
Fast diagnosis playbook (first / second / third)
When a port looks open unexpectedly, or container traffic bypasses your intended policy, don’t start rewriting rules. Start with visibility. Then decide who should own what.
First: confirm what’s active (nft ruleset + iptables view)
- Dump the active nft ruleset and search for Docker chains and jump points.
- Check iptables rules as seen through the compatibility layer (it may reveal how Docker inserted rules).
Second: trace the packet path for the specific symptom
- Is it inbound to a published port? That’s DNAT + filter/forward.
- Is it container egress? That’s masquerade + forward + sysctls.
- Is it container-to-container? That’s bridge filtering and the FORWARD chain equivalents.
Third: decide the model and enforce it
- If Docker manages iptables/nft: lock policy in DOCKER-USER, keep your nft rules compatible, avoid conflicting hooks.
- If you manage everything: disable Docker’s iptables, implement your own nat/filter rules for bridges and published ports, and accept the operational overhead.
Most production teams should start with “Docker manages NAT but we own policy via DOCKER-USER” unless they have strict network segmentation requirements and the staff to maintain it.
Practical tasks (commands, outputs, decisions)
These are field tasks. Run them on the host. Read the output. Make a decision. Repeat until the system is boring.
Task 1 — Confirm Docker and nftables status
cr0x@server:~$ systemctl status docker --no-pager
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; preset: enabled)
Active: active (running) since Mon 2025-12-30 09:14:22 UTC; 2h 1min ago
Docs: https://docs.docker.com
Main PID: 1423 (dockerd)
Tasks: 22
Memory: 154.2M
CPU: 1min 12.553s
CGroup: /system.slice/docker.service
└─1423 /usr/bin/dockerd -H fd://
What it means: Docker is active and can be injecting rules right now.
Decision: Assume the firewall is not “static.” Proceed to inspect active rules, not config files.
cr0x@server:~$ systemctl status nftables --no-pager
○ nftables.service - nftables
Loaded: loaded (/lib/systemd/system/nftables.service; enabled; preset: enabled)
Active: inactive (dead)
What it means: nftables may not be loading your intended baseline at boot, but the kernel can still have rules inserted by other components.
Decision: If you expect a baseline firewall, fix service state later. First, inspect the live ruleset.
Task 2 — Dump the active nftables ruleset (the truth)
cr0x@server:~$ sudo nft list ruleset | sed -n '1,120p'
table inet filter {
chain input {
type filter hook input priority filter; policy drop;
ct state established,related accept
iif "lo" accept
tcp dport 22 accept
}
chain forward {
type filter hook forward priority filter; policy drop;
jump DOCKER-USER
jump DOCKER-FORWARD
}
chain output {
type filter hook output priority filter; policy accept;
}
chain DOCKER-USER {
return
}
chain DOCKER-FORWARD {
iif "docker0" accept
oif "docker0" accept
}
}
What it means: Docker-related chains exist inside your inet filter table, and forward jumps to them. Even if you didn’t create them.
Decision: If you want policy control while keeping Docker working, DOCKER-USER is your enforcement point.
Task 3 — Check nft nat table for DNAT/MASQUERADE
cr0x@server:~$ sudo nft list table ip nat
table ip nat {
chain PREROUTING {
type nat hook prerouting priority dstnat; policy accept;
iifname != "docker0" tcp dport 8080 dnat to 172.18.0.5:80
}
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
oifname != "docker0" ip saddr 172.18.0.0/16 masquerade
}
}
What it means: Port 8080 on the host is being DNAT’d to a container, and container egress is masqueraded.
Decision: If you see DNAT rules you didn’t authorize, find which containers published ports and decide whether publishing should be allowed at all.
Task 4 — Identify published ports from Docker’s perspective
cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Ports}}'
NAMES IMAGE PORTS
web-01 nginx:alpine 0.0.0.0:8080->80/tcp
redis-01 redis:7 6379/tcp
What it means: web-01 is published to all host interfaces on port 8080. Redis is not published (container-only).
Decision: If “0.0.0.0” is not acceptable, constrain to 127.0.0.1:8080:80 or a specific interface IP, or block in DOCKER-USER.
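For Compose-managed services, that constraint can live in the file itself, where review and CI can see it. A sketch (service and image names are illustrative):

```yaml
# Illustrative Compose fragment: publish on loopback only, not 0.0.0.0.
services:
  web:
    image: nginx:alpine
    ports:
      - "127.0.0.1:8080:80"   # host port 8080 reachable only from the host itself
```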
Task 5 — Confirm which iptables backend is active
cr0x@server:~$ sudo update-alternatives --display iptables | sed -n '1,40p'
iptables - auto mode
link best version is /usr/sbin/iptables-nft
link currently points to /usr/sbin/iptables-nft
link iptables is /usr/sbin/iptables
/usr/sbin/iptables-nft - priority 20
/usr/sbin/iptables-legacy - priority 10
What it means: iptables commands map into nft rules (nft backend). Docker’s iptables programming will land in nftables.
Decision: Do not mix “iptables-legacy” with native nft rules unless you enjoy debugging parallel universes.
Task 6 — View iptables rules as Docker sees them (nft-backed)
cr0x@server:~$ sudo iptables -S | sed -n '1,80p'
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-USER
-N DOCKER-ISOLATION-STAGE-1
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A DOCKER-USER -j RETURN
What it means: Docker inserted its standard plumbing. The presence of DOCKER-USER is your chance to enforce policy.
Decision: If you will keep Docker managing rules, put your drops/accepts in DOCKER-USER (or the equivalent nft chain if you’re fully native).
Task 7 — Prove the host is actually listening vs being DNAT’d
cr0x@server:~$ sudo ss -lntp | awk 'NR==1 || $4 ~ /:8080$/'
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 0.0.0.0:8080 0.0.0.0:* users:(("docker-proxy",pid=2331,fd=4))
What it means: A process (often docker-proxy, depending on settings) is bound to port 8080; in other setups, you might see no listener because pure DNAT is used.
Decision: If you expected “no host listener means closed,” correct that assumption. You must inspect nat/filter rules too.
Task 8 — Inspect sysctls that control forwarding and bridge filtering
cr0x@server:~$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
What it means: Forwarding is enabled and bridged traffic hits netfilter hooks. That’s typical for Docker, but not always desired in tightly controlled environments.
Decision: If container traffic should never route between networks, consider disabling forwarding globally (but understand that Docker networking will change) or isolate via rules.
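If the decision is to keep forwarding on, pin the values explicitly so your hardening baseline and Docker stop flipping them behind each other’s backs. A sketch using the conventional sysctl.d drop-in mechanism (the file name is arbitrary, and the values are a choice for your environment, not a recommendation):

```
# /etc/sysctl.d/90-container-host.conf (illustrative)
# Deliberate, documented values beat "whatever the last service set".
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```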
Task 9 — Find which interface actually receives the “surprise” traffic
cr0x@server:~$ ip -brief addr
lo UNKNOWN 127.0.0.1/8 ::1/128
ens3 UP 203.0.113.10/24 2001:db8:10::10/64
docker0 DOWN 172.17.0.1/16
br-2a1d3c4e5f6a UP 172.18.0.1/16
What it means: Your public interface is ens3; Docker networks exist on br-....
Decision: For inbound policy, always reason from the ingress interface (often ens3) through nat PREROUTING and then forward/filter.
Task 10 — Confirm which container owns the DNAT target
cr0x@server:~$ docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' web-01
/web-01 172.18.0.5
What it means: The DNAT target IP matches web-01.
Decision: If that container should not be internet-facing, fix the publish configuration and/or block in DOCKER-USER.
Task 11 — Add an explicit policy gate in DOCKER-USER (block inbound except approved)
cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -p tcp --dport 8080 -s 198.51.100.0/24 -j ACCEPT
cr0x@server:~$ sudo iptables -I DOCKER-USER 2 -i ens3 -p tcp --dport 8080 -j DROP
cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -i ens3 -p tcp -s 198.51.100.0/24 --dport 8080 -j ACCEPT
-A DOCKER-USER -i ens3 -p tcp --dport 8080 -j DROP
-A DOCKER-USER -j RETURN
What it means: Inbound to published port 8080 from the internet interface is now restricted to an allowed CIDR, otherwise dropped, before Docker’s own accept rules.
Decision: If this fixes your policy mismatch without breaking container networking, you’ve validated the “Docker manages plumbing, we enforce policy” model.
Task 12 — Make the rules persistent (and stop relying on hand edits)
cr0x@server:~$ sudo install -d -m 0755 /etc/iptables
cr0x@server:~$ sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
cr0x@server:~$ sudo head -n 20 /etc/iptables/rules.v4
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -i ens3 -p tcp -s 198.51.100.0/24 --dport 8080 -j ACCEPT
-A DOCKER-USER -i ens3 -p tcp --dport 8080 -j DROP
-A DOCKER-USER -j RETURN
COMMIT
What it means: You now have a durable record of the enforced policy rules. Whether you restore with a systemd unit or another mechanism is your choice; the point is that “it worked on Tuesday” is not a configuration strategy.
Decision: If your org standard is native nftables, translate these to nft syntax and load via nftables service. If your org standard is iptables-nft, keep it consistent and automate restore at boot.
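For the native-nftables path, here is a hedged translation of the two DOCKER-USER rules above, shown as a ruleset fragment for readability. On a live host you would add the rules with nft insert rule rather than loading a file that replaces Docker’s chain, and you should verify the table family (ip vs inet) with nft list ruleset first:

```
# Illustrative nft equivalent of the DOCKER-USER policy above.
# The table family must match what Docker actually created on your host.
table ip filter {
  chain DOCKER-USER {
    iifname "ens3" ip saddr 198.51.100.0/24 tcp dport 8080 accept
    iifname "ens3" tcp dport 8080 drop
    return
  }
}
```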
Task 13 — If you want Docker to stop injecting rules entirely (high-ceremony mode)
cr0x@server:~$ sudo install -d -m 0755 /etc/docker
cr0x@server:~$ sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
"iptables": false,
"ip-forward": false
}
EOF
cr0x@server:~$ sudo systemctl restart docker
cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES PORTS
web-01 0.0.0.0:8080->80/tcp
What it means: Docker still reports published ports, but it will no longer install the netfilter rules to make them reachable. Things will break until you supply your own NAT/forward/filter rules and sysctls.
Decision: Only choose this mode if you’re prepared to implement and own NAT and forwarding correctly for every Docker network you create.
Task 14 — Verify a specific packet path with counters (native nft visibility)
cr0x@server:~$ sudo nft -a list chain inet filter forward
table inet filter {
chain forward { # handle 8
type filter hook forward priority filter; policy drop;
jump DOCKER-USER # handle 21
jump DOCKER-FORWARD # handle 22
}
}
What it means: You can reference rule handles, add counters, and observe hits. This is how you stop arguing about “should” and start measuring “is.”
Decision: If you cannot explain which chain handles your traffic, do not change policy yet—instrument first.
Joke #2: If you ever feel useless, remember there’s a firewall change request marked “urgent” with no source IP and no port.
Two sane designs (and one that looks sane)
Design A (recommended for most teams): Docker manages plumbing, you enforce policy in DOCKER-USER
You let Docker create DNAT/MASQUERADE and keep containers reachable as designed. Then you add explicit allow/deny rules in DOCKER-USER based on:
- Ingress interface (public vs private)
- Destination port (published services)
- Source CIDR (admin networks, VPN ranges, partner ranges)
Why this works: Docker’s rule generation is complex and dynamic (containers start/stop, networks appear/disappear). Your policy is comparatively stable. Put the stable part where it will always be evaluated early.
What to avoid: sprinkling drop rules in random chains and hoping they beat Docker’s accept rules. That’s not engineering; that’s vibes.
Design B (for strict environments): disable Docker firewall programming and manage everything in nftables
If you have regulatory constraints or you’re building a platform where Docker is “just another workload,” you can disable Docker’s iptables behavior. Then:
- You enable forwarding selectively.
- You create nft nat rules for each Docker bridge subnet.
- You explicitly allow forwarding for published ports and desired egress.
This is viable, but it is work. You need tests, automation, and someone on-call who can reason about packet flow at 03:00.
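To make the work concrete: with Docker’s iptables management disabled, the plumbing for one pinned bridge subnet and one published port could look roughly like this (interface name, subnet, and container address mirror the examples earlier in this article and are assumptions for your host):

```
# Sketch of self-owned NAT for Design B; adjust names and addresses to your host.
table ip docker_nat {
  chain prerouting {
    type nat hook prerouting priority dstnat; policy accept;
    iifname "ens3" tcp dport 8080 dnat to 172.18.0.5:80   # one published port
  }
  chain postrouting {
    type nat hook postrouting priority srcnat; policy accept;
    ip saddr 172.18.0.0/16 oifname "ens3" masquerade      # container egress
  }
}
```

This covers NAT only; you still need matching forward-hook accepts for these flows. That extra surface area is exactly the overhead Design B signs you up for.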
Design C (looks sane but isn’t): mix native nftables rules with ad-hoc iptables fixes
The failure mode: you “fix” an issue with an iptables command in a hurry, then later “fix” another issue in nft syntax, then Docker updates and rewrites pieces, and now the ruleset is a layered cake of regret.
Pick one control plane for human-written policy: either native nft with a controlled Docker configuration, or iptables-nft with controlled chain usage. Consistency beats cleverness.
Three corporate mini-stories (anonymized, plausible, technically accurate)
Incident: the wrong assumption (“We blocked it in INPUT, so it can’t be reached”)
A mid-sized SaaS company moved a set of internal tools onto a Debian-based VM fleet. Security baseline: nftables with a default-drop INPUT policy, SSH only from corporate VPN, and everything else closed. The rollout looked clean.
Then a developer deployed a containerized admin UI and published it with -p 8443:443 so they could “just test it.” No one noticed because there was no host service listening on 8443 in the usual way, and the INPUT chain stayed pristine: default drop, no 8443 allow. Everyone relaxed.
A week later, an external scan flagged 8443 as open. The on-call engineer did the normal thing: checked ss -lntp, saw a docker-related listener, shrugged, and added a drop in INPUT. Scan still showed open. Now the mood shifted from “normal” to “we’re being haunted.”
Root cause: traffic was being DNAT’d and forwarded. The INPUT chain wasn’t the only gate; the FORWARD path plus Docker-installed rules were effectively accepting the forwarded traffic. The engineer’s drop rule was in the wrong hook for the packet path.
Fix: enforce policy in DOCKER-USER (drop inbound to published ports except from VPN ranges) and change the container publish to bind only to the VPN interface. Postmortem action item: stop treating INPUT as the only firewall boundary on container hosts.
Optimization that backfired: “Disable docker-proxy and rely on pure nft”
A platform team wanted performance wins and cleaner observability. They disabled docker-proxy (a common tweak) and standardized on nftables. They expected fewer processes, fewer moving parts, and simpler port exposure.
In staging, everything looked faster and “more kernel-native.” Then production hit a peculiar bug: certain published ports were reachable from some networks but not others, and health checks started flapping only for IPv6 clients. Engineers spent days arguing whether it was load balancer behavior, conntrack limits, or a broken kernel update.
The real issue was policy drift: the team’s nft rules assumed that listening sockets would represent exposure, but with proxy disabled the exposure was mostly DNAT behavior. Their monitoring checked ss for listeners and missed rules that opened paths. Their IPv6 policy was incomplete: NAT and filter behavior were inconsistent between ip and inet tables.
Fix: treat port publishing as a firewall feature, not a process feature. Monitor nftables ruleset deltas (or at least chain counts and selected rules), enforce policy via DOCKER-USER, and explicitly design IPv6 behavior (either support it end-to-end or disable it intentionally).
The optimization wasn’t “wrong.” It was un-owned complexity. That’s how “performance improvements” become “availability incidents.”
Boring but correct practice that saved the day: rule dumps in incident tickets
A large enterprise with a conservative change process had a simple rule: every firewall-related incident ticket must include attachments of nft list ruleset, iptables -S, and a docker ps port listing taken at the time of impact. Engineers grumbled because it felt bureaucratic.
Then a vendor appliance integration started failing intermittently. Traffic from a partner IP range would sometimes reach a containerized endpoint and sometimes hit a black hole. The application team blamed the partner. The partner blamed the enterprise. Everyone was about to schedule a meeting (the traditional way to solve packet loss).
The on-call followed the boring rule and attached the dumps. A reviewer noticed that the active ruleset changed after a container redeploy: a new user-defined bridge appeared, and Docker inserted corresponding masquerade rules. The enterprise’s custom nft drop rules were bound to interface names that changed with the bridge. The policy was correct in intent but brittle in implementation.
Because they had “before/after” dumps, they didn’t need guesses. They rewrote the policy to match on address ranges and use DOCKER-USER for ingress restriction rather than pinning to ephemeral bridge names. It was fixed the same day without dragging ten people into a calendar event.
Boring practice. Correct outcome. That’s the job.
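The before/after habit also automates well. A sketch of a drift check over two saved ruleset snapshots (the inline samples stand in for real nft list ruleset output; the bridge name mirrors the story and is illustrative):

```shell
#!/bin/sh
# Compare two ruleset snapshots and surface rules that appeared in between.
# Sample data is embedded; in practice, snapshot `nft list ruleset` on a schedule.
before='table ip nat {
  oifname != "docker0" ip saddr 172.17.0.0/16 masquerade
}'
after='table ip nat {
  oifname != "docker0" ip saddr 172.17.0.0/16 masquerade
  oifname != "br-2a1d3c4e5f6a" ip saddr 172.18.0.0/16 masquerade
}'
printf '%s\n' "$before" > /tmp/ruleset.before
printf '%s\n' "$after"  > /tmp/ruleset.after
# Lines starting with a single "+" are additions since the last snapshot.
added=$(diff -u /tmp/ruleset.before /tmp/ruleset.after | grep '^+[^+]')
printf '%s\n' "$added"
```

Feed the additions into whatever alerting you already have. The point is that a new masquerade rule for a bridge nobody announced should page a human, not wait for a partner escalation.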
Common mistakes: symptoms → root cause → fix
1) “Port is open even though INPUT is drop”
Symptoms: External scan shows a published port reachable; your nft input chain has no allow rule for it.
Root cause: Traffic is DNAT’d in PREROUTING and traverses FORWARD, not INPUT. Docker’s forward accept rules (or your permissive forward) allow it.
Fix: Enforce ingress restrictions in DOCKER-USER (preferred) or in the forward hook before Docker accepts. Verify nat PREROUTING rules for the port.
2) “I blocked a container, but it still reaches the internet”
Symptoms: You add drops in a host input chain; container egress still works.
Root cause: Container egress is forwarded traffic; INPUT is irrelevant. Also, egress is often masqueraded, hiding the container IP unless you match on interfaces/subnets.
Fix: Block in forward path based on source subnet (Docker bridge ranges) or container IP, or match on iifname for docker bridges. Prefer controlled networks per application.
3) “After enabling nftables service, Docker networking broke”
Symptoms: Containers can’t reach out; published ports stop responding after reboot.
Root cause: nftables service loads a baseline that flushes or overrides chains Docker expects, or sets forward policy drop without providing the Docker jump points.
Fix: Decide the ownership model. If Docker manages rules, do not flush tables Docker relies on; incorporate the needed jumps or keep policy in DOCKER-USER. If you own everything, disable Docker iptables and implement equivalent nat/forward rules.
4) “Rules look right in nft, but iptables shows something else”
Symptoms: nft dump doesn’t match what an iptables-based tool claims, or vice versa.
Root cause: Mixing legacy iptables with nft backend, or having both active in confusing ways, or assuming one tool shows everything.
Fix: Standardize on iptables-nft if you must use iptables syntax. Avoid legacy. Always verify with nft list ruleset because that’s the kernel-visible state.
5) “IPv6 behaves differently (or mysteriously bypasses restrictions)”
Symptoms: IPv4 restrictions work; IPv6 clients can still connect, or the opposite.
Root cause: Rules only applied to ip (v4) tables, not inet; or Docker/host has IPv6 enabled with different chains and policies.
Fix: Use table inet for filter rules when you want parity, and explicitly decide whether Docker IPv6 should be enabled. Test both stacks, don’t assume.
6) “A reboot changed everything”
Symptoms: Firewall behavior differs after restart; ports open/close unexpectedly.
Root cause: No deterministic boot ordering: nftables loads after Docker or vice versa, and one flushes/overrides the other; or rules were added manually and never persisted.
Fix: Make boot order explicit with systemd dependencies, persist rules properly, and include verification checks in configuration management.
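One way to make the ordering explicit is a systemd drop-in so Docker starts only after the baseline ruleset is loaded. A sketch (the drop-in path follows systemd convention; whether this ordering is correct depends on the ownership model you chose):

```
# /etc/systemd/system/docker.service.d/after-nftables.conf (illustrative drop-in)
[Unit]
After=nftables.service
Wants=nftables.service
```

Run systemctl daemon-reload after creating it, then verify the ordering survives a real reboot rather than trusting the unit file.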
Checklists / step-by-step plan
Checklist A — You want Docker to keep working, but you want predictable security
- Pick a single policy control plane: choose either native nftables with stable chains, or iptables-nft for policy insertion. Don’t improvise.
- Confirm backend: ensure iptables points to iptables-nft and not legacy.
- Inventory exposure: list published ports via docker ps and match them to nft nat rules.
- Enforce ingress policy in DOCKER-USER: allow only the sources you intend; drop the rest.
- Handle IPv6 deliberately: mirror policy using inet tables, or disable IPv6 for Docker if your environment can’t support it safely.
- Persist changes: store rule sources in configuration management and ensure they apply at boot.
- Verify with counters: add counters to key rules and validate hits during tests.
- Write an operator note: “Published ports are controlled by DOCKER-USER; do not open ports in INPUT expecting it to work.”
Checklist B — You want Docker to stop injecting rules (full ownership)
- Set Docker daemon.json: disable iptables and decide on ip-forward.
- Define address plan: pin Docker bridge subnets so rules don’t chase randomness.
- Write nft nat rules: MASQUERADE for egress, DNAT for each published port you allow.
- Write nft filter rules: forward rules for established flows, allow only necessary ingress to containers.
- Test restart behavior: restart Docker and reboot the host; ensure rules remain consistent.
- Update runbooks: “Publishing a port in Docker does nothing unless firewall rules are added.”
- Automate verification: CI or a boot-time check that asserts key chains exist and policies are correct.
Checklist C — Minimal policy that avoids surprises (a good starting baseline)
- Default drop on host INPUT.
- Default drop on FORWARD unless you explicitly allow Docker forwarding.
- Explicit ct state established,related accept early.
- Explicit policy in DOCKER-USER for inbound published ports: allow only from trusted CIDRs.
- Constrain published ports to specific interfaces when possible (bind to VPN IP, not 0.0.0.0).
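Checklist C as a starting ruleset might look like the sketch below. Note the deliberate absence of flush ruleset: flushing at load time would also wipe Docker’s chains, which is exactly the failure mode described in the common-mistakes section. Treat this as a baseline to adapt, not a drop-in file:

```
#!/usr/sbin/nft -f
# Baseline sketch for a container host. No "flush ruleset" on purpose:
# flushing would also remove chains installed by Docker.
table inet host_baseline {
  chain input {
    type filter hook input priority filter; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport 22 accept          # adjust to your admin access model
  }
  # Forward policy belongs to whichever ownership model you chose.
  # Add a forward chain here only if you also provide the jumps Docker needs.
}
```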
FAQ
1) Why does Docker touch my firewall at all?
Because containers need NAT and forwarding to be useful by default, and published ports require DNAT rules. Docker optimizes for “install and run,” not “your compliance model.” If you want different behavior, you must configure it.
2) I use nftables. Why do I see iptables chains like DOCKER-USER?
On Debian 13, iptables is typically using the nft backend (iptables-nft). Docker programs iptables semantics, which become nftables objects under the hood. Different syntax, same kernel ruleset.
3) Is DOCKER-USER the right place to put my security rules?
If you let Docker manage iptables/nft plumbing, yes. Docker jumps to DOCKER-USER early in forwarding, specifically so operators can enforce policy before Docker’s accept rules. Use it.
4) Why didn’t blocking in the nft input chain stop access to a published container port?
Because the packet might not traverse INPUT. With DNAT, the kernel can treat inbound traffic as forwarded traffic to another interface (the docker bridge), so FORWARD is the gate.
5) Should I disable Docker’s iptables management?
Only if you have a strong reason and the operational maturity to own NAT/forwarding end-to-end. Otherwise, keep Docker’s plumbing and enforce policy in the intended choke point.
6) How do I prevent developers from accidentally publishing services to the internet?
Combine controls: enforce a default drop in DOCKER-USER for ingress from the public interface, and require publishing only on loopback or a private/VPN interface. Add CI checks for Compose files and runtime monitoring of published ports.
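The CI check mentioned here can be a few lines of grep. A sketch that flags world-bound publishes in a Compose file (the sample file and the policy encoded in the pattern are assumptions to adapt to your repos):

```shell
#!/bin/sh
# Flag Compose port mappings that would bind to all host interfaces.
# Sample file is embedded; in CI, point the grep at the repo's compose files.
cat > /tmp/compose_sample.yml <<'EOF'
services:
  web:
    ports:
      - "0.0.0.0:8080:80"
  admin:
    ports:
      - "127.0.0.1:9000:9000"
EOF
# A bare "host:container" mapping binds to 0.0.0.0 by default,
# so match both the explicit and the implicit form.
bad=$(grep -cE '^[[:space:]]*-[[:space:]]*"(0\.0\.0\.0:)?[0-9]+:[0-9]+"' /tmp/compose_sample.yml)
[ "$bad" -eq 0 ] && echo "OK: no world-bound publishes" || echo "FAIL: $bad world-bound publish(es)"
```

It will not catch every Compose syntax variant (long-form port mappings, environment interpolation), but it turns the most common accident into a failed pipeline instead of a scan finding.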
7) What about rootless Docker?
Rootless changes the networking model and often reduces direct netfilter manipulation, but it introduces other components (slirp4netns, user namespaces) and different performance/feature tradeoffs. It can help with “Docker shouldn’t rewrite firewall rules,” but it’s not a universal drop-in replacement for production workloads.
8) Why do rules disappear after reboot?
Usually boot ordering and persistence. Docker may add rules at startup; nftables service may later flush or replace them; or you added rules manually and never saved them. Fix by making the ownership model explicit and ensuring the correct service loads the correct rules at the correct time.
9) Can I manage everything in a single nftables ruleset and still use Docker normally?
Yes, but you must align with how Docker expects chains and hooks to exist, or disable Docker rule management and reproduce required nat/forward behavior yourself. The “single ruleset” goal is fine; the “surprises stop automatically” belief is not.
Conclusion: next steps you can do today
If Docker surprised your nftables firewall on Debian 13, the fix isn’t heroics. It’s clarity. Decide who owns the netfilter programming, then enforce policy in the right hook with the right tooling.
- Dump the live ruleset (nft list ruleset) and identify Docker chains and jump points.
- Inventory published ports (docker ps) and match them to nat DNAT rules.
- Implement a policy gate in DOCKER-USER to constrain inbound access to published ports.
- Make it persistent and test reboots so “worked once” becomes “works always.”
- Document the packet path for your team: INPUT isn’t the only door.
Your firewall should be boring. Docker won’t make it boring for you. That’s your job.