Docker + UFW: Why your ports are open anyway — lock it down correctly

You enabled UFW. You set “deny incoming.” You even felt a small surge of righteousness.
Then a quick scan shows your container’s port is still reachable from the internet. Great.

This isn’t UFW being “bad” and it isn’t Docker being “insecure by default” in some cartoonish way.
It’s a predictable interaction between Docker’s iptables automation and how UFW stages rules.
If you run production systems, you need to understand the packet path, then enforce policy in the one place Docker can’t “helpfully” sidestep.

The mental model: where your packets actually go

When you publish a port with Docker (-p 8080:80), Docker doesn’t politely ask UFW for permission.
It programs the kernel’s packet filter (iptables/nftables) to do DNAT and accept traffic, because “make it work” beats “wait for humans” in the default design.

UFW, meanwhile, is a rule manager. It writes a curated set of chains and jump rules into iptables (or nftables on some systems),
and it does it in a specific order.
Order is everything. The first matching rule wins.

Here’s the core issue: Docker inserts rules into the nat and filter tables that can accept forwarded traffic to containers
before UFW’s “deny incoming” posture ever gets a say.
The traffic isn’t “incoming to the host” in the way you think; it’s forwarded through the host to a container.

Packet path, simplified but accurate

When a packet arrives on your public interface destined for a published port:

  • PREROUTING (nat): Docker DNATs the destination to the container’s IP/port.
  • FORWARD (filter): The kernel forwards the packet from the host interface into the docker bridge.
  • DOCKER / DOCKER-USER (filter): Docker’s chains decide what gets through.
  • Container: The service receives the packet.

UFW’s common “deny incoming” rules mostly live around the INPUT chain.
But forwarded packets hit FORWARD, not INPUT.
So you can lock down the front door while leaving the side gate wide open.

The fix isn’t mystical. You enforce your policy in the path Docker uses: the DOCKER-USER chain.
That chain exists specifically so you can apply your own rules before Docker’s acceptance logic.
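
A one-line sketch of the idea, assuming ens3 is your public interface and docker0 is the bridge (the tasks below build this out properly, with explicit allows in front of the drop):

cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -o docker0 -j DROP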

Why UFW “loses” to Docker (and why that’s not a bug)

UFW is opinionated: it assumes the host is the endpoint.
Docker is opinionated: it assumes the host is a router for container networks.
Put them together and you get a networking custody battle.

Docker’s port publishing is implemented with iptables rules that:

  • DNAT traffic in nat/PREROUTING and nat/OUTPUT for local connections.
  • Allow forwarding in filter/FORWARD toward docker0 (or a user-defined bridge).
  • Maintain its own chains (like DOCKER) and insert jumps early enough to matter.

UFW can control forwarding, but many installations leave forwarding permissive or don’t wire UFW’s forward rules to preempt Docker’s.
And if your mental model is “deny incoming means nothing reaches my box,” you’ll miss the distinction between INPUT and FORWARD.
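
UFW does have a forward policy of its own; it lives in /etc/default/ufw. Checking it is cheap, and it's worth knowing that even a DROP there doesn't override Docker's ACCEPT rules, because Docker's jumps sit earlier in the FORWARD chain. The value shown below is the usual Ubuntu default; verify on your own host:

cr0x@server:~$ grep DEFAULT_FORWARD_POLICY /etc/default/ufw
DEFAULT_FORWARD_POLICY="DROP"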

One idea has stuck with ops people for decades, a paraphrase of Gene Kranz (NASA Flight Director):
"Tough and competent" beats clever when things go wrong.
Firewalls are not the place for clever. They’re where you want boring and correct.

Joke #1: A firewall rule “temporarily” added during an incident has the same half-life as radioactive waste—someone else will inherit it.

Interesting facts and short history you can use at 3 a.m.

  1. iptables is ordered evaluation: rules are checked top-down; first match wins. “I added a deny” means nothing if it’s below an accept.
  2. Docker popularized host-as-router: early Docker defaulted to bridge networking and NAT, effectively turning every host into a little edge router.
  3. UFW is a front-end: it doesn’t “run alongside” iptables; it writes iptables rules and manages chains. If something else edits iptables, UFW isn’t psychic.
  4. The DOCKER-USER chain exists for a reason: it was added so admins could enforce policies ahead of Docker-managed rules without constantly fighting Docker.
  5. FORWARD is often overlooked: many teams harden INPUT but forget that forwarding policy matters once containers and bridges enter the picture.
  6. conntrack is stateful glue: “ESTABLISHED,RELATED” accepts can make a port appear “open” for existing flows even after you “closed” it.
  7. Publishing binds to 0.0.0.0 by default: unless you specify an IP, Docker will expose on all host interfaces. That includes public ones (quick illustration after this list).
  8. nftables migration is uneven: modern distros may default to nftables, but Docker and UFW interactions can still be mediated through iptables-compat layers.
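
To make fact 7 concrete, here are two hypothetical containers; only the first one lands on the public interface:

cr0x@server:~$ docker run -d --name demo-public -p 8080:80 nginx:alpine             # binds 0.0.0.0:8080
cr0x@server:~$ docker run -d --name demo-private -p 127.0.0.1:8081:80 nginx:alpine  # loopback only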

Fast diagnosis playbook

When someone says “UFW is on, but the port is still open,” don’t debate philosophy. Run the playbook.
The goal is to find where the accept happens: INPUT, FORWARD, or Docker DNAT.

First: confirm what’s actually exposed

  • Check Docker published ports and their bind addresses.
  • Confirm the listening sockets on the host.
  • Test from an external vantage point (not from the same host).

Second: map the packet path

  • Inspect iptables/nftables rules: especially nat PREROUTING and filter FORWARD.
  • Find Docker chains and their jump order.
  • Check the DOCKER-USER chain—does it exist and does it do anything?

Third: decide on the right containment strategy

  • If the service should be internal-only: bind published ports to 127.0.0.1 or a private interface.
  • If it should be public but restricted: enforce in DOCKER-USER by source IP, interface, or destination port.
  • If you need true perimeter policy: prefer a dedicated firewall in front (cloud SG/NACL, hardware, or a host firewall with strict DOCKER-USER policy).

Practical tasks: commands, outputs, and decisions (12+)

These are runnable on a typical Ubuntu host with Docker and UFW. Adjust interface names as needed.
Each task includes: command, what the output means, and what decision you make.

Task 1: Confirm UFW status and default policy

cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    203.0.113.0/24

Meaning: “disabled (routed)” is the red flag: forwarded traffic isn’t being governed by UFW.
Decision: If you rely on UFW to block Docker exposure, you must address routed/forwarded traffic (or enforce in DOCKER-USER).

Task 2: List Docker’s published ports with bind addresses

cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Ports}}'
NAMES          IMAGE           PORTS
web            nginx:alpine    0.0.0.0:8080->80/tcp
metrics        prom/prometheus 127.0.0.1:9090->9090/tcp

Meaning: web is exposed on all interfaces; metrics is localhost-only and won’t be reachable externally.
Decision: If a service shouldn’t be public, rebinding to 127.0.0.1 is the simplest win.

Task 3: Verify what’s listening on the host (sockets)

cr0x@server:~$ sudo ss -lntp | awk 'NR==1 || /:8080|:9090|:22/'
State  Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0      4096   0.0.0.0:8080      0.0.0.0:*       users:(("docker-proxy",pid=1542,fd=4))
LISTEN 0      4096   127.0.0.1:9090    0.0.0.0:*       users:(("docker-proxy",pid=1611,fd=4))
LISTEN 0      4096   0.0.0.0:22        0.0.0.0:*       users:(("sshd",pid=912,fd=3))

Meaning: Docker (or docker-proxy) is listening on 0.0.0.0:8080, which is genuinely exposed unless blocked in the packet filter.
Decision: Don’t assume “it’s in a container so it’s isolated.” Treat it like any other listener.

Task 4: Confirm the host’s public interfaces and addresses

cr0x@server:~$ ip -br addr
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens3             UP             198.51.100.10/24 fe80::5054:ff:fe12:3456/64
docker0          DOWN           172.17.0.1/16

Meaning: Anything bound to 0.0.0.0 is reachable via ens3 unless filtered.
Decision: Decide whether exposure is intended on that interface; if not, bind explicitly or block on that interface.

Task 5: Inspect Docker’s iptables NAT rules (where DNAT happens)

cr0x@server:~$ sudo iptables -t nat -S | sed -n '1,120p'
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 9090 -j DNAT --to-destination 172.17.0.3:9090

Meaning: Port 8080 traffic is being DNAT’d to a container. UFW INPUT rules don’t stop DNAT.
Decision: You must control forwarding acceptance (filter/FORWARD) or apply policy in DOCKER-USER.

Task 6: Inspect the filter table FORWARD chain (the usual “gotcha”)

cr0x@server:~$ sudo iptables -S FORWARD
-P FORWARD DROP
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT

Meaning: Even with FORWARD policy DROP, Docker installs accepts. Crucially, it jumps to DOCKER-USER first.
Decision: Put your deny/allow rules in DOCKER-USER to preempt Docker’s accepts.

Task 7: Check what’s in DOCKER-USER right now

cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -j RETURN

Meaning: No policy. Everything hits RETURN and then Docker’s permissive forwarding rules apply.
Decision: Add explicit policy here. Empty DOCKER-USER is “trust me bro” networking.

Task 8: Add a default deny for forwarded container traffic, then allow what you mean

cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -o docker0 -j DROP
cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -o docker0 -p tcp --dport 8080 -s 203.0.113.0/24 -j ACCEPT
cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -i ens3 -o docker0 -p tcp -m tcp --dport 8080 -s 203.0.113.0/24 -j ACCEPT
-A DOCKER-USER -i ens3 -o docker0 -j DROP
-A DOCKER-USER -j RETURN

Meaning: Public interface to docker0 is dropped except for TCP/8080 from a trusted subnet.
Decision: This is the “make exposure explicit” posture. Add allows per port/service, then keep the drop.

Task 9: Confirm counters are moving where you expect

cr0x@server:~$ sudo iptables -L DOCKER-USER -v -n
Chain DOCKER-USER (1 references)
 pkts bytes target  prot opt in   out     source           destination
   12   720 ACCEPT  tcp  --  ens3 docker0 203.0.113.0/24  0.0.0.0/0            tcp dpt:8080
  305 18300 DROP    all  --  ens3 docker0 0.0.0.0/0        0.0.0.0/0
    0     0 RETURN  all  --  *    *       0.0.0.0/0        0.0.0.0/0

Meaning: You’re actively dropping attempted access, and allowing the intended sources.
Decision: If counters on DROP spike unexpectedly, you probably exposed a port you didn’t realize was in use.

Task 10: Make the rules persistent (or they will vanish on reboot)

cr0x@server:~$ sudo apt-get update
cr0x@server:~$ sudo apt-get install -y iptables-persistent
cr0x@server:~$ sudo netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save

Meaning: Your current iptables rules are saved and will be restored on boot.
Decision: If you manage infrastructure with config management, codify these rules there instead of relying on pet-server state.

Task 11: Bind a published port to localhost (often the best fix)

cr0x@server:~$ docker run -d --name internal-admin -p 127.0.0.1:8081:80 nginx:alpine
cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Ports}}' | grep internal-admin
internal-admin  127.0.0.1:8081->80/tcp

Meaning: The service is only reachable from the host itself (or via SSH tunnel/reverse proxy).
Decision: Use this for dashboards, admin panels, and anything you access via bastion anyway.

Task 12: Verify UFW’s routed policy if you insist on UFW managing forwarding

cr0x@server:~$ sudo ufw status verbose | grep -i routed
Default: deny (incoming), allow (outgoing), disabled (routed)

Meaning: Routed traffic policy is disabled; UFW isn’t governing forward.
Decision: Either enable routed policy and integrate carefully, or stop pretending UFW alone controls Docker exposure and use DOCKER-USER.
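
If you choose the "integrate carefully" path, the UFW side looks roughly like this. Treat it as a sketch: ufw route rules land in UFW's forward chains, and whether they actually preempt Docker's accepts depends on chain ordering on your distro, so test the packet path afterwards.

cr0x@server:~$ sudo ufw default deny routed
cr0x@server:~$ sudo ufw route allow in on ens3 out on docker0 to any port 8080 proto tcp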

Task 13: Check UFW’s forwarding setting in sysctl (the kernel can veto you)

cr0x@server:~$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

Meaning: IP forwarding is enabled (common on Docker hosts).
Decision: If you disable forwarding to “fix” exposure, expect container networking to break. That’s a blunt instrument, not a plan.

Task 14: See the full rule ordering around Docker and UFW

cr0x@server:~$ sudo iptables -S | sed -n '1,120p'
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-USER
-N ufw-before-input
-N ufw-user-input
-A INPUT -j ufw-before-input
-A INPUT -j ufw-user-input
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER
-A DOCKER-USER -j RETURN

Meaning: Docker hooks FORWARD early. UFW mainly hooks INPUT. That's the whole story in a dozen lines.
Decision: Stop expecting INPUT policy to govern forwarded traffic.

Task 15: Quick external check from another machine (reality test)

cr0x@laptop:~$ nc -vz 198.51.100.10 8080
Connection to 198.51.100.10 8080 port [tcp/http-alt] succeeded!

Meaning: The port is reachable externally.
Decision: If that wasn’t intended, fix bind address or DOCKER-USER rules, then retest until this fails from untrusted sources.

Task 16: Confirm Docker’s “iptables management” setting (and why you usually shouldn’t disable it)

cr0x@server:~$ sudo cat /etc/docker/daemon.json
{
  "log-driver": "journald",
  "iptables": true
}

Meaning: Docker is managing iptables, which is the normal default.
Decision: Don’t flip this to false unless you’re ready to fully own NAT/forwarding rules yourself (and debug weird breakage later).

Lockdown patterns that actually hold up

Pattern A: Bind to localhost or a private interface whenever possible

If a service is only used by on-host processes, a reverse proxy, or an SSH tunnel, bind it to 127.0.0.1.
This is cleaner than firewall rules because the kernel never exposes the socket publicly.

In Docker Compose, that means publishing as 127.0.0.1:PORT:PORT.
It’s boring. It’s effective. You can still put Nginx/Traefik/Caddy in front to handle public traffic deliberately.
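
A minimal Compose sketch of that binding (service name and port are hypothetical):

cr0x@server:~$ cat docker-compose.yml
services:
  admin:
    image: nginx:alpine
    ports:
      - "127.0.0.1:8081:80"   # loopback only; reach it via SSH tunnel or a local reverse proxy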

Pattern B: Use DOCKER-USER as the policy gate for published ports

Think of DOCKER-USER as your “security team chain.”
Put default drops there for traffic entering from public interfaces to docker bridges,
then add explicit allows for what should be reachable.

The safe default on an internet-facing host is:

  • Allow established/related in DOCKER-USER (your drop rule is evaluated before Docker's own established/related accept, so don't count on Docker to handle it).
  • Allow explicitly required published ports from explicitly required sources.
  • Drop the rest from public interface(s) to docker bridge(s).
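
Expressed as rules, and assuming ens3/docker0 with a single intentionally public HTTPS port, that posture looks roughly like this. It's a sketch, not a drop-in policy:

cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT   # return traffic for flows containers started
cr0x@server:~$ sudo iptables -I DOCKER-USER 2 -i ens3 -o docker0 -p tcp --dport 443 -j ACCEPT                           # the one port you mean to expose
cr0x@server:~$ sudo iptables -I DOCKER-USER 3 -i ens3 -o docker0 -j DROP                                                # everything else from the public side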

Pattern C: Put a real edge in front of Docker hosts

Host firewalls are good, but they’re not a substitute for network-layer guardrails.
If you can use a cloud security group, a dedicated firewall appliance, or even a separate ingress node, do it.
Defense in depth is not a slogan; it’s what keeps a “one bad Compose file” from becoming your week.

Pattern D: Prefer reverse proxy ingress over publishing every service

If every container publishes its own port to the world, you’ve built a port zoo.
A reverse proxy centralizes TLS, auth, and exposure decisions. It also makes scanning output less exciting.
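
A sketch of that shape in Compose, with hypothetical names: only the proxy publishes ports, and the backend has no ports: section at all, so it's reachable only over the internal Compose network.

cr0x@server:~$ cat docker-compose.yml
services:
  proxy:
    image: nginx:alpine          # proxy config omitted; Traefik or Caddy work the same way
    ports:
      - "80:80"
      - "443:443"
  app:
    image: example/app:latest    # hypothetical image; note the absence of a ports: block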

Joke #2: If you publish -p 0.0.0.0:2375:2375 for the Docker API, congratulations—you’ve invented remote root.

Three corporate mini-stories from the trenches

1) The incident caused by a wrong assumption

A mid-sized SaaS company migrated a legacy app into containers on a handful of VPS hosts.
The migration plan was reasonable: keep the footprint small, keep the costs small, and rely on UFW because “we already use it everywhere.”
The security review was a checklist exercise. UFW: enabled. Default deny: enabled. Ship it.

A few weeks later, a new internal tool shipped as a container with -p 8080:8080.
It was meant for internal admin use, accessed via VPN. The engineer assumed UFW would block non-VPN traffic because “deny incoming.”
It didn’t. The port was reachable from the public IP. Not loudly, not dramatically—just reachable.

The first signal wasn’t an alert; it was a surprise bill for outbound traffic and a complaint that the tool was “slow.”
Someone had found the endpoint and was brute-forcing credentials. The tool’s auth was decent, but not designed for the open internet.
Logs showed lots of failed attempts and a few successful ones from commodity IP space.

The postmortem wasn’t about blaming Docker or UFW.
It was about the mental model gap: the team treated containers like processes “inside the host” rather than endpoints reached via forwarding and DNAT.
Once they enforced a default drop in DOCKER-USER and rebound internal ports to localhost, the class of failure vanished.

The uncomfortable takeaway: “Firewall is on” is not a security control. A tested policy is a security control.

2) The optimization that backfired

An enterprise team ran dozens of containerized services per node and wanted “faster networking.”
Someone decided Docker’s iptables programming was “overhead” and turned off Docker’s iptables integration in the daemon config.
The idea was to manage all firewalling in UFW and keep the system “clean.”

The first few days looked fine, because most east-west traffic was on the same host and cached conntrack state masked some issues.
Then a routine host reboot happened—patching window, no drama.
Suddenly, a subset of services became unreachable from other hosts, while some published ports behaved inconsistently.

The team spent hours chasing ghosts: was it DNS? was it overlay networking? did the bridge change names?
The actual root cause was simpler: with Docker not managing iptables, the NAT and forward rules weren’t being set up reliably after restarts,
and the bespoke UFW rules didn’t recreate Docker’s required chain plumbing.

They reverted the change, then implemented the correct control point: DOCKER-USER for policy,
Docker-managed iptables for mechanics. That split of responsibilities is the sane compromise:
let Docker do the plumbing; you decide what’s allowed through the pipes.

3) The boring but correct practice that saved the day

A financial-services platform ran container workloads on hardened Ubuntu images.
Nothing fancy. Their secret weapon was a dull operational discipline: every host had a standard “exposure audit” job.
It ran nightly, dumped docker ps port mappings, ss -lntp listeners, and a filtered view of iptables -S into a central log index.

One afternoon, a developer merged a Compose change that accidentally published a debug endpoint on 0.0.0.0:6060.
The service wasn’t “vulnerable” in the CVE sense; it was just not meant to be reachable from the world.
Within an hour, the audit diff triggered an alert: new public listener, not in the allowed list.

The on-call didn’t need to argue with anyone. They had proof: a new listener and a new DNAT rule.
They reverted the Compose change, redeployed, and the alert cleared.
No customer impact. No panic patching. No “how did this happen?” meeting that produces nothing but calendar invites.

The lesson isn’t “audits are cool.” The lesson is that boring, repeated verification beats one-time confidence.
This is what “reliability work” looks like when it’s actually working.

Common mistakes: symptoms → root cause → fix

1) “UFW deny incoming, but my container port is still reachable”

Symptoms: External scan reaches HOST:published_port even though UFW blocks it on paper.

Root cause: Traffic is forwarded (FORWARD chain) after DNAT, not delivered to INPUT. Docker rules accept it.

Fix: Add policy in DOCKER-USER (drop by default from public IF to docker bridge; allow explicitly). Or bind the port to localhost/private IP.

2) “I added UFW rules to block the port, nothing changed”

Symptoms: ufw deny 8080/tcp has no effect on published ports.

Root cause: The block applies to INPUT; the traffic is being DNAT’d and forwarded.

Fix: Block in DOCKER-USER or disable publication on 0.0.0.0.

3) “Everything broke after enabling strict firewalling”

Symptoms: Containers can’t reach the internet; inter-container networking fails; DNS inside containers is flaky.

Root cause: Overly broad DROP rules in FORWARD/DOCKER-USER without allowing established connections or required egress paths.

Fix: Keep Docker’s forwarding mechanics, but apply targeted policy: allow established/related, allow required bridge traffic, then drop public ingress to docker bridges.

4) “My rules worked until reboot”

Symptoms: After restart, ports are open again.

Root cause: Ad-hoc iptables rules weren’t persisted; Docker rebuilt its rules; yours vanished.

Fix: Persist rules via iptables-persistent/netfilter-persistent or codify via configuration management. Validate on reboot with a test.

5) “I blocked container exposure, but localhost access also broke”

Symptoms: Reverse proxy on the host can’t reach backend containers; health checks fail.

Root cause: DOCKER-USER drop rule matches too broadly (drops host-originated or internal interface traffic).

Fix: Scope rules by interface (-i ens3 -o docker0) and/or source ranges; keep host-to-bridge traffic allowed.

6) “UFW and Docker fight; rules look duplicated and weird”

Symptoms: Lots of chains; confusion; inconsistent behavior across hosts.

Root cause: Mixed nftables/iptables backends, distro differences, or multiple tools managing firewall state.

Fix: Pick one control plane. Verify whether you’re using iptables-nft compatibility. Standardize images. Test the actual packet path, not the intent.

Checklists / step-by-step plan

Plan 1: Lock down an existing Docker host safely (production-friendly)

  1. Inventory exposure: list published ports and listeners.

    • Use docker ps and ss -lntp.
    • Decision: which ports should be public, private, or localhost-only?
  2. Identify public interfaces: don’t guess.

    • Use ip -br addr.
    • Decision: which interface(s) should be able to reach containers?
  3. Confirm Docker chain order: ensure DOCKER-USER is referenced early.

    • Use iptables -S FORWARD.
    • Decision: if DOCKER-USER isn’t present, you’re in a nonstandard setup—fix that before proceeding.
  4. Implement a narrow drop rule for public ingress to docker bridges.

    • Start with interface-scoped drop: -i public -o docker0.
    • Decision: add allows above it for required ports/sources.
  5. Test from outside and watch counters.

    • Use nc -vz from another machine; check iptables counters.
    • Decision: keep iterating until only intended access succeeds.
  6. Persist and automate.

    • Use iptables-persistent or config management.
    • Decision: add a CI/CD gate or nightly audit to catch new exposures.
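
Plan 1, condensed into one hypothetical script. It assumes ens3 is public, docker0 is the only bridge, TCP/443 is the only intended public container port, and iptables-persistent from Task 10 is installed; review and adapt the allows to your own inventory before running anything like it.

cr0x@server:~$ cat /usr/local/sbin/docker-ingress-lockdown.sh
#!/bin/sh
# Hypothetical helper tying Plan 1 together; adjust PUB_IF, BRIDGE, and the allow rules.
set -eu
PUB_IF=ens3
BRIDGE=docker0
# Rebuild DOCKER-USER from a known state (Docker itself only adds the trailing RETURN).
iptables -F DOCKER-USER
iptables -A DOCKER-USER -i "$PUB_IF" -o "$BRIDGE" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A DOCKER-USER -i "$PUB_IF" -o "$BRIDGE" -p tcp --dport 443 -j ACCEPT
iptables -A DOCKER-USER -i "$PUB_IF" -o "$BRIDGE" -j DROP
iptables -A DOCKER-USER -j RETURN
netfilter-persistent save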

Plan 2: Build new hosts with “no accidental exposure” as a default

  1. Decide an ingress strategy: one reverse proxy or a small set of public ports.
  2. Require explicit bind addresses in Compose for anything not meant to be public (127.0.0.1:...).
  3. Ship a default DOCKER-USER policy: drop public IF to docker bridge; allow only the proxy ports.
  4. Add an exposure audit job (listeners + published ports + iptables diff); a minimal sketch follows this list.
  5. Test reboot behavior: firewall persistence and Docker restart ordering.
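
A minimal sketch of the audit job from step 4, with hypothetical paths. The baseline file is whatever your config management says the host should expose; any drift gets logged for a human to look at.

cr0x@server:~$ cat /etc/cron.daily/exposure-audit
#!/bin/sh
# Hypothetical nightly exposure audit: snapshot published ports, listeners, and DOCKER-USER,
# then diff against an approved baseline maintained outside this script.
set -eu
DIR=/var/lib/exposure-audit
mkdir -p "$DIR"
{
  docker ps --format '{{.Names}} {{.Ports}}' | sort
  ss -Hlnt | awk '{print $4}' | sort
  iptables -S DOCKER-USER
} > "$DIR/current.txt" 2>&1
diff -u "$DIR/baseline.txt" "$DIR/current.txt" \
  || logger -t exposure-audit "exposure drift detected on $(hostname)"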

FAQ

1) Why doesn’t “ufw deny 8080/tcp” block Docker’s published port?

Because the traffic is DNAT’d and forwarded to a container. It’s not handled by INPUT the way a host process would be.
You need policy in FORWARD/DOCKER-USER or remove the public bind.

2) Is Docker “bypassing” UFW on purpose?

Docker programs iptables to implement NAT and forwarding automatically. That can sidestep the policy you assumed UFW was enforcing, but it's not a stealth feature.
The intended admin hook is DOCKER-USER.

3) Should I disable Docker’s iptables management?

Usually no. If you disable it, you become responsible for NAT, forwarding, isolation, and edge cases across restarts.
Keep Docker’s plumbing; enforce your policy in DOCKER-USER and by binding ports carefully.

4) What’s the safest default DOCKER-USER rule set?

For internet-facing hosts: allow explicit published ports from explicit sources, then drop traffic from the public interface(s) to docker bridges.
Keep host-local and internal traffic unblocked unless you have a reason.

5) Can I fix this purely with UFW?

You can, but it’s fragile unless you deeply understand how UFW wires into FORWARD and how Docker inserts its chains.
The operationally safer path is using DOCKER-USER for container ingress policy, plus UFW for host INPUT.

6) Why does binding to 127.0.0.1 work so well?

Because it removes the exposure at the socket layer. No packet filter heroics required.
It’s the difference between “blocked” and “not reachable.”

7) What about IPv6?

If IPv6 is enabled, you must apply equivalent policy for ip6tables/nft.
Otherwise you “lock down” IPv4 and accidentally leave IPv6 wide open. Audit both stacks.
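
Whether Docker even programs ip6tables depends on the daemon's IPv6 settings, so check rather than assume. A quick look, hedged accordingly; compare the output with the IPv4 rules from the tasks above:

cr0x@server:~$ sudo ip6tables -S FORWARD | head -n 3
cr0x@server:~$ sudo ip6tables -S DOCKER-USER 2>/dev/null || echo "no DOCKER-USER chain in ip6tables on this host"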

8) Why do I see docker-proxy listening sometimes, and sometimes not?

Docker’s userland proxy behavior has changed over time and can be toggled. Even without docker-proxy,
iptables DNAT can still publish ports. Always verify with both socket listing and iptables rules.
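
The userland proxy is controlled by the "userland-proxy" key in /etc/docker/daemon.json; if the key is absent, the daemon's default applies. A quick check (the echo text is just a reminder, not daemon output):

cr0x@server:~$ grep -i '"userland-proxy"' /etc/docker/daemon.json || echo "userland-proxy not set; daemon default applies"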

9) If I use a reverse proxy container, do I still need DOCKER-USER rules?

Yes, if you want guardrails. The proxy reduces surface area, but one accidental -p on a backend can still expose it.
DOCKER-USER makes that mistake non-fatal.

Next steps you should do today

Stop treating “UFW enabled” as a security outcome. On a Docker host, it’s a starting condition.
Your real job is to make exposure deliberate: bind what can be bound to localhost, and gate the rest in DOCKER-USER.

  1. Run the inventory: docker ps, ss -lntp, external check.
  2. Inspect rule order: confirm DOCKER-USER is jumped to early from FORWARD.
  3. Implement a default drop from public interfaces to docker bridges in DOCKER-USER, then allow only what you mean.
  4. Persist the rules and test reboot behavior.
  5. Add a boring nightly exposure audit. The boring stuff is what keeps you employed.