Ubuntu 24.04: Docker + UFW = Surprise Open Ports — Close the Hole Without Breaking Containers

You lock down a host with UFW, open only SSH, deploy a few containers, and go home. Then a scanner (or worse, a customer) finds your service listening on the public internet anyway. Nothing makes you trust a firewall less than a firewall that’s technically doing what it was told—just not what you meant.

On Ubuntu 24.04, Docker can still punch holes around UFW in ways that surprise even competent operators. This isn’t a “Docker is insecure” rant. It’s about understanding packet flow, the iptables/nftables interface layer, and the specific hooks Docker uses—then placing the right controls so containers keep working and ports stop wandering onto the internet.

What’s actually happening: why UFW “works” and ports still open

UFW is not a firewall. UFW is a friendly librarian that files firewall rules into the right drawers. The actual bouncer at the door is netfilter (iptables/nftables). Docker, meanwhile, is the VIP manager who walks up to the bouncer and adds a few “let these people in” notes—often in a drawer UFW isn’t watching.

When you publish a port with Docker (-p 0.0.0.0:8080:80 or a Compose ports: stanza), Docker installs NAT and filter rules so traffic hitting the host’s public interface on that port gets DNAT’d to the container’s IP. Those rules are inserted into chains like PREROUTING (nat table) and FORWARD (filter table), and Docker also manages its own chains such as DOCKER, DOCKER-ISOLATION-STAGE-*, and crucially DOCKER-USER.
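
For concreteness, a publish like the one below is what creates those rules (the image name and ports are illustrative, not from a real deployment); note that a bare -p 8080:80 is shorthand for 0.0.0.0:8080:80:

cr0x@server:~$ # illustrative: publishes container port 80 on every host interface
cr0x@server:~$ docker run -d --name webapp -p 0.0.0.0:8080:80 ghcr.io/acme/web:1.7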

Where does UFW fit? UFW generally manages rules in chains like ufw-before-input, ufw-user-input, ufw-before-forward, etc. It can block host-local traffic to services bound to the host, but published container ports often traverse the FORWARD path after DNAT, and Docker has already allowed them. So UFW can say “deny 8080/tcp” all day and still watch packets get forwarded to a container because that denial is applied in a different chain/order than you assumed.

Ubuntu 24.04 adds another layer of operator confusion: modern distributions increasingly use nftables as the underlying engine, but still expose an iptables-compatible interface. Docker is typically still programming iptables rules (via iptables-nft on Ubuntu), which appear in the nft ruleset. UFW also programs rules, and the interaction is “who gets evaluated first” rather than “who is more correct.” Firewalls are deterministic; operator assumptions are not.
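
If you want to confirm which backend your tooling is driving, the version string gives it away; the exact version on your host will differ, but the (nf_tables) marker is what matters:

cr0x@server:~$ iptables --version
iptables v1.8.10 (nf_tables)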

If you remember one rule: when Docker publishes a port, treat it like opening a port in your firewall, because that’s effectively what it is—just not in the same place you were looking.

Short joke #1: Firewalls are like office doors—anyone can get in if the person with admin access keeps propping them open “for convenience.”

Interesting facts and a little history (so the behavior makes sense)

  1. Docker chose iptables early because it was universal. In the mid-2010s, Linux networking was fragmented; iptables was the least-worst common denominator for NAT and forwarding.
  2. UFW is primarily an iptables rule generator. It’s a policy tool, not a runtime packet filter. It writes rules; it does not “own” netfilter.
  3. Published ports use DNAT, not just listening sockets. That’s why ss -lntp can show Docker-proxy or a bound port, but the real magic is NAT + forwarding.
  4. Docker historically used a userland proxy for port publishing. Newer Docker versions prefer kernel NAT when possible, but behavior differs by version and settings; this changes what you see in ss.
  5. The DOCKER-USER chain exists specifically so you can override Docker. Docker added it after years of operators asking for a stable hook point that Docker wouldn’t rewrite.
  6. UFW’s “deny” doesn’t automatically apply to forwarded traffic. UFW defaults are often tuned around INPUT (to the host), not FORWARD (through the host to containers).
  7. Ubuntu’s iptables is often the nft backend. Many operators still think in iptables terms; under the hood, nft is executing the rules (and priorities matter).
  8. Cloud security groups can hide the problem. In tightly controlled VPCs, the host firewall might be redundant; move the same host on-prem and the surprise becomes a headline.
  9. Rootless Docker changes the picture. With rootless networking, the port exposure and filtering path can differ significantly; you can’t blindly apply the same iptables recipe.

None of this is obscure trivia. It’s why the “but UFW is enabled” argument collapses during an incident review.

One idea worth paraphrasing, from Dr. Richard Cook (resilience engineering): failures happen when normal work and complexity collide, not because people are careless.

Fast diagnosis playbook (check first/second/third)

When someone says “UFW is on but the port is open,” don’t debate and don’t guess. Run a short, repeatable playbook (condensed into a single command sketch after the four steps below) and decide based on evidence.

First: confirm what is actually exposed

  • From another machine: scan the host’s public IP for the reported port(s).
  • On the host: check listening sockets and Docker published ports.
  • Decision: is it a host process, a container publish, or a load balancer/NAT upstream?

Second: map the exposure path

  • Find the container and published port mapping.
  • Check whether the traffic is INPUT (host) or FORWARD (to container via DNAT).
  • Decision: do you need to block at DOCKER-USER, adjust UFW forwarding policy, or change Docker publish bindings?

Third: inspect rule order, not just rule presence

  • List iptables/nft rules with line numbers/counters.
  • Look for Docker’s ACCEPT rules preceding UFW’s drops in the relevant chain.
  • Decision: place enforcement in DOCKER-USER (preferred), or restructure UFW’s forward handling explicitly.

Fourth: apply a minimal fix, then re-test from outside

  • Start with “default deny for published ports” in DOCKER-USER, then allow only what you need.
  • Re-run the external scan and confirm container health checks still pass.
  • Decision: if production traffic breaks, roll back and switch to “bind published ports to trusted IPs” as a safer intermediate.
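
If you want the whole playbook on one screen, this is the minimal evidence-gathering sketch it condenses to (interface and chain names assume the defaults used throughout this article):

cr0x@server:~$ # condensed triage: sockets, published ports, forward path, NAT
cr0x@server:~$ sudo ss -lntp
cr0x@server:~$ sudo docker ps --format "table {{.Names}}\t{{.Ports}}"
cr0x@server:~$ sudo iptables -S FORWARD
cr0x@server:~$ sudo iptables -S DOCKER-USER
cr0x@server:~$ sudo iptables -t nat -S DOCKER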

Practical tasks: commands, outputs, and what decisions to make

These are the tasks I actually run during triage and hardening. Each one has: a command, realistic output, what it means, and the decision you make.

Task 1: Confirm UFW status and baseline policy

cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere

What it means: UFW is enabled and incoming traffic is denied by default, but the routed column reads “disabled”: UFW is not applying a policy to forwarded traffic here. Docker-published traffic commonly rides FORWARD, not INPUT.

Decision: Don’t assume “deny incoming” covers containers. You must check forwarding behavior and Docker rules.

Task 2: Check which ports are listening on the host

cr0x@server:~$ sudo ss -lntp
State  Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
LISTEN 0      4096   0.0.0.0:22         0.0.0.0:*     users:(("sshd",pid=1186,fd=3))
LISTEN 0      4096   0.0.0.0:8080       0.0.0.0:*     users:(("docker-proxy",pid=4123,fd=4))
LISTEN 0      4096   127.0.0.1:9090     0.0.0.0:*     users:(("prometheus",pid=2201,fd=7))

What it means: Port 8080 is bound on all interfaces via docker-proxy. That’s a strong hint the exposure is container-related, not a random host daemon.

Decision: Identify which container published 8080 and whether it should be public.

Task 3: List Docker published ports clearly

cr0x@server:~$ sudo docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"
NAMES           IMAGE                 PORTS
webapp          ghcr.io/acme/web:1.7  0.0.0.0:8080->80/tcp
redis           redis:7               6379/tcp
metrics-gw      prom/pushgateway      0.0.0.0:9091->9091/tcp

What it means: webapp and metrics-gw are published publicly. Redis is not (container-only port exposure, not host).

Decision: If these services should be private, fix the bindings and/or enforce firewall policy.

Task 4: Inspect a container’s network settings

cr0x@server:~$ sudo docker inspect webapp --format '{{json .NetworkSettings.Ports}}'
{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"8080"}]}

What it means: Explicitly published to all interfaces. That’s the “surprise open port” in one JSON line.

Decision: Either bind to a specific IP (like 127.0.0.1) or enforce DOCKER-USER policy for exposure control.

Task 5: Identify the host’s public interface and IPs

cr0x@server:~$ ip -br addr
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens3             UP             203.0.113.10/24 fe80::5054:ff:fe12:3456/64
docker0          DOWN           172.17.0.1/16

What it means: Public IP is on ens3. Docker bridge is docker0. Knowing interfaces matters for targeted rules.

Decision: If you only want local access, bind ports to 127.0.0.1 or a private interface, not 0.0.0.0.

Task 6: Check UFW’s routed/forward policy and kernel forwarding

cr0x@server:~$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

What it means: Forwarding is enabled (Docker typically enables it). UFW’s “routed disabled” doesn’t stop the kernel from forwarding if rules allow it.

Decision: Treat FORWARD filtering as mandatory on container hosts.

Task 7: Look at iptables FORWARD chain order (where the truth lives)

cr0x@server:~$ sudo iptables -S FORWARD
-P FORWARD DROP
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT

What it means: Docker inserts DOCKER-USER at the top. That’s your control lever. If you don’t use it, Docker’s own ACCEPT rules will decide.

Decision: Put restrictive policy into DOCKER-USER, not random UFW INPUT rules.

Task 8: See the DOCKER-USER chain (often empty by default)

cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -j RETURN

What it means: No restrictions exist; everything passes to Docker’s rules after RETURN.

Decision: Add explicit allow/deny here to control exposure of published ports.

Task 9: Inspect NAT rules that perform the port forwarding

cr0x@server:~$ sudo iptables -t nat -S DOCKER | sed -n '1,8p'
-N DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 9091 -j DNAT --to-destination 172.17.0.3:9091

What it means: Traffic to host port 8080 is DNAT’d to the container IP. This is why “closing the port” must happen in the forwarding path too.

Decision: Don’t fight NAT with INPUT drops. Control forwarding with DOCKER-USER (or change the publish binding).

Task 10: Verify from outside (because local checks lie)

cr0x@server:~$ nc -vz 203.0.113.10 8080
Connection to 203.0.113.10 8080 port [tcp/http-alt] succeeded!

What it means: The port is reachable from the network perspective. If you run this from the same host, you can get false confidence via loopback routing.

Decision: Treat external validation as required, not optional.

Task 11: Apply a “default deny” policy for container forwarding in DOCKER-USER

cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -o docker0 -j DROP
cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -i ens3 -o docker0 -j DROP
-A DOCKER-USER -j RETURN

What it means: Traffic coming from the public interface and forwarding into docker0 is dropped before Docker’s allow rules run.

Decision: If you want “nothing published is reachable from the internet unless explicitly allowed,” this is the right default posture.

Task 12: Add a specific allow exception (only what you mean to expose)

cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -o docker0 -p tcp --dport 80 -j ACCEPT
cr0x@server:~$ sudo iptables -L DOCKER-USER -n --line-numbers
Chain DOCKER-USER (1 references)
num  target   prot opt source      destination
1    ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:80
2    DROP     all  --  0.0.0.0/0   0.0.0.0/0
3    RETURN   all  --  0.0.0.0/0   0.0.0.0/0

What it means: You’re allowing forwarded traffic to destination port 80 (container-side port after DNAT) while dropping everything else from ens3 to docker0.

Decision: Maintain this allowlist intentionally. If you can’t explain each allowed port in one sentence, it shouldn’t be allowed.

Task 13: Make UFW play nicer with forwarding (only if you insist on UFW-centric policy)

cr0x@server:~$ sudo grep -n '^DEFAULT_FORWARD_POLICY' /etc/default/ufw
19:DEFAULT_FORWARD_POLICY="DROP"

What it means: UFW is set to drop forwarded traffic by default (good), but Docker may have rules that still allow specific forwards.

Decision: Keep it DROP. If it’s ACCEPT, change it back unless you enjoy surprise exposures.

Task 14: Check whether UFW is managing the DOCKER-USER chain (usually it isn’t)

cr0x@server:~$ sudo iptables -S | grep -E 'ufw|DOCKER-USER' | sed -n '1,12p'
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-N ufw-before-forward
-N ufw-user-forward

What it means: UFW and Docker coexist, but UFW isn’t inserting policy into DOCKER-USER by default. That’s why “ufw deny 8080” didn’t help.

Decision: Decide who owns container-forward policy. My vote: DOCKER-USER, managed via config management, not hand edits.

Task 15: Persist iptables changes across reboot (because reboots happen at 3am)

cr0x@server:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Reading package lists... Done
cr0x@server:~$ sudo apt-get install -y iptables-persistent
Setting up iptables-persistent (1.0.20) ...
Saving current rules to /etc/iptables/rules.v4...
Saving current rules to /etc/iptables/rules.v6...

What it means: Your current iptables rules were saved to disk and will be restored on boot.

Decision: If you rely on DOCKER-USER rules, persist them. Otherwise, the “fix” evaporates after maintenance.
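
If you edit DOCKER-USER rules later, re-save so the on-disk copy matches what is running; with iptables-persistent installed that is one command (the plugin lines in the output depend on what is installed):

cr0x@server:~$ sudo netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save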

Task 16: Re-test exposure after applying DOCKER-USER policy

cr0x@server:~$ nc -vz 203.0.113.10 8080
nc: connect to 203.0.113.10 port 8080 (tcp) failed: Connection timed out

What it means: The port is no longer reachable from outside (timeout is typical for a drop). This is the outcome you wanted.

Decision: Verify required public services still work, then document the policy so the next deploy doesn’t “fix” it back.

Fix strategies that close the hole without breaking containers

You have three sane strategies. Pick one deliberately. Mixing them ad hoc is how you end up with rules that only work when the moon is in the right phase.

Strategy A (recommended): Use DOCKER-USER as the enforcement point

Docker promises not to manage your DOCKER-USER rules. That’s the whole point of the chain. If you want “Docker can do Docker things, but security policy stays mine,” DOCKER-USER is where you assert it.

Model: default drop for traffic from public interface(s) into docker0; add allow rules for specific destination ports or source CIDRs; leave internal east-west alone.

Pros: Stable, explicit, survives container churn, doesn’t depend on UFW’s idea of forwarding. Works with Compose, Swarm-ish patterns, and normal bridge networking.

Cons: Another place to manage policy. You must persist rules and ensure config management owns them.

Strategy B: Stop publishing to 0.0.0.0 (bind to specific IPs)

If a service is only meant to be accessed locally or through a reverse proxy, don’t publish it broadly.

In Compose, instead of:

cr0x@server:~$ cat compose.yaml | sed -n '1,20p'
services:
  webapp:
    image: ghcr.io/acme/web:1.7
    ports:
      - "8080:80"

Use explicit bindings:

cr0x@server:~$ cat compose.yaml | sed -n '1,20p'
services:
  webapp:
    image: ghcr.io/acme/web:1.7
    ports:
      - "127.0.0.1:8080:80"

Pros: Simple mental model. No port is reachable externally unless you say so. Great when you front containers with Nginx/Traefik/Caddy on the host.

Cons: Easy to regress (“just remove 127.0.0.1 for a test”), and it doesn’t help if you truly need external access but only from specific networks.

Strategy C: Make UFW handle routed traffic explicitly (advanced, brittle)

You can force more policy into UFW’s forward chains and rely on UFW route rules. This can work, but you’re fighting the fact that Docker has its own worldview and updates rules when containers start/stop.

If you go this route, you must:

  • Keep UFW forward policy DROP
  • Use ufw route rules intentionally (see the example after this list)
  • Audit Docker rule insertion after every Docker upgrade
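
If you do commit to this strategy, a scoped route rule looks something like the following; the source CIDR and port are assumptions, substitute your own:

cr0x@server:~$ sudo ufw route allow proto tcp from 203.0.113.0/24 to any port 80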

Personally, I prefer DOCKER-USER because it’s designed for exactly this. UFW is great for “host services” and broad policy; it’s not great as the single source of truth for container forwarding unless you love debugging on weekends.

Short joke #2: NAT is the networking equivalent of a shared spreadsheet—everyone relies on it, nobody trusts it, and it’s always doing something you didn’t authorize.

Three corporate mini-stories from the trenches

Mini-story #1: The incident caused by a wrong assumption

The company had a standard hardening checklist: enable UFW, allow SSH from the VPN, deny everything else. Teams were encouraged to use containers for internal tools, mostly because it made upgrades easier. A platform engineer baked UFW into the base image and called it “guardrails.”

A product team deployed a new internal dashboard container and published it on -p 8080:80 so they could check it quickly. They assumed “UFW will block it from the internet.” It didn’t. The service was reachable from anywhere, and within a day someone outside the company was poking at it.

The post-incident review was uncomfortable because nobody did anything outrageous. The engineer wasn’t negligent; they simply projected a host-firewall mental model onto container forwarding. The SRE on call reproduced it immediately: UFW denied 8080/tcp on INPUT, but the traffic never needed INPUT after DNAT.

The fix was two lines in DOCKER-USER plus a rule in their Compose conventions: “no published ports without an explicit IP binding.” The cultural fix was better: they updated the checklist to include an external scan and a rule-order audit whenever Docker was installed.

Mini-story #2: The optimization that backfired

Another org had performance pain on a busy edge host. Someone decided to “simplify networking” by aggressively disabling components they thought were redundant. They reduced firewall logging, removed some chains they didn’t recognize, and tried to make UFW the only tool in play.

It worked—until a Docker upgrade reintroduced its chains and reordered parts of the FORWARD path. Suddenly, a service that was meant to be reachable only from an internal subnet became reachable from broader networks. The operator swore nothing changed “in the firewall,” which was technically true: UFW config hadn’t changed. Docker’s rules had.

The actual backfire wasn’t just exposure. Debugging got harder because their “optimization” removed the breadcrumbs: counters were reset, logs were quieter, and there was no established place to put policy that Docker wouldn’t overwrite. They had to re-learn rule flow under pressure.

They recovered by doing the boring thing they originally tried to avoid: define a single host firewall policy document, enforce it via DOCKER-USER, and keep UFW for host-local services. Performance impact was negligible compared to the time they lost in incident response.

Mini-story #3: The boring but correct practice that saved the day

A financial-services team ran container hosts with a strict change process. It wasn’t glamorous. Every host had a small “network invariants” script that ran in CI and again on the host after deploy: list published ports, diff iptables chains, run an external connectivity test from a controlled scanner subnet.

One Friday afternoon, a developer updated a Compose file and accidentally changed a port mapping from 127.0.0.1:9000:9000 to 9000:9000. On their laptop it made life easier. In production it would have exposed an admin console.

The invariants test failed in CI because their scanner could reach port 9000 on the staging host. The pipeline stopped the deploy. Nobody had to be a hero, and nobody had to pretend they “would have caught it in review.” The script caught it because it was designed to catch exactly that class of mistake.

They fixed the Compose binding, reran the test, and shipped. It was not exciting. That’s the point.

Common mistakes: symptoms → root cause → fix

1) “UFW denies 8080 but it’s still reachable”

Symptom: ufw status shows no allow rule, yet external scans connect.

Root cause: Traffic is being DNAT’d and forwarded to a container; UFW INPUT rules don’t stop FORWARD traffic that Docker has allowed.

Fix: Enforce policy in DOCKER-USER (drop public-to-docker0) and then allow only required ports; or bind ports to 127.0.0.1.

2) “I dropped traffic in DOCKER-USER and now everything broke”

Symptom: Containers can’t reach the internet, or internal services can’t talk.

Root cause: Overly broad DOCKER-USER rule dropped all forwarding, not just public ingress to docker0. Common mistake: dropping without interface scoping.

Fix: Scope rules: -i ens3 -o docker0 for ingress to containers, and leave -i docker0 -o ens3 (egress) alone unless you really mean it.

3) “After reboot, ports are open again”

Symptom: You fixed it yesterday; today it’s back.

Root cause: DOCKER-USER rules were added interactively and not persisted; Docker then recreated its own rules at startup.

Fix: Persist rules with iptables-persistent or manage them via systemd unit/config management that runs after Docker starts.
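
A minimal sketch of the systemd approach, assuming you keep your rules in a script at /usr/local/sbin/docker-user-rules.sh (a placeholder path, not a convention):

cr0x@server:~$ cat /etc/systemd/system/docker-user-policy.service
[Unit]
Description=Apply DOCKER-USER firewall policy
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/docker-user-rules.sh

[Install]
WantedBy=multi-user.target

Enable it once with systemctl enable --now docker-user-policy.service and it re-applies your policy on every boot, ordered after Docker has set up its own chains.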

4) “Only some published ports are blocked; others sneak through”

Symptom: Port 8080 is blocked, but 9091 is still reachable.

Root cause: Your allowlist/denylist is based on host ports, but your DOCKER-USER match is on post-DNAT destination port (container port). Or vice versa.

Fix: Decide what you match on. In DOCKER-USER, --dport sees the post-DNAT destination, i.e. the container port, because DNAT happens in PREROUTING before the packet reaches FORWARD. Validate with counters and test each port.
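
If you would rather keep thinking in host-port terms, the conntrack match can reference the original (pre-DNAT) destination port; a hedged example using the same interfaces as the earlier tasks:

cr0x@server:~$ # match on the original host port (8080) instead of the post-DNAT container port
cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -o docker0 -p tcp -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL -j ACCEPT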

5) “UFW route allow didn’t do anything”

Symptom: You add a UFW route rule, but connectivity doesn’t change.

Root cause: Rule order: Docker’s FORWARD accept rules can still allow traffic before UFW’s route chains, depending on how your rules are inserted and which chains are hit.

Fix: Prefer DOCKER-USER for container ingress control. If you must use UFW routing, confirm chain ordering with iptables -S FORWARD and counters.

6) “It’s closed from the outside, but monitoring inside says it’s open”

Symptom: Internal health checks succeed; external checks fail; someone calls it a false positive.

Root cause: Different path: internal checks may come from a private interface, a VPN, or loopback, not the public interface you’re filtering.

Fix: Validate from the same network perspective as the threat model (internet/VPC boundary). Write rules that explicitly allow internal/VPN sources and drop public sources.

Checklists / step-by-step plan

Step-by-step hardening plan (do this on every Docker host)

  1. Inventory exposure: list published ports and who owns them (docker ps, ss).
  2. Decide policy: which services are public, which are private, and which must be VPN-only.
  3. Set a default stance: default drop of public interface → docker bridge traffic in DOCKER-USER.
  4. Add explicit allows: only for services meant to be reachable publicly (or from specific CIDRs).
  5. Bind private services: change Compose/Docker run to publish to 127.0.0.1 or a private IP.
  6. Persist rules: ensure DOCKER-USER rules survive reboot and Docker restarts.
  7. Re-test externally: run a port check from outside your host network.
  8. Write invariants: add CI checks that fail if a Compose file publishes to 0.0.0.0 without approval.
  9. Operationalize audits: periodic scan + diff of firewall rules and published ports.

Minimal “secure-by-default” DOCKER-USER recipe

This is the baseline I like for internet-facing hosts with Docker bridge networking. Adjust interface names and ports.

cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -i ens3 -o docker0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
cr0x@server:~$ sudo iptables -I DOCKER-USER 2 -i ens3 -o docker0 -p tcp --dport 443 -j ACCEPT
cr0x@server:~$ sudo iptables -I DOCKER-USER 3 -i ens3 -o docker0 -p tcp --dport 80 -j ACCEPT
cr0x@server:~$ sudo iptables -I DOCKER-USER 4 -i ens3 -o docker0 -j DROP
cr0x@server:~$ sudo iptables -L DOCKER-USER -n --line-numbers
Chain DOCKER-USER (1 references)
num  target     prot opt source      destination
1    ACCEPT     all  --  0.0.0.0/0   0.0.0.0/0   ctstate RELATED,ESTABLISHED
2    ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:443
3    ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:80
4    DROP       all  --  0.0.0.0/0   0.0.0.0/0
5    RETURN     all  --  0.0.0.0/0   0.0.0.0/0

What it does: Allows established flows and only forwards new public ingress to containers on 80/443. Everything else from the public interface into docker0 is dropped.

What it does not do: It doesn’t secure your containers internally, it doesn’t replace TLS, and it doesn’t fix application auth. It just stops accidental exposure from becoming policy.

FAQ

1) Why does Docker “bypass” UFW?

It’s not bypassing so much as using a different packet path. Published container ports typically use NAT and the FORWARD chain; UFW rules you set often target INPUT. Different chains, different outcomes.

2) Is this specific to Ubuntu 24.04?

No, but Ubuntu 24.04’s nftables backend (exposed through iptables-nft) and modern defaults make the interaction easier to misunderstand. The core behavior exists anywhere Docker manages iptables rules.

3) Should I disable Docker’s iptables management?

Usually no. Disabling Docker’s iptables integration can break networking and port publishing unless you fully replace its rules yourself. If you’re asking, you probably don’t want that maintenance burden.
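
For recognition purposes only (so you can spot it in a review, not a recommendation), the switch lives in /etc/docker/daemon.json:

cr0x@server:~$ cat /etc/docker/daemon.json
{
  "iptables": false
}

With that set, Docker stops programming NAT and forward rules entirely; published ports typically stop working until you write equivalent rules yourself, which is exactly the maintenance burden mentioned above.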

4) What’s the safest quick fix during an incident?

Add a scoped DROP in DOCKER-USER for public interface → docker0, then add explicit ACCEPTs for any ports you must keep public. Re-test from outside immediately.

5) If I bind 127.0.0.1:8080:80, is that enough?

It’s a strong control for “host-only access,” especially when a reverse proxy terminates external traffic. But it doesn’t help if you need the service reachable from a private subnet/VPN without a proxy—then you’ll want DOCKER-USER allow rules by source CIDR.
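
A sketch of that kind of rule, assuming a VPN subnet of 10.8.0.0/24 (adjust the subnet, the port, and its position relative to your DROP rule):

cr0x@server:~$ # allow only the VPN subnet to reach the container-side port 80; keep it above the public DROP
cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -s 10.8.0.0/24 -o docker0 -p tcp --dport 80 -j ACCEPT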

6) Does this affect IPv6 too?

Yes, and people forget it. If IPv6 is enabled and Docker publishes on IPv6, you need equivalent policy in ip6tables/nft for v6. Don’t assume v4 rules cover v6 exposure.
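
A sketch of the v6 counterpart, assuming Docker’s IPv6 support is enabled and your host has an ip6tables DOCKER-USER chain:

cr0x@server:~$ # mirror the v4 posture: drop public IPv6 ingress into the docker bridge by default
cr0x@server:~$ sudo ip6tables -I DOCKER-USER 1 -i ens3 -o docker0 -j DROP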

7) Why not just rely on cloud security groups?

Security groups are great, but they’re not always present (on-prem), and they don’t protect you from internal lateral movement the same way. Also: operators routinely copy workloads between environments. Host policy should be correct on its own.

8) How do I avoid regressions when teams change Compose files?

Enforce conventions: no naked "8080:80" for internal services; require explicit IP binding or a review label. Add CI that parses Compose and fails when a port is published to 0.0.0.0 unexpectedly.
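
A minimal CI sketch along those lines, assuming Compose files live under deploy/ and that any bare host:container pair should fail the build:

cr0x@server:~$ cat ci/check-published-ports.sh
#!/usr/bin/env bash
# Fail if any Compose file publishes a port without an explicit host IP binding.
set -euo pipefail
if grep -RInE '^[[:space:]]*-[[:space:]]*"?[0-9]+:[0-9]+(/(tcp|udp))?"?[[:space:]]*$' deploy/; then
  echo "ERROR: published port without an explicit host IP; bind to 127.0.0.1 or get an approval label" >&2
  exit 1
fi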

9) Will DOCKER-USER rules break container-to-container traffic?

Not if you scope them properly. Focus on ingress from the public interface into docker0. Leave traffic originating from docker0 alone unless you have a specific egress control requirement.

Conclusion: practical next steps

If you run Docker on Ubuntu 24.04 and you assume UFW alone controls exposure, you’ve built a trap for your future self. The fix is not dramatic: understand that published container ports live in the forwarding/NAT path, then enforce policy where Docker gives you a stable hook—DOCKER-USER.

Next steps that won’t waste your time:

  1. Run the inventory tasks: ss, docker ps, and an external connectivity test.
  2. Add a default drop for public interface → docker0 in DOCKER-USER, then allow only what should be public.
  3. Change internal services to bind published ports to 127.0.0.1 (or a private IP) so accidents don’t become exposures.
  4. Persist your rules and add a regression test in CI. Boring. Correct. Effective.

You’re not trying to “win” against Docker. You’re trying to make your intent unambiguous to the packet filter. That’s the whole game.
