Ubuntu 24.04: UFW + Docker — lock down containers without breaking Compose (case #40)

You turned on UFW. You denied everything inbound. You felt responsible. Then you ran docker compose up
and—surprise—your container is reachable from the internet anyway. Not “maybe”. Not “only from the LAN”.
Reachable. From. Everywhere.

This is one of those Linux networking realities that keeps happening because the defaults are optimized for “it works”
rather than “it’s safe”. The good news: you can lock it down on Ubuntu 24.04 without breaking Docker Compose,
without replacing UFW, and without turning your host into a bespoke firewall science project.

The mental model: why UFW and Docker fight

UFW is a front-end. It writes rules to the kernel firewall (on Ubuntu 24.04 that’s typically nftables underneath,
but many tools still speak “iptables semantics”). Docker is also a front-end. It writes firewall rules to make
container networking “just work”: NAT for outbound, port publishing for inbound, and isolation between bridges.

The fight happens because Docker inserts rules in places UFW doesn’t control, and at priorities that beat your
high-level “deny incoming” posture. When you publish a port (-p 8080:80 or Compose ports:),
Docker programs DNAT and filter rules so packets get forwarded to the container. Those packets may never hit the
rule you thought would block them. UFW says “deny”; Docker says “I promised that port would work”; Docker wins.

The practical takeaway: you don’t “fix” this by adding more UFW allow and deny statements. You fix it by
controlling the specific path Docker uses for forwarded traffic. That means: understand the FORWARD path,
understand Docker’s chains (especially DOCKER-USER), and decide what should be reachable from where.

One dry truth: the firewall is not “inbound vs outbound”. It’s traffic direction plus routing decisions. Containers
are not local processes; they sit behind a virtual router. So your “incoming” policy isn’t necessarily applied
to traffic being forwarded to them.

One quote to keep you honest, from the reliability world: “Hope is not a strategy.” (a traditional SRE saying, attributed to more people than could have said it first).

Joke #1: Docker networking is like a hotel key card: it’s convenient until you realize it also opens the “staff only”
door.

Facts and historical context you can use in arguments

  • UFW dates back to the iptables era and still thinks in those terms, even when nftables is the backend.
  • Docker historically relied on iptables to implement NAT and published ports; that design assumption persists across distros.
  • Linux netfilter has multiple hooks (PREROUTING, INPUT, FORWARD, OUTPUT, POSTROUTING). UFW “deny incoming” mostly targets INPUT, not FORWARD.
  • Docker publishes ports with DNAT; packets aimed at the host’s IP can be rewritten to a container IP before UFW’s INPUT chain is even relevant.
  • The DOCKER-USER chain exists specifically so operators can insert filtering rules that apply before Docker’s own accept rules.
  • UFW default policies changed over the years, but the classic pattern remains: default deny on INPUT, allow established/related, manage specific allows.
  • Ubuntu moved to nftables as the preferred backend; “iptables” commands may actually be thin wrappers (iptables-nft) generating nft rules.
  • Docker’s “iptables=false” is not a free lunch; disabling Docker’s rule management breaks common networking behaviors unless you replace them yourself.
  • Container-to-container traffic on the same bridge is not “outbound”; it’s local L2/L3 inside the host and can bypass naive host firewall intent unless you filter it.

Fast diagnosis playbook

You want the fastest route from “port is exposed” to “I know exactly which chain allowed it”. Here’s the order that
finds the bottleneck quickly.

1) Confirm what is actually exposed (don’t trust Compose files)

  • Check published ports from Docker’s view (docker ps).
  • Check listening sockets on the host (ss -ltnp).
  • Probe from a remote system, or from a second NIC or a separate network namespace if you have one (quick check below).
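
A quick reality check from a machine that should not have access (hostname and port are placeholders):

# From an untrusted network (replace host/port with yours):
nc -vz -w 3 app.example.internal 8080                                              # raw TCP connect test
curl -m 5 -s -o /dev/null -w '%{http_code}\n' http://app.example.internal:8080/   # HTTP-level check

If either succeeds from an untrusted network, the port is effectively public, whatever ufw status says.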

2) Identify the packet path

  • If it’s a published port, it likely hits DNAT in nat PREROUTING then filter FORWARD.
  • If it’s host networking (network_mode: host), it hits filter INPUT like any daemon.

3) Inspect the rule that wins

  • Check DOCKER-USER first (your control point).
  • Then check Docker’s own chains (DOCKER, DOCKER-FORWARD).
  • Then check UFW’s forwarding policy and whether UFW even sees the packet.

4) Fix with the smallest change that makes the security posture true

  • If you only need “reachable from LAN”, filter by source subnet in DOCKER-USER.
  • If you only need “reachable from reverse proxy container”, stop publishing the app port and use internal networks.
  • If you need “reachable only on localhost”, bind published ports to 127.0.0.1.

Target state: what “locked down” actually means

“Locked down containers” is vague. In practice, pick one of these sane target states and implement it explicitly:

  1. Default: nothing is published. Containers talk over private Compose networks. Only a reverse proxy
    (or a single gateway service) publishes 80/443.
  2. Selective publishing. A handful of ports are published, but only to specific source networks
    (corporate VPN, office IP ranges) and never to the whole internet unless that service is meant to be public.
  3. Localhost-only dev pattern on prod hosts (yes, sometimes you need it): publish to 127.0.0.1 and
    require SSH tunnel, VPN, or on-host access.
  4. Hard isolation between Docker bridges. Inter-container traffic is allowed only where you declare it.

What you should avoid: “default deny inbound” on UFW and then letting Docker publish ports freely. That’s a policy
contradiction with a predictable outcome.

Practical tasks (commands, outputs, decisions)

These are the tasks I actually run on Ubuntu 24.04 when I’m diagnosing or hardening UFW + Docker. Each one includes
the command, what typical output means, and the decision you make.

Task 1: Verify UFW state and default policies

cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

Meaning: “routed” controls forwarding (FORWARD). If it’s disabled, UFW may not be policing forwarded
traffic the way you assume.

Decision: If containers are exposed, you must address forwarding behavior (typically via DOCKER-USER),
not just INPUT rules.

Task 2: Confirm Docker is managing iptables/nft rules

cr0x@server:~$ sudo docker info --format '{{json .SecurityOptions}}'
["name=apparmor","name=seccomp,profile=builtin","name=cgroupns"]

Meaning: This doesn’t directly show iptables, but confirms a normal Docker environment. Next you check
the daemon config.

Decision: Inspect /etc/docker/daemon.json before assuming anything about Docker’s firewall behavior.

Task 3: Check Docker daemon iptables setting

cr0x@server:~$ sudo cat /etc/docker/daemon.json
{
  "iptables": true,
  "ip-forward": true
}

Meaning: Docker will program firewall rules. That’s the common/default situation.

Decision: Keep it. Turning it off is how people accidentally reinvent NAT at 2 a.m.

Task 4: List published ports as Docker sees them

cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES          PORTS
web            0.0.0.0:8080->80/tcp
db             5432/tcp
prometheus     127.0.0.1:9090->9090/tcp

Meaning: 0.0.0.0:8080 is world-reachable (subject to firewall). 127.0.0.1:9090
is local-only.

Decision: If a service doesn’t need public ingress, remove the published port or bind it to localhost.

Task 5: Verify host listening sockets (reality check)

cr0x@server:~$ sudo ss -ltnp | head -n 12
State  Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
LISTEN 0      4096   0.0.0.0:8080       0.0.0.0:*     users:(("docker-proxy",pid=2214,fd=4))
LISTEN 0      4096   127.0.0.1:9090     0.0.0.0:*     users:(("docker-proxy",pid=2311,fd=4))
LISTEN 0      4096   0.0.0.0:22         0.0.0.0:*     users:(("sshd",pid=1042,fd=3))

Meaning: Docker-proxy (or kernel NAT) is accepting connections on the host. If it’s bound to 0.0.0.0,
your firewall must be correct or you’re exposed.

Decision: Treat 0.0.0.0 binds as “public until proven otherwise”.

Task 6: Identify which firewall backend you’re effectively using

cr0x@server:~$ sudo update-alternatives --display iptables | sed -n '1,12p'
iptables - auto mode
  link best version is /usr/sbin/iptables-nft
  link currently points to /usr/sbin/iptables-nft
  link iptables is /usr/sbin/iptables
  slave iptables-restore is /usr/sbin/iptables-restore
  slave iptables-save is /usr/sbin/iptables-save

Meaning: You’re using iptables-nft compatibility, which is fine. The important thing is consistency.

Decision: Avoid mixing raw nft rules that fight iptables-nft unless you fully own the ruleset.

Task 7: Inspect the DOCKER-USER chain (your policy hook)

cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -j RETURN

Meaning: No restrictions are enforced before Docker’s accept rules.

Decision: This is where you add “deny by default, allow what’s needed” for published ports.

Task 8: Inspect FORWARD policy and Docker forwarding chains

cr0x@server:~$ sudo iptables -S FORWARD
-P FORWARD DROP
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-FORWARD

Meaning: Default DROP is good, but the jump to DOCKER-FORWARD means Docker can still allow specific flows.

Decision: Enforce your constraints in DOCKER-USER, before DOCKER-FORWARD accepts traffic.

Task 9: List Docker-created NAT rules for published ports

cr0x@server:~$ sudo iptables -t nat -S DOCKER | head -n 20
-N DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.18.0.3:80
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 9090 -j DNAT --to-destination 172.18.0.5:9090

Meaning: DNAT is rewriting traffic destined to host port 8080 into the container. This is why INPUT rules
aren’t the whole story.

Decision: If you want to block public access, block it in filter FORWARD/DOCKER-USER based on source IP.

Task 10: Add a “default deny for Docker published ports” baseline in DOCKER-USER

cr0x@server:~$ sudo iptables -I DOCKER-USER 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
cr0x@server:~$ sudo iptables -I DOCKER-USER 2 -i lo -j ACCEPT
cr0x@server:~$ sudo iptables -I DOCKER-USER 3 -s 10.0.0.0/8 -j ACCEPT
cr0x@server:~$ sudo iptables -I DOCKER-USER 4 -s 192.168.0.0/16 -j ACCEPT
cr0x@server:~$ sudo iptables -A DOCKER-USER -j DROP
cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -i lo -j ACCEPT
-A DOCKER-USER -s 10.0.0.0/8 -j ACCEPT
-A DOCKER-USER -s 192.168.0.0/16 -j ACCEPT
-A DOCKER-USER -j RETURN
-A DOCKER-USER -j DROP

Meaning: The example output shows a common gotcha: Docker (or prior rules) may already include RETURN.
If RETURN appears before DROP, your drop won’t run.

Decision: Ensure the DROP sits before any unconditional RETURN, or rewrite the chain cleanly (Task 11).
In practice, you want: allow established/related, allow your Docker bridge ranges and trusted sources, then drop (a trailing RETURN after the drop is harmless, just unreachable).

Task 11: Cleanly rewrite DOCKER-USER to avoid rule order surprises

cr0x@server:~$ sudo iptables -F DOCKER-USER
cr0x@server:~$ sudo iptables -A DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
cr0x@server:~$ sudo iptables -A DOCKER-USER -s 172.16.0.0/12 -j ACCEPT
cr0x@server:~$ sudo iptables -A DOCKER-USER -s 10.10.0.0/16 -j ACCEPT
cr0x@server:~$ sudo iptables -A DOCKER-USER -s 192.168.50.0/24 -j ACCEPT
cr0x@server:~$ sudo iptables -A DOCKER-USER -j DROP
cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -s 172.16.0.0/12 -j ACCEPT
-A DOCKER-USER -s 10.10.0.0/16 -j ACCEPT
-A DOCKER-USER -s 192.168.50.0/24 -j ACCEPT
-A DOCKER-USER -j DROP

Meaning: Deterministic: if a forwarded packet isn’t part of an established flow, isn’t from the Docker bridge ranges (172.16.0.0/12 here; adjust to the address pools your Docker networks actually use, so containers can still talk to each other and open outbound connections), and isn’t from a trusted subnet, it’s dropped before Docker accepts anything.

Decision: Use this as the baseline on hosts that should not publish arbitrary services to the public internet.

Task 12: Verify UFW isn’t silently allowing routed traffic

cr0x@server:~$ sudo grep -nE 'DEFAULT_FORWARD_POLICY|IPV6' /etc/default/ufw
7:DEFAULT_FORWARD_POLICY="DROP"
18:IPV6=yes

Meaning: Forward default is DROP. Good. IPv6 is enabled; if you ignore it, you can “secure” IPv4 and still leak on IPv6.

Decision: If you run IPv6 (you probably do, even if you pretend you don’t), duplicate policy for ip6tables/nft.

Task 13: Check UFW’s “before” rules for Docker interaction

cr0x@server:~$ sudo sed -n '1,40p' /etc/ufw/before.rules
#
# rules.before
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
#   ufw-before-input
#   ufw-before-output
#   ufw-before-forward
#

Meaning: UFW expects you to add forward-path policies in ufw-before-forward if needed.

Decision: Prefer DOCKER-USER for Docker-specific constraints. Use UFW before-forward for broader routing policy.

Task 14: Confirm Docker networks and bridge interfaces

cr0x@server:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
a1b2c3d4e5f6   bridge            bridge    local
b2c3d4e5f6a1   myapp_default     bridge    local
c3d4e5f6a1b2   host              host      local
d4e5f6a1b2c3   none              null      local

Meaning: Each user-defined bridge (like myapp_default) may have its own interface and isolation behavior.

Decision: If you’re trying to restrict east-west traffic, you need to think per bridge, not just docker0.

Task 15: Map a published port to a specific container IP (for precise rules)

cr0x@server:~$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web
172.18.0.3

Meaning: You can write a DOCKER-USER rule that only permits traffic to this container/port (useful for exceptions).

Decision: Prefer subnet-based policies; per-container IPs change. Use static IPs only when you truly need them.
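
If you genuinely need a per-container exception, a sketch looks like this (container IP, port, and source range are illustrative; remember that DOCKER-USER sees the packet after DNAT, so you match the container IP and the container-side port):

# Allow only a trusted range to reach this container's port 80 (illustrative addresses); must sit above the DROP.
sudo iptables -I DOCKER-USER 1 -p tcp -d 172.18.0.3 --dport 80 -s 10.10.0.0/16 -j ACCEPT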

Task 16: Test from an untrusted source and watch counters

cr0x@server:~$ sudo iptables -L DOCKER-USER -v -n
Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
   40  2400 ACCEPT     all  --  *      *       10.10.0.0/16         0.0.0.0/0
   12   720 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Meaning: Counters increment. Your rules are actually seeing traffic. That’s the best feeling you can have with firewalls.

Decision: If counters don’t move, you’re filtering the wrong chain/hook (common when host networking is used).

Task 17: Persist rules across reboot (don’t rely on memory)

cr0x@server:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Reading package lists... Done
cr0x@server:~$ sudo apt-get install -y iptables-persistent
Reading package lists... Done
Building dependency tree... Done
Suggested packages:
  firewalld
The following NEW packages will be installed:
  iptables-persistent netfilter-persistent
Setting up iptables-persistent ...
Saving current rules to /etc/iptables/rules.v4...
Saving current rules to /etc/iptables/rules.v6...

Meaning: Your current v4/v6 rules are saved and restored on boot.

Decision: If you use DOCKER-USER rules, persist them. Otherwise the next reboot “fixes” your firewall back to insecure.
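
After any later change to DOCKER-USER, save again. One way to do it, using the tooling this package installs:

sudo netfilter-persistent save
# or dump the tables yourself into the files netfilter-persistent restores at boot:
sudo iptables-save  | sudo tee /etc/iptables/rules.v4 >/dev/null
sudo ip6tables-save | sudo tee /etc/iptables/rules.v6 >/dev/null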

Task 18: Validate IPv6 exposure (the quiet footgun)

cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Ports}}' | sed -n '1,5p'
NAMES          PORTS
web            :::8080->80/tcp
prometheus     127.0.0.1:9090->9090/tcp

Meaning: :::8080 is IPv6-any. If you lock down only IPv4, you’re still reachable over IPv6.

Decision: Mirror DOCKER-USER policy in ip6tables (or ensure nft rules cover both families).
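
A minimal v6 mirror of the Task 11 baseline, assuming Docker is managing ip6tables (it does when IPv6 is enabled for its networks, which is also when a DOCKER-USER chain exists in the v6 tables); the prefixes are placeholders:

sudo ip6tables -F DOCKER-USER
sudo ip6tables -A DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo ip6tables -A DOCKER-USER -s fd00:aaaa:bbbb::/48 -j ACCEPT   # Docker's v6 address pool (placeholder)
sudo ip6tables -A DOCKER-USER -s fd00:cccc:dddd::/48 -j ACCEPT   # trusted client range (placeholder)
sudo ip6tables -A DOCKER-USER -j DROP

If the chain doesn’t exist, Docker isn’t filtering v6 for you; put the equivalent policy in your v6 FORWARD path instead.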

Joke #2: IPv6 is like that spare key under the doormat—everyone forgets it’s there until the wrong person finds it.

Compose patterns that don’t sabotage your firewall

1) Don’t publish what you don’t need

The cleanest firewall rule is the one you don’t need because the port isn’t exposed in the first place. Compose
makes it easy to publish everything during development and then forget you did it.

Prefer expose: (or no port declaration at all) over ports:. In modern Compose, expose: is essentially documentation; containers on the same network can already reach each other’s ports without it. The key idea stands: internal services should be reachable only by peer containers, never published on the host.
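
A minimal sketch of the idea with made-up names: the database publishes nothing, and the app reaches it by service name over the project network.

services:
  app:
    image: example/app:latest      # placeholder image; fronted by a reverse proxy (pattern 3)
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # illustrative only
    # no ports: here; peers reach it as db:5432, and the host publishes nothing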

2) Bind host ports to localhost for admin UIs

If you need Grafana/Prometheus/admin panels on a server, bind to 127.0.0.1 and access them through SSH tunnel or VPN.
This avoids playing whack-a-mole with firewall exceptions.

In Compose:
ports: ["127.0.0.1:9090:9090"].
That simple. It’s not “security through obscurity”; it’s refusing to route the traffic at all.

3) Use a reverse proxy as the only published edge

The best production pattern: publish 80/443 for a reverse proxy container (or host daemon), and keep everything else
on internal Docker networks. Then your firewall rules can be boring.

Add an “internal: true” network for backends. That tells Docker not to provide external connectivity through that network.
It won’t solve every case, but it nudges you toward a sane topology.
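
One way that topology can look, with placeholder names. Only the proxy touches the edge network; the backend network is marked internal, so Docker gives it no external connectivity:

services:
  proxy:
    image: nginx:1.27              # placeholder; any reverse proxy works
    ports:
      - "80:80"
      - "443:443"
    networks: [edge, backend]
  app:
    image: example/app:latest      # placeholder image
    networks: [backend]            # attach edge too if the app needs outbound internet
  db:
    image: postgres:16
    networks: [backend]

networks:
  edge:
  backend:
    internal: true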

4) Avoid network_mode: host unless you mean it

Host networking bypasses most of Docker’s virtual networking and makes the container behave like a host process.
That changes which firewall hooks apply. It also makes port collisions fun.

Use it for performance-sensitive monitoring agents or specialized networking tools when you have a reason. Otherwise,
it’s a shortcut that becomes a liability during incident response.

5) Declare separate networks for “public edge” and “private backend”

Separate networks give you separation you can reason about. One network attached to the proxy and the app,
another attached only to backends. You can also restrict inter-network routing at the firewall level if needed.

The DOCKER-USER chain: your leverage point

Docker added the DOCKER-USER chain (and documents it) because operators needed a stable insertion point that Docker won’t flush or reorder. FORWARD jumps to it before Docker’s own chains, so it’s the right place to enforce “which sources are allowed to reach published container ports”.

Think of it as the “policy layer” above Docker’s “plumbing layer”. Docker sets up plumbing so packets can get to containers.
You set policy so only the packets you want actually do.

What to put in DOCKER-USER

  • Allow established/related first (return traffic must work).
  • Allow from trusted source subnets (VPN, office, bastion range).
  • Optionally allow specific public services (e.g., 80/443 to the proxy only; see the sketch after this list).
  • Drop everything else that is forwarded to Docker networks.
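
A sketch of that ordering on top of the Task 11 baseline, for a host whose only public services are the proxy’s 80/443. Because DOCKER-USER sees packets after DNAT, the robust way to match the published (pre-DNAT) port is conntrack’s original-destination-port match:

# Public web edge: accept 80/443 from anywhere, matched on the pre-DNAT port.
sudo iptables -I DOCKER-USER 1 -p tcp -m conntrack --ctorigdstport 443 --ctdir ORIGINAL -j ACCEPT
sudo iptables -I DOCKER-USER 1 -p tcp -m conntrack --ctorigdstport 80  --ctdir ORIGINAL -j ACCEPT
# The established/related accept, bridge-range and trusted-subnet accepts,
# and the final DROP come from the Task 11 baseline and stay below these.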

What not to put in DOCKER-USER

  • Per-container IP rules unless you’ve pinned IPs and accepted the operational debt.
  • Rules that assume only docker0 exists (Compose creates multiple bridges).
  • Rules that drop without allowing established connections first (you’ll break outbound return paths).

UFW integration patterns that work on Ubuntu 24.04

There are two broad strategies that don’t end in tears:

  1. Keep UFW for host services; enforce container ingress in DOCKER-USER. This is my default recommendation.
    UFW remains the operator-friendly tool for SSH, node exporters, and so on. DOCKER-USER becomes the “container perimeter”.
  2. Push more policy into UFW’s forward chain (ufw-before-forward and friends). This can work, but you’re now
    debugging interactions between UFW-managed chains and Docker-managed chains. It’s doable; it’s just not the fastest path.
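
For completeness, strategy 2 uses UFW’s route rules, which manage the forward path:

sudo ufw route allow proto tcp from 10.10.0.0/16 to any port 8080   # illustrative subnet/port

Keep in mind that Docker’s own accept rules typically sit earlier in the FORWARD path than UFW’s chains, so for published ports you often end up back at DOCKER-USER anyway, which is why strategy 1 is the default recommendation.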

My opinionated production posture

Keep UFW’s default deny incoming. Allow SSH only from trusted sources (VPN or bastion). Publish only 80/443 publicly.
Everything else either:

  • is not published at all (internal Docker networks), or
  • is published to 127.0.0.1, or
  • is allowed only from private subnets via DOCKER-USER.

IPv6: decide, then enforce

On Ubuntu 24.04, IPv6 is not an edge case. If your server has AAAA, it’s reachable. If Docker publishes on ::,
you need a v6 story. That story can be “disable IPv6 everywhere” (heavy-handed, sometimes correct) or “filter it properly”
(more common in modern environments).

Three corporate mini-stories from the trenches

Incident caused by a wrong assumption: “deny incoming means deny incoming”

A mid-sized SaaS company moved a handful of internal tools onto a fresh Ubuntu 24.04 VM cluster. The platform team
had a standard: UFW enabled, default deny inbound, allow SSH from the VPN range. They used Docker Compose to deploy
an internal dashboard plus a database and a metrics stack. It looked tidy.

The wrong assumption was quiet and classic: they assumed UFW’s “deny incoming” would block anything reachable via the
host IP, including container ports. During a routine external scan (not even a dedicated pentest), someone noticed
the dashboard login page was reachable on a high port. It wasn’t meant to be public. It also wasn’t patched recently.

The first response was “but UFW is active; it can’t be exposed”. The second response was staring at docker ps and
seeing 0.0.0.0:PORT->CONTAINER. The third response was the uncomfortable one: they had built security posture
out of a UI abstraction rather than packet flow reality.

Fixing it was not heroic. They stopped publishing the dashboard port, put it behind the existing reverse proxy,
and enforced a DOCKER-USER allowlist for the few admin services that truly needed direct access from the VPN subnet.
The lesson that stuck: “incoming” and “forwarded” aren’t the same thing, and Docker lives in forwarded land.

Optimization that backfired: disabling Docker iptables management

Another org had a security review that didn’t like “applications modifying firewall rules”. That’s a reasonable instinct.
The team decided to set "iptables": false in Docker’s daemon config and to manage everything via UFW only.
They did this in staging, saw containers start, and called it a win.

The first backfire was subtle: outbound connectivity from containers became flaky in ways that were hard to correlate.
Some images pulled slowly, some webhooks timed out, and DNS intermittently failed depending on which node you landed on.
It wasn’t “down”; it was “weird”. Weird is expensive.

The second backfire was operational: every Compose project now required custom NAT and forwarding rules. Developers
didn’t know what ports were being routed where because it wasn’t expressed in the Compose file anymore. The firewall
rules became tribal knowledge. Changes took longer, and incident response got slower.

They eventually backed out the change. Docker resumed managing iptables/nft plumbing. The security team got what they
actually wanted—policy control—by enforcing restrictions in DOCKER-USER and using a standard template for allowed subnets
and public edge ports. “Don’t let Docker touch iptables” sounded clean. In practice, it replaced a common mechanism with
a bespoke one. That’s not security; that’s debt with a badge.

Boring but correct practice that saved the day: rule persistence and a reboot test

A regulated company ran quarterly maintenance windows where hosts were rebooted, kernels updated, and the usual set of
“this should be fine” changes rolled out. One team had a habit: after any firewall change, they would (1) persist rules,
(2) reboot a canary node, and (3) verify exposure from outside the subnet. Boring. Repetitive. Annoyingly correct.

During one window, they upgraded Docker and refreshed UFW policies. Everything looked fine—until the canary rebooted.
Their external check showed a service port open that should have been VPN-only. The team didn’t panic; they followed
their checklist and noticed the DOCKER-USER chain was present but empty after reboot. The rules hadn’t persisted.

Because they caught it on a canary, the impact was small: they reinstalled persistence tooling, saved v4 and v6 rules,
and validated again. The rest of the fleet followed with the corrected baseline. No incident. No customer email.
No “how did this pass review?” meeting.

The moral is not glamorous: if you don’t test a reboot, you don’t have a configuration. You have a mood.

Common mistakes: symptoms → root cause → fix

1) “UFW is enabled but the container port is still reachable”

Symptom: Remote clients can reach host:8080 even with default deny inbound.

Root cause: Traffic is DNAT’d and forwarded; UFW’s INPUT policy doesn’t apply. Docker’s rules accept it.

Fix: Add allowlist + drop policy in DOCKER-USER, or stop publishing the port.

2) “I added DOCKER-USER DROP but nothing changed”

Symptom: Counters don’t move; port still open.

Root cause: You’re using host networking, or your DROP rule is after an unconditional RETURN, or you’re filtering IPv4 only.

Fix: Check iptables -S DOCKER-USER for rule order; verify network_mode: host; mirror rules in ip6tables.

3) “After reboot, everything is exposed again”

Symptom: Policy works until a restart.

Root cause: DOCKER-USER rules weren’t persisted; only runtime state changed.

Fix: Use iptables-persistent or a systemd unit that restores rules before Docker starts.
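
If you go the systemd route, a minimal sketch (unit name, script path, and ordering are illustrative) that reinstates the policy before Docker comes up:

# /etc/systemd/system/docker-user-policy.service  (illustrative)
[Unit]
Description=Restore DOCKER-USER firewall policy
Wants=network-pre.target
After=network-pre.target
Before=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# The script holds your iptables/ip6tables commands (e.g. the Task 11 baseline);
# have it create the chain first (iptables -N DOCKER-USER || true), since Docker
# has not started yet at this point.
ExecStart=/usr/local/sbin/docker-user-rules.sh

[Install]
WantedBy=multi-user.target

Then: sudo systemctl daemon-reload && sudo systemctl enable --now docker-user-policy.service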

4) “Only some users can connect; others time out”

Symptom: VPN users work, office users don’t (or vice versa).

Root cause: Source-based allowlist doesn’t include all real client subnets; NAT makes source appear different.

Fix: Validate client source IP at the server (tcpdump/conntrack), then expand allowlist deliberately.
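
To see which source address actually arrives, capture the SYNs on the host:

sudo tcpdump -ni eth0 'tcp port 8080 and tcp[tcpflags] & tcp-syn != 0'   # adjust interface/port to yours

If clients show up with a NAT gateway’s address instead of their own, that’s the range your allowlist has to name.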

5) “Inter-container traffic is blocked unexpectedly”

Symptom: App can’t reach DB though both are in the same Compose project.

Root cause: Over-broad DROP in DOCKER-USER without exceptions for internal bridge traffic.

Fix: Allow established/related, and if you drop by default, add explicit allows for internal subnets or interfaces before drop.

6) “IPv4 is locked down, but port scanners still see it open”

Symptom: External scan shows open ports despite IPv4 rules.

Root cause: IPv6 exposure (:::) with missing ip6tables/nft family policy.

Fix: Implement matching IPv6 policy, or disable IPv6 intentionally (host + Docker) and verify.

7) “Docker Compose updates break my firewall”

Symptom: After compose up, rules look different and access changes.

Root cause: Your constraints are in Docker-managed chains rather than DOCKER-USER, or you’re relying on interface names that change.

Fix: Keep policy in DOCKER-USER and use stable match criteria (source subnets, destination ports, conntrack state).

Checklists / step-by-step plan

Plan A (recommended): publish only edge ports, restrict everything else

  1. Inventory exposed ports: run docker ps and ss -ltnp; list anything bound to 0.0.0.0 or :::.
  2. Delete accidental publishing: remove ports: from internal services; use internal networks.
  3. Bind admin tools to localhost: use 127.0.0.1:PORT:PORT in Compose when appropriate.
  4. Decide your trusted source ranges: VPN subnets, office subnets, bastion IPs. Write them down.
  5. Enforce in DOCKER-USER: allow established/related, allow trusted ranges, drop the rest.
  6. Mirror for IPv6: add equivalent ip6tables rules or ensure nft covers both.
  7. Persist rules: install persistence tooling and validate a reboot.
  8. Test externally: validate from an untrusted network and from a trusted network.
  9. Monitor counters: check DOCKER-USER counters during tests (one-liner below); confirm your rules are actually in the path.
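
For steps 8 and 9 together: while probing from an untrusted network (the curl/nc checks from the diagnosis playbook work fine), watch which rule the packets land on:

sudo watch -n 1 'iptables -L DOCKER-USER -vn'

The DROP counter should climb for the untrusted probe and the ACCEPT counters for the trusted one; repeat with ip6tables if you mirrored the policy for IPv6.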

Plan B: UFW-centric forwarding policy (only if you enjoy tracing chains)

  1. Set UFW forward policy to DROP (already common) and ensure routed filtering is enabled as desired.
  2. Add explicit forward allows for Docker bridges and published services in UFW before-forward chains.
  3. Verify Docker isn’t inserting accept rules that bypass your intent (you’ll still end up back at DOCKER-USER).

Plan C: “Nothing ever publishes” (for internal platforms)

  1. Enforce a CI check that rejects Compose files with ports: unless approved (a naive sketch follows this list).
  2. Require ingress via a standardized reverse proxy layer and internal service discovery.
  3. Drop forwarded traffic to Docker bridges from non-trusted sources universally.
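
A naive CI guard for step 1, as a sketch (the approval marker and file glob are conventions you’d define; a real check would parse the YAML rather than grep it):

#!/usr/bin/env bash
set -euo pipefail
# Fail if any compose file publishes ports without an explicit approval marker on the same line.
violations=$(grep -RnE '^[[:space:]]*ports:' --include='*compose*.y*ml' . | grep -v 'approved-publish' || true)
if [ -n "$violations" ]; then
  echo "Unapproved published ports found:"
  echo "$violations"
  exit 1
fi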

FAQ

1) Why doesn’t UFW block Docker published ports by default?

Because published container traffic is typically forwarded after NAT, and UFW’s “deny incoming” posture mostly governs
INPUT. Docker installs forwarding/NAT rules to guarantee port publishing works.

2) Should I disable Docker’s iptables management?

No, not as a first-line security move. It replaces a standard, well-understood mechanism with manual NAT and forwarding
rules you now own forever. Use DOCKER-USER to enforce policy while Docker keeps plumbing working.

3) What’s the single best fix that doesn’t break Compose?

Add allowlist + drop rules in DOCKER-USER, then remove unnecessary ports: from Compose. This preserves Docker’s
networking while preventing “surprise internet exposure”.

4) Will DOCKER-USER rules survive Docker restarts?

They typically survive Docker daemon restarts because the chain is meant for user policy, but they won’t necessarily
survive host reboot unless you persist them. Persist rules explicitly.

5) How do I restrict a published port to LAN only?

Allow the LAN subnet(s) in DOCKER-USER and drop everything else. Alternatively, bind the published port to a specific
interface IP that only exists on the LAN, but source-based filtering is usually clearer.

6) What about containers that must be public (like a web app)?

Publish only the reverse proxy (80/443) publicly. Keep app containers un-published on internal networks. If you must
publish the app directly, allow only those ports from 0.0.0.0/0 and keep everything else dropped.

7) Does this change anything for host-network containers?

Yes. Host-network containers behave like host processes; traffic hits INPUT, not FORWARD. For those, UFW rules are the
right control point, not DOCKER-USER.

8) How do I know if IPv6 is exposing my containers?

Look for ::: in docker ps port listings or in ss -ltnp. Then test from an IPv6-capable external host and
confirm your ip6tables/nft policy matches IPv4 intent.

9) Can I do all this purely in nftables?

You can, but if Docker is using iptables-nft compatibility, you’ll want to avoid conflicting rule management.
The pragmatic approach on Ubuntu is: keep Docker’s iptables-nft behavior and enforce policy in DOCKER-USER (and v6 equivalent).

10) What’s the cleanest way to avoid per-container firewall rules?

Don’t publish ports for internal services. Use internal Docker networks plus one edge proxy. Then your firewall is mostly
“allow 80/443, allow SSH from VPN, drop the rest”, with DOCKER-USER enforcing that containers don’t bypass it.

Conclusion: next steps that stick

The reliable way to secure Docker on Ubuntu 24.04 is not to fight Docker’s networking. Let Docker do plumbing. You do
policy. Put your policy where it actually matters: in the forwarded path, before Docker’s accepts, via DOCKER-USER.

If you want a practical next hour:

  1. Run the inventory tasks: docker ps, ss -ltnp, and check IPv6 bindings.
  2. Remove accidental ports: in Compose; replace with internal networks or localhost binds.
  3. Implement a DOCKER-USER allowlist for trusted source ranges, then a default drop.
  4. Persist v4 and v6 rules and verify with a reboot on a canary host.
  5. Write down your intended exposure model (public edge vs VPN-only vs internal-only) so the next engineer doesn’t reintroduce “surprise internet”.

You’ll end up with something rare in container land: a firewall posture that matches what you think you deployed. That’s
not just security. That’s operational sanity.
