Docker Port Published but Unreachable: The Real Checklist (Not Guesswork)

You ran the container. You published the port. docker ps smugly prints 0.0.0.0:8080->80/tcp.
And yet your browser times out like it’s waiting for a bus in the rain.

“Port published but unreachable” is one of those failures that makes smart people do dumb things: restart Docker, toggle random firewall rules,
rebuild images, and whisper threats at NAT. Stop that. This is a deterministic system. If a published port can’t be reached,
something specific is blocking the packet or the application isn’t listening where you think it is.

The mental model: what “published” really means

When you publish a port with Docker (-p 8080:80), you are not “opening a port inside the container.”
You are asking the Docker engine to arrange traffic so that packets arriving at the host’s port (8080) are forwarded to the container’s port (80).
That forwarding can be implemented in a few ways depending on platform and mode:

  • Linux rootful Docker (classic): iptables/nftables NAT (DNAT) rules plus a tiny userland proxy in some scenarios.
  • Linux rootless Docker: often uses slirp4netns / user-space forwarding, with different constraints and performance traits.
  • Docker Desktop (Mac/Windows): there’s a VM, and port forwarding crosses the host ↔ VM boundary, with an extra layer of “fun.”

“Published” means Docker created intent. It does not guarantee the packet will survive:
host firewall, cloud security groups, wrong interface binding, missing route, reverse proxy mismatch, or a process listening only on 127.0.0.1
can all produce the same symptom: a port that looks open but behaves like a brick wall.
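
For reference, the publish flag takes an optional host address, and that difference matters more than it looks. A minimal sketch of the common forms (the image name, ports, and IP are placeholders):

docker run -d -p 8080:80 nginx               # all IPv4 interfaces (and usually IPv6 too)
docker run -d -p 0.0.0.0:8080:80 nginx       # the same thing, spelled out explicitly
docker run -d -p 127.0.0.1:8080:80 nginx     # host-local only: remote clients can never reach it
docker run -d -p 192.0.2.20:8080:80 nginx    # one specific host address (the IP here is an example)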

The diagnostic trick is to stop treating it as “Docker networking” and start treating it as a path problem:
client → network → host interface → firewall → NAT → container veth → container process.
Find the first place where reality diverges from expectation.

One paraphrased idea from Werner Vogels (Amazon CTO): everything fails eventually; design so you can detect, isolate, and recover quickly.
Published ports are no different—instrument the path and the truth pops out.

Fast diagnosis playbook (first/second/third/fourth)

First: confirm the service is actually listening (inside the container)

If the app isn’t listening on the port you mapped, Docker can forward packets all day and you’ll still get timeouts or resets.
Do not start at iptables. Start at the process.

Second: validate the host is listening and forwarding (on the right interface)

Check that the host port is bound, which interface it’s bound to, and whether Docker inserted the expected NAT rules.
If the host isn’t listening, or it’s only listening on 127.0.0.1, remote clients won’t connect.

Third: eliminate “external blockers” (firewalls, cloud security groups, routing)

If localhost works but remote doesn’t, stop blaming Docker. That’s a perimeter problem: UFW/firewalld/nftables policy, cloud security group,
host routing, or a load balancer health check hitting the wrong place.

Fourth: check special modes and edge features

Rootless Docker, IPv6, host networking, macvlan, overlay networks, reverse proxies, and hairpin NAT each come with sharp edges.
If you’re in one of those, be explicit about it and follow the relevant branch in the checklist below.

Joke #1: NAT is like office politics—everyone says they understand it, and then you watch them blame the printer.

Practical tasks: commands, outputs, decisions (16 of them)

These are production-grade checks. Each includes a command, typical output, what it means, and the decision you make next.
Run them in order until you find the first “wrong” thing. That’s the point where you stop and fix.

Task 1: Confirm the mapping Docker thinks it created

cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES          PORTS
web-1          0.0.0.0:8080->80/tcp, [::]:8080->80/tcp
db-1           5432/tcp

Meaning: Docker claims it published 8080 on all IPv4 interfaces and IPv6 as well.
If you only see 127.0.0.1:8080->80/tcp, it’s bound to localhost and remote connections will fail.

Decision: If mapping looks wrong, fix the run/compose config first. If it looks right, continue.

Task 2: Inspect the container port bindings (ground truth)

cr0x@server:~$ docker inspect -f '{{json .NetworkSettings.Ports}}' web-1
{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"8080"}]}

Meaning: Docker’s container config says: host 0.0.0.0:8080 forwards to container 80/tcp.
If null appears for the port, it isn’t published.

Decision: If the binding is not what you expect, redeploy with correct -p or compose ports:.
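
In Compose, the same distinction lives under ports:. A sketch of an excerpt, assuming a service named web (names and ports are placeholders):

services:
  web:
    image: nginx
    ports:
      - "0.0.0.0:8080:80"      # reachable from the network
      # - "127.0.0.1:8080:80"  # deliberate host-only binding; document why if you use it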

Task 3: Verify the service is listening inside the container

cr0x@server:~$ docker exec -it web-1 sh -lc "ss -lntp | sed -n '1,6p'"
State  Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
LISTEN 0      4096   0.0.0.0:80         0.0.0.0:*     users:(("nginx",pid=1,fd=6))

Meaning: Something (nginx) is listening on 0.0.0.0:80 inside the container.
If you see 127.0.0.1:80 only, published-port traffic will generally fail: forwarded packets arrive on the container’s eth0 address, not its loopback.
Also watch for apps that bind IPv6-only or to a unix socket instead of the TCP port you mapped.

Decision: If nothing is listening, fix the app/container (wrong command, crash loop, config).
If it is, continue.
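
If the image ships without ss at all (distroless, scratch), you can run the same check from the host inside the container’s network namespace. A sketch, assuming rootful Docker and the container name web-1 from above:

pid=$(docker inspect -f '{{.State.Pid}}' web-1)   # the container's init PID as seen on the host
sudo nsenter -t "$pid" -n ss -lntp                # host's ss binary, container's network namespace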

Task 4: Curl the service from inside the container

cr0x@server:~$ docker exec -it web-1 sh -lc "apk add --no-cache curl >/dev/null 2>&1; curl -sS -D- http://127.0.0.1:80/ | head"
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Tue, 02 Jan 2026 14:01:12 GMT
Content-Type: text/html

Meaning: The app responds locally. If this fails, stop. Publishing ports won’t fix a broken app.

Decision: If internal curl fails, diagnose the application. If it succeeds, move outward.
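
If the image has no shell or package manager, you don’t need to modify it: borrow its network namespace from a throwaway container. A sketch using the public curlimages/curl image (the image choice is an assumption):

docker run --rm --network container:web-1 curlimages/curl -sS -D- http://127.0.0.1:80/ | head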

Task 5: Curl via the container IP from the host

cr0x@server:~$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web-1
172.17.0.4
cr0x@server:~$ curl -sS -D- http://172.17.0.4:80/ | head
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Tue, 02 Jan 2026 14:01:20 GMT
Content-Type: text/html

Meaning: Host can reach container over the bridge network. If this fails, the issue is inside the host/container networking
(bridge down, policy routing, DOCKER-USER chain, security modules, or the container attached to a different network).

Decision: If host → container IP fails, inspect docker network, iptables, and host policies. If it succeeds, check publishing path.

Task 6: Curl via the published host port from the host

cr0x@server:~$ curl -sS -D- http://127.0.0.1:8080/ | head
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Tue, 02 Jan 2026 14:01:28 GMT
Content-Type: text/html

Meaning: The port-forwarding works locally. If localhost works but remote doesn’t, you’re now in firewall/interface territory.

Decision: If localhost fails, inspect host listening and NAT rules next. If localhost works, jump to perimeter checks.

Task 7: See what is actually listening on the host port

cr0x@server:~$ sudo ss -lntp | grep ':8080'
LISTEN 0      4096         0.0.0.0:8080      0.0.0.0:*    users:(("docker-proxy",pid=2314,fd=4))

Meaning: The host has a listener, usually docker-proxy. If the userland proxy is disabled (userland-proxy: false in daemon.json),
DNAT alone handles forwarding and ss may show nothing for the port even though remote connections work.

Decision: If you see 127.0.0.1:8080 only, fix binding (compose ports, or explicit host IP).
If you see nothing and it fails, check NAT rules and Docker daemon config.

Task 8: Confirm Docker inserted NAT rules (iptables legacy view)

cr0x@server:~$ sudo iptables -t nat -S DOCKER | sed -n '1,6p'
-N DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.4:80

Meaning: Packets arriving at host TCP/8080 from non-docker0 interfaces are DNATed to the container IP/80.
If the rule is missing, Docker isn’t programming NAT (common with rootless mode, custom daemon flags, or nftables mismatches).

Decision: If rules are missing or incorrect, fix Docker’s networking backend, or redeploy Docker with proper iptables integration.
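
Before fighting the firewall, confirm Docker is even supposed to manage it. A quick sketch (the daemon.json file may not exist, which is normal):

cat /etc/docker/daemon.json 2>/dev/null            # "iptables": false or "ip6tables": false disables rule management
docker info 2>&1 | grep -iE 'rootless|iptables'    # rootless mode and bridge-nf warnings show up here, if any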

Task 9: Check the DOCKER-USER chain (the “you blocked yourself” chain)

cr0x@server:~$ sudo iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -j DROP

Meaning: This host drops forwarded traffic before Docker’s own rules. It will make published ports unreachable from the network
while localhost might still work (depending on path).

Decision: Replace blanket drops with explicit allow rules, or move policy to a controlled firewall layer that accounts for Docker.
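
A minimal sketch of an explicit DOCKER-USER policy, assuming eth0 is the external interface and 203.0.113.0/24 is the trusted client range. Remember that DOCKER-USER evaluates traffic after DNAT, so you match the container port (80), not the published port (8080):

sudo iptables -F DOCKER-USER
sudo iptables -A DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A DOCKER-USER -i eth0 -s 203.0.113.0/24 -p tcp --dport 80 -j ACCEPT
sudo iptables -A DOCKER-USER -i eth0 -j DROP      # everything else forwarded in from eth0
sudo iptables -A DOCKER-USER -j RETURN            # hand the rest back to Docker's own chains

These rules are not persistent on their own; persist them with whatever your distro uses (iptables-persistent, a systemd unit, config management) and document them.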

Task 10: If using nftables, inspect the ruleset (modern view)

cr0x@server:~$ sudo nft list ruleset | sed -n '1,40p'
table ip nat {
  chain PREROUTING {
    type nat hook prerouting priority dstnat; policy accept;
    tcp dport 8080 dnat to 172.17.0.4:80
  }
  chain OUTPUT {
    type nat hook output priority -100; policy accept;
    tcp dport 8080 dnat to 172.17.0.4:80
  }
}

Meaning: DNAT exists in prerouting and output (host-local connections).
If the rule exists only in OUTPUT, remote traffic won’t get forwarded; if only in PREROUTING, localhost behavior might differ.

Decision: Ensure Docker is correctly integrated with nftables, and you aren’t mixing incompatible iptables backends.

Task 11: Verify kernel forwarding and bridge netfilter settings

cr0x@server:~$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables 2>/dev/null
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1

Meaning: Forwarding is enabled, and bridge traffic is visible to iptables.
Some hardened baselines disable these, then wonder why containers can’t be reached.

Decision: If ip_forward=0 and you expect routing/NAT, enable it (and document it in your baseline).
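
If forwarding is off and you do want it, enable it now and persist it so a reboot doesn’t quietly undo the fix. A sketch (the drop-in file name is an assumption):

sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system                              # reload all sysctl configuration files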

Task 12: Check UFW status and whether it’s silently blocking Docker

cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

Meaning: “deny (routed)” is the classic container-killer if you rely on forwarded traffic.
UFW can block docker bridge forwarding even when you allow the host port.

Decision: Either configure UFW to allow routed traffic for Docker networks, or manage firewall with explicit iptables/nft rules.
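
A sketch of an explicit UFW policy for one published port, assuming eth0 as the external interface, docker0 as the bridge, and the default 172.17.0.0/16 container subnet (adjust all three for user-defined networks):

sudo ufw allow 8080/tcp                                                              # host-level INPUT (proxy/local path)
sudo ufw route allow in on eth0 out on docker0 to 172.17.0.0/16 port 80 proto tcp    # the forwarded path
sudo ufw reload
sudo ufw status verbose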

Task 13: Check firewalld zones and masquerade (common on RHEL/CentOS)

cr0x@server:~$ sudo firewall-cmd --state
running
cr0x@server:~$ sudo firewall-cmd --get-active-zones
public
  interfaces: eth0
docker
  interfaces: docker0

Meaning: firewalld can place docker0 into its own zone. If that zone disallows forwarding/masquerade, published ports break.

Decision: Ensure docker zone settings allow forwarding as required, or unify zones intentionally.
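
A sketch for firewalld, assuming eth0 lives in the public zone and the published port should be reachable there (zone names and ports are assumptions):

sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
sudo firewall-cmd --permanent --zone=public --add-masquerade   # if forwarded/NATed traffic needs it in your layout
sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-all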

Task 14: Test via the host’s external IP to approximate a remote client, then compare path behavior

cr0x@server:~$ curl -sS -m 2 -D- http://$(hostname -I | awk '{print $1}'):8080/ | head
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Tue, 02 Jan 2026 14:01:39 GMT
Content-Type: text/html

Meaning: This simulates “not localhost” by using the host IP. If this fails but 127.0.0.1 works,
your binding/firewall/route differs between loopback and external interface.

Decision: If it fails, inspect binding IP and firewall per interface. If it succeeds, the issue might be outside the host (cloud SG, LB, client route).

Task 15: Packet capture on the host to see if SYN arrives

cr0x@server:~$ sudo tcpdump -ni eth0 tcp port 8080 -c 5
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:02:01.123456 IP 203.0.113.10.51922 > 192.0.2.20.8080: Flags [S], seq 123456789, win 64240, options [mss 1460,sackOK,TS val 1 ecr 0,nop,wscale 7], length 0

Meaning: If you see SYNs arriving, the network path to the host is fine. If you see nothing, the problem is upstream
(security group, NACL, router, load balancer, DNS pointing elsewhere).

Decision: No SYN: stop debugging Docker and go outward. SYN arrives: keep debugging host firewall/NAT/app response.

Task 16: Packet capture on docker0 to confirm forwarding happens

cr0x@server:~$ sudo tcpdump -ni docker0 tcp port 80 -c 5
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:02:01.124001 IP 203.0.113.10.51922 > 172.17.0.4.80: Flags [S], seq 123456789, win 64240, options [mss 1460,sackOK,TS val 1 ecr 0,nop,wscale 7], length 0

Meaning: The packet got DNATed and reached docker0. If it arrives on eth0 but not on docker0,
your NAT/forwarding rules are the bottleneck.

Decision: Fix iptables/nftables, forwarding settings, or DOCKER-USER chain rules.

Where it breaks: the real failure modes

1) The app is listening on the wrong interface or port

Many frameworks default to 127.0.0.1 for “safety.” That’s great on laptops. In containers, it’s a common self-own.
Node, Python dev servers, and some Java microframeworks are frequent offenders.

What you see: container is “healthy,” port is published, but connections hang or get reset. Inside-container curl might work only via localhost,
or not at all if it binds to a UNIX socket.

What to do: force the app to bind to 0.0.0.0 (or to the container’s interface explicitly) and confirm with ss -lntp.
If you’re using a dev server, stop. Use a real server (gunicorn/uvicorn, nginx, etc.) for anything outside your laptop.
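
What the fix typically looks like, sketched for a few common stacks (module names are placeholders; the flags exist, but verify them against your versions):

gunicorn --bind 0.0.0.0:8000 app:app          # Python WSGI
uvicorn app:app --host 0.0.0.0 --port 8000    # Python ASGI
# Node/Express: app.listen(80, '0.0.0.0') instead of relying on defaults
# Spring Boot:  server.address=0.0.0.0 in application.properties

Then re-run the Task 3 check (ss -lntp inside the container) to confirm the new binding before touching anything else.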

2) The port is published to localhost only

Compose and docker run allow bindings like 127.0.0.1:8080:80. That means exactly what it says.
It works from the host, fails from the network, and burns hours because “it works for me” is a powerful sedative.

Fix: bind to 0.0.0.0 or the specific external interface IP, intentionally.
Make the binding explicit in Compose when you mean it.

3) Host firewall blocks forwarded traffic (UFW/firewalld) even if the port looks open

A subtle point: a “published port” is often implemented with NAT + forwarding.
Firewalls can allow INPUT to TCP/8080 yet block FORWARD to docker0, resulting in timeouts.
Locally, it may still work because localhost traffic might take a different chain/hook.

If you’re running UFW with “deny routed,” assume it’s involved until proven otherwise.
If you’re running firewalld, assume zones/masquerade matter.

4) DOCKER-USER chain blocks it (intentionally or accidentally)

DOCKER-USER exists specifically so you can insert policy ahead of Docker’s own chains. That’s good engineering.
It’s also where “temporary” drops live for years.

A single -j DROP in DOCKER-USER can blackhole your published ports. Don’t sprinkle global drops unless you also sprinkle global documentation.

5) You’re mixing iptables backends (legacy vs nft) and Docker is programming the “wrong” one

On some distros, iptables is a compatibility wrapper over nftables, and you can end up with rules inserted into one view
while packets are evaluated by the other, depending on kernel/userspace versions and configuration.

Symptom: Docker claims ports are published; rules appear in iptables -t nat -S but don’t affect traffic,
or rules appear in nft but iptables output looks empty.

Fix: choose a consistent backend and configure Docker accordingly. Also: stop treating the firewall stack as a choose-your-own-adventure novel.
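
To see which backend your tooling and Docker are actually talking to, compare the views. A sketch for Debian/Ubuntu-style systems (the alternatives mechanism is an assumption on other distros):

iptables --version                                  # prints "(nf_tables)" or "(legacy)"
sudo update-alternatives --display iptables 2>/dev/null
sudo iptables-legacy -t nat -S DOCKER 2>/dev/null | head
sudo iptables-nft -t nat -S DOCKER 2>/dev/null | head

If the DOCKER chain only appears in one view, make sure that’s the same backend the rest of your firewall tooling manages.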

6) Rootless Docker: different forwarding mechanics, different surprises

Rootless Docker avoids privileged networking. Great for security posture, less great for “it should behave like rootful Docker.”
Published ports are implemented via user-space forwarding; performance, binding behavior, and firewall interactions differ.

Symptom: ports work only on localhost, or only for high ports, or fail when binding to specific addresses.
The rules won’t show up in system iptables because rootless isn’t programming them.

Fix: confirm you’re rootless, then follow rootless-specific guidance (like explicit port forwarding setup).
If you need classic NAT behavior, run rootful Docker on hardened hosts instead of half-pretending.
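
Confirming the mode takes seconds and saves hours of debugging the wrong model. A sketch:

docker info --format '{{.SecurityOptions}}'        # contains "rootless" when the daemon runs rootless
docker context show                                # which endpoint your CLI is actually talking to
ps -o user= -p "$(pgrep -xo dockerd)" 2>/dev/null  # who owns the daemon process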

7) Reverse proxy mismatch (wrong upstream, wrong network, wrong TLS expectations)

You publish a port, but traffic actually comes through nginx/HAProxy/Traefik on the host or in another container.
Your problem might be the proxy talking to the wrong container IP, wrong network, or expecting TLS on plain HTTP (or vice versa).

Symptom: container works via direct curl, but proxy returns 502/504.
People mislabel this as “port unreachable” because the user-facing symptom is “site down.”

Fix: test the proxy’s upstream connectivity from the proxy context, not from your laptop’s feelings.
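
Concretely: run the test from where the proxy runs, using the names its config uses. A sketch, assuming a proxy container named proxy-1 and an upstream web-1 on a shared network (all names are assumptions):

docker exec -it proxy-1 sh -lc 'wget -qO- http://web-1:80/ | head'                     # or curl, if the image has it
docker network inspect my_app_net --format '{{range .Containers}}{{.Name}} {{end}}'    # are both containers attached?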

8) Hairpin NAT / “connect to my public IP from inside the same host”

You’re on the host and you curl the host’s public IP:8080 and it fails, but localhost works.
That’s often hairpin NAT. Some network setups (especially with strict rp_filter or certain cloud routing)
treat this path differently.

Fix: test on the correct interface and understand whether you’re traversing an external router/LB and back.
If you need hairpin, configure it explicitly (or don’t depend on it).

9) IPv6 is “published” but not actually reachable

Docker can show [::]:8080 bindings. That doesn’t mean your host has IPv6 connectivity,
that your firewall allows it, or that the container path is IPv6-ready.

Symptom: IPv4 works; IPv6 times out. Or clients prefer IPv6 and fail even though IPv4 would work.

Fix: confirm IPv6 routing and firewall rules, and be explicit about whether you support IPv6. Accidental IPv6 is a hobby, not a strategy.
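
Test each address family explicitly instead of letting the client pick. A sketch (the hostname is a placeholder):

curl -4 -sS -m 3 -o /dev/null -w 'IPv4: %{http_code}\n' http://app.example.com:8080/
curl -6 -sS -m 3 -o /dev/null -w 'IPv6: %{http_code}\n' http://app.example.com:8080/
ip -6 addr show dev eth0                           # does the host even have a global IPv6 address?
ip -6 route show default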

Common mistakes: symptom → root cause → fix

1) “It works on localhost but not from another machine”

Root cause: port bound to 127.0.0.1 only, or firewall allows local but blocks external, or cloud security group blocks.

Fix: publish to 0.0.0.0 (or correct interface) and open the port at the right layer (host firewall + cloud SG).

2) “docker ps shows 0.0.0.0:PORT, but ss shows nothing listening”

Root cause: relying on iptables DNAT without userland proxy; ss won’t show a listener even though NAT works.

Fix: test with curl 127.0.0.1:PORT. If it fails, inspect iptables/nft rules. Don’t treat ss output as the only truth.

3) “Connection refused immediately”

Root cause: no process listening inside the container, or wrong container port, or container crashed and got replaced.

Fix: docker exec ss -lntp and docker logs. Confirm the app binds the expected port.

4) “Timeout (SYN sent, no SYN-ACK)”

Root cause: packet dropped by firewall, security group, DOCKER-USER, or routing/NACL.

Fix: tcpdump on host external interface. If SYN never arrives, it’s upstream. If it arrives, inspect firewall/NAT on host.

5) “Works from host to container IP, but not via published port”

Root cause: NAT/forwarding not programmed or blocked; iptables backend mismatch; DOCKER-USER chain policy.

Fix: inspect NAT rules, DOCKER-USER, forwarding sysctls. Make firewall policy explicit and tested.

6) “Only some clients can reach it (others time out)”

Root cause: MTU issues (VPNs), asymmetric routing, IPv6 preference, or split-horizon DNS pointing to different IPs.

Fix: capture packets; test with forced IPv4/IPv6; verify MTU and routes; don’t assume the network is one uniform blob.
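
For the MTU suspicion specifically, a quick check from a failing client, assuming a standard 1500-byte Ethernet path (adjust sizes for VPN/tunnel links):

ping -c 3 -M do -s 1472 app.example.com            # 1472 payload + 28 header = 1500; failures point at a smaller path MTU
tracepath app.example.com                          # reports path MTU hop by hop, if installed
ip link show dev eth0 | grep -o 'mtu [0-9]*'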

7) “It broke after enabling UFW/firewalld hardening”

Root cause: routed/forwarded traffic blocked; docker0 placed into restrictive zone.

Fix: allow forwarding for Docker networks explicitly, or implement Docker-aware firewall rules in DOCKER-USER with care.

8) “Reverse proxy returns 502 but direct port works”

Root cause: proxy upstream points to wrong container IP/network, wrong protocol (HTTP vs HTTPS), or DNS resolves differently inside the proxy container.

Fix: test connectivity from the proxy container/host, verify upstream targets, and make the proxy and app share the correct Docker network.

Checklists / step-by-step plan (do this, not vibes)

Checklist A: You’re on a Linux host and the port is dead from everywhere

  1. Inside container: confirm process listening on expected port with ss -lntp.
  2. Inside container: curl 127.0.0.1:PORT (or equivalent) to validate application response.
  3. Host to container IP: curl CONTAINER_IP:PORT. If this fails, fix docker networking or app.
  4. Host via published port: curl 127.0.0.1:PUBLISHED. If this fails but previous works, it’s NAT/forwarding.
  5. NAT rules: inspect iptables -t nat -S DOCKER or nft list ruleset.
  6. Policy chains: inspect iptables -S DOCKER-USER and any global FORWARD policies.
  7. Kernel settings: check net.ipv4.ip_forward and bridge netfilter settings.
  8. Only then: restart Docker if you changed rule backends or daemon config. Don’t reboot as a diagnostic tool.

Checklist B: Localhost works, remote fails

  1. Binding: verify it’s not 127.0.0.1:PUBLISHED in docker ps / inspect.
  2. Test “external-ish” locally: curl the host IP instead of localhost.
  3. Firewall: check UFW/firewalld policies for routed/forwarded traffic.
  4. Packet presence: run tcpdump on the external interface during a remote test.
  5. Cloud perimeter: confirm security groups/NACL/load balancer listeners target the correct host/port.
  6. DNS sanity: ensure clients resolve to the right IP (no stale record or split-horizon surprise).

Checklist C: It’s “reachable” but the app is wrong (502/redirect loops/SSL weirdness)

  1. Direct test: curl the published port directly on the host. Get a clean 200 (or expected response).
  2. Proxy path: test from the proxy context (container or host) to the upstream target.
  3. Protocol: verify HTTP vs HTTPS expectations. Don’t speak TLS to a plain port.
  4. Headers: confirm Host and X-Forwarded-Proto are set correctly if the app uses them.
  5. Network: ensure proxy and app share the same docker network if you’re using container-to-container DNS names.

Joke #2: If your fix is “restart everything,” you didn’t fix it—you rolled the dice and called it engineering.

Three corporate-world mini-stories

Mini-story 1: The incident caused by a wrong assumption

A team migrated a small internal service from VMs to Docker on a hardened Linux baseline. The deployment was clean: container ran, health checks passed,
and the port was published. The on-call engineer verified curl 127.0.0.1:PORT on the host and saw the expected response. Ship it.

Ten minutes later, real users couldn’t reach it. The error wasn’t a 500; it was nothing—timeouts. That triggered the predictable ritual:
redeploy, restart Docker, rebuild the image, and finally “maybe it’s the network.” Meanwhile, the load balancer marked the service unhealthy and drained it.

The wrong assumption was subtle: “If localhost works, the network should work.” On this host, UFW was configured with default incoming deny (fine),
and default routed deny (not fine for Docker port forwarding). Localhost requests never exercised the same forwarding policy as external requests.
So the team had proven the application worked, but not the published path.

The fix was boring and effective: a documented firewall policy allowing routed traffic to the specific container subnet and ports,
plus a runbook step requiring a remote curl test (from a bastion in the same network) before declaring victory.
After that, this class of outage mostly disappeared. Not because Docker became nicer—because the team stopped assuming.

Mini-story 2: The optimization that backfired

Another team wanted “maximum performance” and removed anything that looked like overhead. They disabled the userland proxy in Docker daemon settings,
tightened firewall rules, and consolidated iptables management under a host security agent. In testing, everything seemed faster and cleaner.
Benchmarks looked great. The slides were immaculate.

Then production happened. A subset of connections to published ports started failing intermittently—mostly from specific subnets.
The failures weren’t consistent enough to be obvious, but consistent enough to ruin someone’s day. The incident bounced between “network” and “platform”
for longer than anyone wants to admit.

The root was an interaction between the host security agent’s rule refresh cycle and Docker’s dynamic NAT rules.
Every time containers were replaced, Docker would program DNAT; the agent would later reconcile to its “desired state” and remove what it didn’t recognize.
Because the proxy was disabled, there was no fallback listener path—only NAT rules. Some connections hit during windows when the rules were missing.

The recovery was to stop treating iptables as a shared toy. The team moved to an explicit policy: either the security agent owned firewall state with
Docker-aware integration, or Docker owned NAT plus a controlled DOCKER-USER policy. They chose the latter for simplicity.
Performance stayed good. Reliability improved dramatically. The optimization wasn’t wrong; the ownership model was.

Mini-story 3: The boring but correct practice that saved the day

A platform group ran a fleet of Docker hosts behind internal load balancers. They had a strict practice: every service had a standard “connectivity probe”
executed from three places—inside the container, from the host, and from a remote canary node in the same network segment.
It was mundane. It was also written down, automated, and enforced during incident response.

One afternoon, a routine kernel update was followed by a wave of “service unreachable” pages. Panic started brewing in the usual Slack channels.
But the on-call followed the probes. Inside-container curl worked. Host to container IP worked. Host to published port worked. Remote canary failed.
That narrowed the problem in minutes: it wasn’t Docker, and it wasn’t the app. It was inbound network reachability.

A network change had altered which subnets were permitted to reach the hosts, and the load balancer’s health checks were now sourced from a subnet
that wasn’t on the allowlist. The platform group had the packet captures to prove SYNs never hit the host interface.
The fix was a clean perimeter allowlist update.

The practice that saved the day wasn’t a clever tuning knob. It was a disciplined, repeatable test from multiple vantage points,
with expectations written down. It prevented a container debugging spiral and shortened the incident.
Boring is a feature when you’re on call.

Interesting facts & short history (so you stop guessing)

  • Docker’s original port publishing on Linux leaned heavily on iptables NAT rules because it was widely available and fast in-kernel.
  • The “userland proxy” existed to handle edge cases (like hairpin connections and some localhost behaviors) when pure DNAT wasn’t sufficient.
  • DOCKER-USER chain was introduced so operators could enforce policy ahead of Docker’s automatically managed rules without fighting Docker updates.
  • iptables has two “worlds” on modern Linux: legacy xtables and nftables backend. Mixing them can produce rules you can see but the kernel doesn’t use (for your traffic path).
  • UFW’s default “deny routed” is reasonable for non-container hosts but often breaks container forwarding unless you explicitly allow it.
  • Rootless Docker became popular as security teams pushed least privilege, but it intentionally changes how networking is implemented and observed.
  • Docker Desktop on Mac/Windows always involves a VM boundary, so published ports are a port-forwarding feature across virtualization, not just a local NAT rule.
  • IPv6 behavior is frequently “accidentally on” because bindings may show [::] even when the environment doesn’t truly support end-to-end IPv6 reachability.

The takeaway from the history: the behavior you see is a product of platform decisions, security posture, and kernel/firewall stack evolution.
Treat your environment as a real system with layers, not as a magic Docker bubble.

FAQ

1) Why does docker ps show the port published if it doesn’t work?

Because it’s a configuration view, not a connectivity test. Docker recorded the binding and likely attempted to program forwarding,
but firewalls, routing, or the application can still block actual traffic.

2) How do I tell if the problem is the app or the network?

Use the three-hop test: inside container (curl localhost), host to container IP, host to published port. The first failure hop is your layer.

3) Why does localhost work but the host’s LAN IP doesn’t?

Different paths. Localhost may hit OUTPUT rules or local DNAT behavior; external traffic hits PREROUTING/FORWARD and is subject to different firewall policy.
Also check whether you accidentally bound to 127.0.0.1.

4) Do I need to open the port in the container firewall?

Usually no. Most containers don’t run a firewall. If you do, treat it as a real host: allow inbound to the app port.
But most “unreachable published port” issues are host/perimeter, not container firewall.

5) Why does ss -lntp sometimes not show a listener for a published port?

Because NAT-based publishing doesn’t require a listening process on the host port. The kernel rewrites and forwards packets.
If userland proxy is used, you’ll see docker-proxy.

6) Can Docker publishing fail because of an iptables/nftables mismatch?

Yes. If Docker programs rules into a backend that isn’t effectively used for your traffic, you get “rules exist” but no forwarding.
Check both iptables and nft views and standardize the stack.

7) What changes in rootless Docker?

Rootless can’t freely program system NAT rules. Port publishing typically relies on user-space forwarding mechanisms.
Observability (iptables inspection) and behavior (binding constraints, performance) differ. Confirm mode first.

8) How do I debug if the host is behind a load balancer?

Capture packets on the host interface while the LB probes. If SYNs never arrive, it’s LB config, security group rules, or routing.
If SYNs arrive but don’t reach docker0, it’s host firewall/NAT.

9) Why does it work over IPv4 but fail over IPv6?

Because IPv6 reachability requires end-to-end routing and firewall rules. Docker showing [::] doesn’t ensure your network supports it.
Test explicitly with IPv4/IPv6 and configure intentionally.

10) Should I use --network host to “avoid Docker networking issues”?

Only if you understand the tradeoffs. Host networking removes the NAT layer but increases port collision risk and reduces isolation.
It’s a tool, not a bandage for unknown problems.

Conclusion: next steps that actually prevent repeats

When a Docker port is published but unreachable, the system is telling you exactly where it’s broken—you just have to interrogate it in the right order.
Start inside the container, move outward, and stop as soon as you find the first failed hop. That’s the bottleneck. Fix that, not your patience.

Do these next

  1. Codify the hop-by-hop test (container localhost → host to container IP → host published port → remote canary) into your runbooks.
  2. Standardize firewall ownership: either Docker owns NAT and you use DOCKER-USER intentionally, or your firewall manager integrates with Docker. No shared mystery meat.
  3. Make bindings explicit in Compose ("0.0.0.0:HOST:CONTAINER" vs "127.0.0.1:HOST:CONTAINER") so you don’t ship “works on my host.”
  4. Instrument reachability: a simple blackbox check from a remote node (sketched below) catches most of these issues before users do.
  5. Document special modes (rootless, IPv6, reverse proxies, load balancers) next to the service, not in someone’s head.
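
A minimal remote probe sketch (host, port, and expected status code are assumptions to replace with your own):

#!/bin/sh
# Blackbox reachability probe: run from a remote canary node, alert on non-zero exit.
HOST=192.0.2.20 PORT=8080 EXPECT=200
code=$(curl -4 -sS -m 5 -o /dev/null -w '%{http_code}' "http://$HOST:$PORT/" 2>/dev/null || true)
if [ "$code" = "$EXPECT" ]; then
  echo "OK   $HOST:$PORT -> HTTP $code"
else
  echo "FAIL $HOST:$PORT -> HTTP ${code:-none}" >&2
  exit 1
fi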

Your goal isn’t to memorize Docker networking trivia. Your goal is to reduce time-to-truth. The checklist above does that—reliably, repeatedly,
and without requiring a reboot offering to the networking gods.
