You ship a container. You publish a port. Everything works in staging. Then someone runs a scan and finds your admin UI reachable from the public internet—over IPv6—while your IPv4 firewall looks spotless.
This failure mode is common, subtle, and humiliating in exactly the way production incidents prefer: the config is “right,” the intent is “secure,” and the traffic still lands. Let’s make that stop.
What an “IPv6 leak” really is (in Docker terms)
When people say “Docker IPv6 leak,” they usually mean one of three things:
- Published ports are reachable via IPv6 even though the operator only considered IPv4. Example: you did -p 8080:80, assumed it binds to 0.0.0.0 only, and forgot that on some systems it also binds to :: (all IPv6 addresses). Now the service is internet-facing for anyone who can reach your host’s global IPv6 address.
- Firewalling covers IPv4 but not IPv6. You hardened iptables but left the ip6tables/nftables v6 path permissive. The result is a split-brain security posture: “blocked” in one protocol family, wide open in the other.
- Docker’s NAT/forwarding behavior differs for IPv6. IPv4 Docker networking often leans on NAT and well-worn iptables patterns. IPv6, by design, expects routability. If you don’t explicitly filter forwarding and input for IPv6, you can accidentally give containers globally reachable addresses or allow inbound forwarding you didn’t intend.
The term “leak” is emotionally accurate: it feels like data seeped through a seam you didn’t know existed. Technically, it’s not a leak. It’s a reachable socket, created by default behaviors and the absence of explicit policy.
Why it happens: the mechanics that betray you
1) Binding semantics: 0.0.0.0 is not ::, and “all interfaces” is ambiguous
In Docker, -p 8080:80 means “publish this container port on the host.” On many setups Docker publishes on all host interfaces by default. On dual-stack systems, “all interfaces” can include IPv6.
Whether a given port appears on IPv6 depends on the kernel’s dual-stack behavior, Docker’s proxy mode, and how your distro treats net.ipv6.bindv6only. Some apps bind only v4; others bind dual-stack by default. Docker can publish via iptables rules and/or a userland proxy depending on version and configuration.
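If you want the bind to be unambiguous, say so when you publish. A minimal sketch comparing the two forms (the image and port numbers are illustrative, not from any real deployment):

cr0x@server:~$ docker run -d -p 8080:80 nginx            # implicit "all interfaces"; on dual-stack hosts this can include [::]
cr0x@server:~$ docker run -d -p 127.0.0.1:8080:80 nginx  # explicit IPv4 loopback bind; not reachable from off-host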
2) Your firewall policy is only as good as the family you enforce
If you’re using iptables rules as your “security boundary,” you need to remember there are two tablesets: IPv4 and IPv6. Or, in modern deployments, nftables where you still must ensure your rules cover ip6 as well as ip.
The classic facepalm: UFW configured and “active,” but with IPv6 handling disabled inside UFW (so v6 traffic bypasses its rules) or left allow-by-default, while the host has a global IPv6 address. Your compliance checklist sees “firewall installed.” Attackers see “open port.”
3) Docker’s FORWARD chain behavior and the DOCKER-USER escape hatch
Docker manipulates packet filtering to make container networking convenient. Convenience is just risk with better marketing. Docker will add rules to allow forwarding to container networks and to implement published ports.
Docker also provides a crucial hook: DOCKER-USER. It is evaluated before Docker’s own rules. If you want a policy like “only allow inbound from these CIDRs” or “block everything except specific published ports,” DOCKER-USER is where you pin that down.
But many teams implement DOCKER-USER for IPv4 only, then assume parity for IPv6. It isn’t automatic. You need the same intent in ip6tables/nftables.
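The parity has to be spelled out. A minimal sketch, assuming eth0 is the public interface and the documentation-range CIDRs stand in for your real allowlist:

cr0x@server:~$ iptables  -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP      # IPv4 intent
cr0x@server:~$ ip6tables -I DOCKER-USER -i eth0 ! -s 2001:db8:feed::/48 -j DROP  # the same intent, stated again for IPv6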
4) IPv6 is designed for end-to-end reachability
IPv4’s scarcity pushed everyone into NAT. That normalized the idea that “private IPs” are safe-ish by default. IPv6 flips the default mental model: you can route globally, so you must filter intentionally. When you give a container a global IPv6 (or forward to it), you are back to a pre-NAT world.
Translation: stop treating NAT as a firewall. NAT is a side effect, not a control. Your control is your filtering policy and your bind addresses.
5) Cloud platforms hand you IPv6 whether you asked politely or not
In several clouds, enabling IPv6 on a VPC/subnet or instance is a “small checkbox” that changes the threat model of every published port on every host. If the instance has a global IPv6, and your security group / host firewall doesn’t block it, your Docker publish may be public.
Joke #1: IPv6 isn’t scary. It’s just IPv4 with enough address space to assign one to every toaster, including the one in the break room that keeps “mysteriously” rebooting.
Interesting facts & historical context (IPv6 + containers)
- IPv6 was standardized in the late 1990s, and the core RFC was updated later; it has been “the future” for longer than some production systems have lived.
- Early Docker networking (circa 2013–2014) leaned heavily on iptables, and many teams learned “Docker equals NAT” as a rule of nature. IPv6 complicates that assumption.
- IPv6 removed the checksum from the header to speed routing; operationally, it pushed complexity to endpoints and extension headers—great for performance, mixed for security tooling.
- “Happy Eyeballs” (dual-stack connection racing) made clients pick v6 or v4 dynamically; operators sometimes troubleshoot the wrong protocol because the client silently preferred IPv6.
- Linux has had robust IPv6 support for decades, but default firewall policies often lagged—many distros historically shipped permissive IPv6 rulesets even when IPv4 was locked down.
- IPv6 privacy extensions (temporary addresses) reduced tracking, but also complicate allowlists and incident response because host addresses rotate.
- Docker’s “userland proxy” used to be more common; modern setups often rely on kernel NAT rules. Which path you’re on can change what “listening on ::” means.
- Container addressing differs by driver: bridge networks, host networking, macvlan/ipvlan—IPv6 exposure risk isn’t uniform. Some drivers make routability trivial.
- Many compliance baselines historically focused on IPv4, so audits “passed” while production was reachable over IPv6. It’s not malice; it’s inertia.
Fast diagnosis playbook (check first/second/third)
When you suspect “Docker IPv6 leak” and you want an answer before the next meeting invite lands, do this in order:
First: confirm the host actually has globally reachable IPv6
- Does the host have a global IPv6 address on a public interface?
- Is there a default IPv6 route?
- Does inbound IPv6 reach the host at all (security group / edge firewall)?
If the host isn’t globally reachable via IPv6, your problem is probably internal lateral exposure (still bad, different fix).
Second: identify what is listening on IPv6 and why
- Is the service bound to :: on the host?
- Is Docker publishing the port on v6 or only forwarding?
- Is the container itself listening on IPv6 inside its namespace?
Third: locate the policy gap (filtering vs forwarding)
- Input chain: is the host allowing inbound to the published port on IPv6?
- Forward chain: is traffic forwarded into the docker bridge on IPv6?
- DOCKER-USER: do you have explicit denies/allowlists for both v4 and v6?
Fourth: decide your intent
You need one of these explicit intents, not vibes:
- “No IPv6 on this host.” Disable it at the host and Docker layer.
- “IPv6 allowed, but nothing public by default.” Default-deny inbound and forward for v6; allow only what you mean.
- “Public IPv6 is fine, but only on these services.” Bind-publish explicitly and filter precisely.
Practical tasks (commands, outputs, and decisions)
Below are field-tested tasks. Each includes: a command, an example output, what it means, and the decision you make next.
Run them as root or with sudo where appropriate; the point is to get answers, not win a privilege purity contest.
Task 1: See if your host has a global IPv6 address
cr0x@server:~$ ip -6 addr show scope global
2: eth0 inet6 2001:db8:1234:5678:abcd:ef01:2345:6789/64 scope global dynamic
valid_lft 86234sec preferred_lft 14234sec
Meaning: You have a global IPv6 on eth0. If your firewall is permissive, published Docker ports may be reachable from the internet.
Decision: If you don’t need IPv6, plan to disable it. If you do need it, enforce IPv6 filtering explicitly.
Task 2: Confirm you have an IPv6 default route
cr0x@server:~$ ip -6 route show default
default via fe80::1 dev eth0 proto ra metric 100 pref medium
Meaning: The host can reach the IPv6 internet. If inbound is also allowed upstream, you’re in the blast radius.
Decision: Treat this host as internet-capable over IPv6; audit listening sockets and firewall rules now, not “after the sprint.”
Task 3: List listening sockets and spot v6 binds
cr0x@server:~$ ss -lntp
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=911,fd=3))
LISTEN 0 4096 [::]:8080 [::]:* users:(("docker-proxy",pid=2341,fd=4))
Meaning: Port 8080 is listening on all IPv6 addresses ([::]) via docker-proxy. That’s your public exposure vector.
Decision: Either bind it to a specific address, remove the publish, or firewall it off for IPv6.
Task 4: Find which container published the port
cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES PORTS
billing-api 0.0.0.0:8080->80/tcp, [::]:8080->80/tcp
metrics-sidecar 127.0.0.1:9100->9100/tcp
Meaning: The billing-api container is published on both IPv4 and IPv6. The metrics port is correctly constrained to loopback.
Decision: If billing-api should not be public, change publishing to 127.0.0.1:8080:80 (and [::1] if needed) or remove host publishing entirely.
Task 5: Inspect Docker daemon IPv6 settings
cr0x@server:~$ cat /etc/docker/daemon.json
{
"ipv6": true,
"fixed-cidr-v6": "fd00:dead:beef::/48",
"ip6tables": true
}
Meaning: Docker IPv6 is enabled and Docker will try to manage ip6tables rules. This is not inherently unsafe—but it is not a security policy.
Decision: If you don’t need IPv6 inside Docker, set "ipv6": false (and remove v6 CIDR settings). If you do need it, enforce default-deny in DOCKER-USER for IPv6.
Task 6: Check whether Docker is using nftables or legacy iptables tooling
cr0x@server:~$ update-alternatives --display iptables | sed -n '1,8p'
iptables - auto mode
link best version is /usr/sbin/iptables-nft
link currently points to /usr/sbin/iptables-nft
link iptables is /usr/sbin/iptables
Meaning: You’re on iptables-nft backend. Rules still work, but troubleshooting requires you to understand nftables under the hood.
Decision: Use nft list ruleset to confirm IPv6 filtering exists; don’t assume iptables output tells the whole story if other tooling manages nft.
Task 7: Inspect IPv6 filter policy (ip6tables) and look for “ACCEPT all”
cr0x@server:~$ ip6tables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER
Meaning: Default ACCEPT on INPUT/FORWARD for IPv6. That’s not “open,” that’s “actively welcoming.”
Decision: Move to default-deny, or at least add a strong policy in DOCKER-USER and tighten INPUT to only required services.
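If you move toward default-deny on INPUT, do it in an order that can’t lock you out, and remember that IPv6 relies on ICMPv6 for neighbor discovery and path MTU. A minimal sketch, assuming SSH on port 22 and console access in case something goes wrong:

cr0x@server:~$ ip6tables -A INPUT -i lo -j ACCEPT
cr0x@server:~$ ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
cr0x@server:~$ ip6tables -A INPUT -p ipv6-icmp -j ACCEPT        # NDP/PMTUD; don't drop this blindly
cr0x@server:~$ ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT
cr0x@server:~$ ip6tables -P INPUT DROP                          # flip the policy only after the accepts exist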
Task 8: Confirm Docker’s IPv6 forwarding rules exist (and aren’t your only control)
cr0x@server:~$ ip6tables -L FORWARD -n -v --line-numbers | sed -n '1,20p'
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 DOCKER-USER all * * ::/0 ::/0
2 0 0 DOCKER all * * ::/0 ::/0
Meaning: Docker has inserted hooks. If DOCKER-USER is empty, traffic is effectively allowed through. Docker did its part (plumbing), not yours (policy).
Decision: Populate DOCKER-USER for IPv6 to enforce your actual exposure intent.
Task 9: Add an IPv6 default-deny policy in DOCKER-USER (carefully)
cr0x@server:~$ ip6tables -I DOCKER-USER 1 -i eth0 -p tcp -m multiport --dports 80,443 -j ACCEPT
cr0x@server:~$ ip6tables -I DOCKER-USER 2 -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
cr0x@server:~$ ip6tables -A DOCKER-USER -i eth0 -j DROP
cr0x@server:~$ ip6tables -L DOCKER-USER -n -v --line-numbers
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT tcp eth0 * ::/0 ::/0 multiport dports 80,443
2 120 9540 ACCEPT all eth0 * ::/0 ::/0 ctstate RELATED,ESTABLISHED
3 4 240 DROP all eth0 * ::/0 ::/0
Meaning: You allow only 80/443 inbound over IPv6 to forwarded container traffic on eth0, plus established connections; everything else drops.
Decision: If your container ports should never be reachable from the public interface, keep the DROP. If you need specific ports, allowlist them explicitly.
Task 10: Verify published ports in a way humans can read
cr0x@server:~$ docker port billing-api
80/tcp -> 0.0.0.0:8080
80/tcp -> [::]:8080
Meaning: Dual-stack publish is real, not theoretical.
Decision: If you want “internal only,” republish to loopback or remove publishing and use a reverse proxy with strict binds.
Task 11: Check kernel forwarding and IPv6 RA behavior that can surprise you
cr0x@server:~$ sysctl net.ipv6.conf.all.forwarding net.ipv6.conf.default.accept_ra
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.accept_ra = 2
Meaning: IPv6 forwarding is enabled. This is fine for routers, risky for general-purpose hosts because it changes how packets traverse interfaces. accept_ra=2 means accept router advertisements even when forwarding is enabled—useful in some clouds, also a footgun if you don’t understand the routing intent.
Decision: If this is not a routing host, consider setting forwarding off and controlling Docker exposure via input rules and published binds. If you need forwarding, lock down FORWARD policies explicitly.
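If you decide this host should not route, a persistent sysctl drop-in is the boring way to say so (the file name is illustrative; skip this if Docker’s IPv6 networking on this host actually needs forwarding):

cr0x@server:~$ echo 'net.ipv6.conf.all.forwarding = 0' | sudo tee /etc/sysctl.d/90-no-v6-forwarding.conf
cr0x@server:~$ sudo sysctl --system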
Task 12: See if a container has IPv6 addresses and routes
cr0x@server:~$ docker exec -it billing-api sh -lc 'ip -6 addr; ip -6 route'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
inet6 ::1/128 scope host
42: eth0@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
inet6 fd00:dead:beef::42/64 scope global
default via fd00:dead:beef::1 dev eth0 metric 1024
Meaning: The container has a stable IPv6 address on a ULA prefix. That’s not public internet-routable by itself, but it’s reachable wherever that prefix is routed (often “inside the org,” which can still be too big).
Decision: Decide whether containers should have IPv6 at all. If yes, decide where that prefix is routed and filter accordingly.
Task 13: Confirm what Docker thinks about the network’s IPv6 config
cr0x@server:~$ docker network inspect bridge --format '{{json .EnableIPv6}} {{json .IPAM.Config}}'
false [{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]
Meaning: Default bridge IPv6 is disabled here. If you still see IPv6 exposure, it’s likely via published ports on the host, not container-assigned global v6.
Decision: Focus on host listening sockets and host firewall rather than container addressability.
Task 14: Test reachability from the outside (or simulate it)
cr0x@server:~$ curl -g -6 -v 'http://[2001:db8:1234:5678:abcd:ef01:2345:6789]:8080/' 2>&1 | sed -n '1,12p'
* Trying 2001:db8:1234:5678:abcd:ef01:2345:6789:8080...
* Connected to 2001:db8:1234:5678:abcd:ef01:2345:6789 (2001:db8:1234:5678:abcd:ef01:2345:6789) port 8080 (#0)
> GET / HTTP/1.1
> Host: [2001:db8:1234:5678:abcd:ef01:2345:6789]:8080
> User-Agent: curl/7.88.1
> Accept: */*
Meaning: If this connects from a host outside your network, you have public exposure. If it only connects internally, you still have exposure—just to a different population (employees, VPN users, neighboring workloads).
Decision: If this should not be reachable, stop the bind and/or firewall it off now. Don’t wait for a ticket to become a postmortem.
Task 15: Check nftables ruleset for IPv6 coverage (modern systems)
cr0x@server:~$ nft list ruleset | sed -n '1,80p'
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
ct state established,related accept
iif "lo" accept
tcp dport 22 accept
}
chain forward {
type filter hook forward priority 0; policy drop;
ct state established,related accept
}
}
Meaning: Using an inet table covers both IPv4 and IPv6 with one ruleset. That’s usually what you want: fewer chances to “forget IPv6.”
Decision: If you’re not using an inet family table (or equivalent), seriously consider migrating. Dual maintenance of v4/v6 policy is how leaks are born.
Task 16: Audit Docker’s ip6tables toggle (and don’t assume it saves you)
cr0x@server:~$ docker info | sed -n '1,60p' | grep -E 'IPv6|iptables|Security Options'
IPv6: true
iptables: true
Security Options:
apparmor
seccomp
Meaning: Docker will program rules, but it will not enforce your intent. Docker’s job is connectivity; your job is “connectivity with constraints.”
Decision: Treat this as “plumbing present.” Still implement deny-by-default and explicit allowlists.
Three corporate-world mini-stories
Mini-story 1: The incident caused by a wrong assumption
A mid-size SaaS company rolled out a new internal dashboard. It was supposed to be reachable only through a corporate VPN. The app lived in a Docker container behind a simple -p 3000:3000 publish on a utility host. IPv4 access was blocked at the perimeter; life was good.
The wrong assumption was quiet: “If IPv4 is blocked, it’s blocked.” Their security group rules were IPv4-centric and their host firewall rules were iptables-only. Meanwhile, their cloud network team enabled IPv6 on the subnet as part of a broader migration, because “we’re going to need it eventually.”
Within a day, an automated scanner found the dashboard on the host’s global IPv6. Not through some clever exploit—just a plain TCP connect to the published port. The dashboard required login, but it had a password reset flow with email enumeration. That turned into a measurable incident: security review, forced resets, awkward internal comms.
The postmortem was painful for a reason: nobody did anything “weird.” Docker did what it does. The cloud did what it does. The firewall did exactly what it was told—on IPv4.
The fix was boring and effective: default-drop inbound on IPv6 at the host, explicit allowlists for published services, and a rule that every service publish must specify an address, never the implicit “all interfaces.”
Mini-story 2: The optimization that backfired
Another company got serious about latency and removed a reverse proxy hop. They moved from “host Nginx terminates TLS and proxies into Docker networks” to “publish container ports directly on the host for fewer moving parts.” Less overhead, fewer configs, fewer cert reload edge cases. On paper, it looked clean.
What they gained in microseconds they paid back in exposure. The reverse proxy had been bound to specific interfaces, had strict allowlists, and had a deliberate IPv6 stance. Direct publishing didn’t inherit any of that. Several services suddenly listened on [::] because the node was dual-stack. A couple of “internal” admin endpoints were now reachable from any IPv6-capable office network—and from the public internet in one environment where upstream filters were permissive.
The first symptom wasn’t even security. It was weird client behavior: some office clients connected over IPv6, others over IPv4, and the logging pipeline only recorded v4 headers. When incident response tried to trace a request, they were chasing ghosts: “No matching IPv4 source exists.”
They rolled back to the proxy, then reintroduced direct publishing only for carefully selected services, with explicit bind addresses and dual-stack firewall parity. The “optimization” wasn’t wrong; it was incomplete. The network stack doesn’t grade you on intent.
Mini-story 3: The boring but correct practice that saved the day
A fintech team had a policy that looked almost old-fashioned: every change to container publishing required an automated check that compared the expected listening sockets to the actual ones. It was basically a scripted ss + docker ps diff run in CI for infrastructure changes. Engineers groaned. It caught boring issues. It was not glamorous.
During a routine host rebuild, the base image changed and brought along a default IPv6 configuration plus nftables. Docker still worked, services still worked, and nobody noticed. But the check flagged a new listener: an internal metrics endpoint was now reachable on [::]:9100.
The team treated it as a defect, not a curiosity. They fixed it by binding metrics to loopback and adding an inet nftables rule to drop unexpected inbound by default. No incident, no pager, no emergency change window.
This is the kind of prevention that feels like wasted time until you watch the alternative. It’s also the kind of practice you can justify to executives without sounding like you’re selling fear: “We stop accidental exposure before it ships.”
Hardening patterns that actually work
Pattern A: Explicit binds for every published port
If you take only one thing from this piece, take this: never publish without an explicit bind address.
- Internal-only service: publish to 127.0.0.1 (and optionally ::1 if you truly need IPv6 loopback).
- Public service: publish to the specific public interface address (v4 and/or v6), not wildcard.
“But Docker Compose doesn’t make it easy.” It does. You just have to be explicit.
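A minimal Compose sketch of the same intent (service names, image tags, and the public address are illustrative):

services:
  billing-api:
    image: billing-api:1.2.3          # internal-only service
    ports:
      - "127.0.0.1:8080:80"           # loopback bind; never the implicit wildcard
  public-web:
    image: nginx:stable               # deliberately public service
    ports:
      - "203.0.113.10:443:443"        # explicit IPv4 bind; add a bracketed IPv6 address only if you mean it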
Pattern B: Default-deny inbound on IPv6, then allow what you mean
If the host has global IPv6, treat it like public IPv4. Same seriousness, same hygiene. That means:
- INPUT: drop inbound except SSH (from allowlisted ranges), your reverse proxy, and whatever else is truly required.
- FORWARD: drop by default; allow established flows and only the forwarding you intend into Docker networks.
- DOCKER-USER: put your container exposure policy here so Docker updates don’t “helpfully” undo your intent.
Pattern C: Prefer a single policy plane (nftables inet) where possible
If your platform supports nftables cleanly, an inet family table gives you one set of rules that applies to both IPv4 and IPv6. Fewer rulesets means fewer gaps.
This doesn’t magically make you secure. It just reduces the number of places you can forget to do security.
Pattern D: If you don’t need Docker IPv6, disable it intentionally
This is not an ideological statement. It’s an operational one. If you don’t have an IPv6 requirement for containers, you’re buying complexity without payoff.
Disable at the Docker layer and optionally at the host layer, depending on your environment’s needs. Host-level disable can have side effects in modern clouds, so do it with eyes open.
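At the Docker layer that is one setting, plus removing the v6 CIDR entries shown earlier. A minimal sketch (restarting the daemon interrupts running containers, so schedule it):

cr0x@server:~$ cat /etc/docker/daemon.json
{
  "ipv6": false
}
cr0x@server:~$ sudo systemctl restart docker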
Pattern E: Run public ingress through one choke point
A reverse proxy or ingress controller is not just “another hop.” It’s where you centralize:
- TLS policy and cert rotation
- Authentication, rate limiting, request size limits
- Access logs you can actually correlate
- Explicit v4/v6 binding behavior
Direct port publishing is fine for truly public services with good hygiene. It’s a trap for “internal” services that were never designed for the outside.
Quote (paraphrased idea) from James Hamilton (Amazon/AWS reliability engineering): “Everything fails; design so failures are contained and recoverable.”
Joke #2: The first rule of IPv6 club is you don’t talk about IPv6 club. The second rule is your container does.
Common mistakes: symptom → root cause → fix
These are the ones I keep seeing in real systems. The symptoms are usually confusing because IPv4 checks look fine.
1) Symptom: “Our port is blocked by firewall, but scanners still hit it”
Root cause: IPv4 firewall is configured; IPv6 is default-allow (or upstream allows it). Service is published on [::].
Fix: Add equivalent IPv6 filtering (nftables inet preferred), or explicitly bind publishes to IPv4 only, and verify with ss -lntp.
2) Symptom: “Only some clients can reach the service; logs don’t match”
Root cause: Dual-stack clients prefer IPv6 sometimes. Your observability pipeline and allowlists were IPv4-only.
Fix: Ensure logging captures remote v6, update allowlists to include IPv6 ranges, or disable IPv6 exposure for that service.
3) Symptom: “We disabled iptables rules, but Docker still exposes ports”
Root cause: Userland proxy or host-level listeners still accept connections; or nftables rules exist despite iptables view.
Fix: Verify actual listeners with ss. Inspect nftables with nft list ruleset. Don’t rely on one tool’s view.
4) Symptom: “Container has a ULA IPv6, but it’s reachable from other networks”
Root cause: ULAs can be routed internally. They’re not “private” in the NAT sense; they’re “not globally assigned.” Internal routing made them reachable.
Fix: Filter at boundaries, constrain routes, or avoid assigning IPv6 to containers that don’t need it.
5) Symptom: “We set DOCKER-USER rules; IPv6 still leaks”
Root cause: Rules were added for iptables (IPv4) only, not ip6tables; or rules match wrong interface; or traffic hits INPUT (host listener) not FORWARD.
Fix: Mirror policy in ip6tables or use nftables inet. Confirm whether the socket is host-level (docker-proxy) vs forwarded.
6) Symptom: “Disabling IPv6 broke package updates / cloud metadata”
Root cause: Your environment expects IPv6 for certain endpoints, or DNS returns AAAA first and your resolver behavior changes.
Fix: Don’t blanket-disable host IPv6 without testing. Instead, keep IPv6 but default-deny inbound and publish explicitly.
Checklists / step-by-step plan
Step-by-step: Lock down a Docker host that accidentally became dual-stack
- Inventory exposure. Run ss -lntp and docker ps to find published ports and v6 listeners.
- Decide your stance. Pick one: no IPv6, IPv6 internal-only, or IPv6 public for specific services.
- Fix binds. Update Compose/service definitions to publish with explicit addresses (loopback for internal).
- Enforce policy in firewall. Use nftables inet where possible; otherwise mirror iptables/ip6tables.
- Use DOCKER-USER for container-forward policy. Default-drop, allowlist required inbound.
- Verify from outside. Test IPv6 reachability with curl to the host’s global IPv6 and the published ports.
- Make it stick. Persist firewall rules. Ensure Docker restarts don’t wipe your policy. Add a CI check that flags new listeners (see the sketch after this list).
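One way to make it stick is the check from the third mini-story: diff the host’s actual listeners against an allowlist on every infrastructure change. A minimal sketch, assuming a checked-in expected_listeners.txt (file name and format are illustrative):

#!/bin/sh
# Fail the pipeline if the host grows a listener nobody declared (e.g. a new [::]:9100).
ss -H -lntu | awk '{print $1, $5}' | sort -u > /tmp/actual_listeners.txt
if ! diff -u expected_listeners.txt /tmp/actual_listeners.txt; then
  echo "Listener drift detected: review before shipping" >&2
  exit 1
fi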
Checklist: What “secure enough” looks like for most teams
- Every published port specifies a bind address (no implicit wildcard).
- Firewall policy is default-deny for inbound IPv6 (and IPv4) with explicit allows.
- DOCKER-USER chain exists and contains your org’s intent (v4 and v6).
- At least one external scan/health check tests IPv6 reachability, not just IPv4.
- Logs include remote IPv6 addresses and your alerting doesn’t drop them on parse.
- Cloud security groups / NACLs include IPv6 rules, reviewed like IPv4.
Checklist: If you truly want “no IPv6 here”
- Docker daemon "ipv6": false unless required.
- Host firewall blocks inbound IPv6 regardless (defense in depth).
- Cloud network config doesn’t assign global IPv6 to the instance unless required.
- Monitoring still works (some agents may prefer IPv6).
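If you go as far as disabling IPv6 at the host, a sysctl drop-in is the usual mechanism (file name illustrative). Test it before rolling it out widely: software that expects ::1, or agents that prefer IPv6, can misbehave.

cr0x@server:~$ cat /etc/sysctl.d/99-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
cr0x@server:~$ sudo sysctl --system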
FAQ
1) Is this a Docker bug?
Usually no. It’s Docker doing what it was designed to do: make networking work with minimal operator input. The “bug” is assuming minimal input equals minimal exposure.
2) If my container only listens on IPv4, can it still be reachable over IPv6?
Yes, depending on how the host publishes the port and whether a proxy translates/accepts on v6 and forwards to v4 locally. Confirm with ss on the host and real connection tests.
3) Is binding to 127.0.0.1 enough?
For IPv4 exposure, yes. For IPv6, you must also ensure you didn’t publish on [::]. In Compose, be explicit; don’t rely on defaults. Verify with docker ps and docker port.
4) Should I disable IPv6 everywhere to be safe?
If you don’t need it, disabling can reduce risk. But blanket host-level disable can break cloud networking assumptions and troubleshooting. A more resilient default is: keep IPv6 enabled, default-deny inbound, and publish explicitly.
5) Why does my IPv6 firewall look empty even though I set rules?
You may be looking at ip6tables while the system actually uses nftables, or vice versa. Always check listeners (ss) and inspect nftables rulesets if you’re on iptables-nft.
6) What’s the single best place to enforce container ingress policy?
The DOCKER-USER chain (or equivalent in nftables) is the best hook for “before Docker does its thing.” It’s designed for operator policy.
7) Does Kubernetes have the same problem?
Different mechanics, same class of failure. NodePorts, hostNetwork pods, and CNI IPv6 support can expose services over IPv6 if your node firewall and cloud rules aren’t symmetric.
8) How do I prove to myself that I fixed it?
Three proofs: (1) ss -lntp shows no unexpected [::] listeners, (2) firewall policy for IPv6 is explicit (default-deny or strict allowlists), and (3) an external IPv6 connection test fails for ports that should be private.
9) What about "ipv6": true in Docker—does that automatically make things public?
Not automatically. Enabling IPv6 gives containers v6 addresses on configured networks and may set up forwarding rules. Public exposure still depends on routing and filtering. But it increases the number of ways to be surprised, so pair it with explicit policy.
Conclusion: next steps you can execute
Docker IPv6 leaks aren’t mysterious. They’re a predictable outcome of dual-stack hosts, implicit publishes, and firewall policy that only speaks IPv4. You don’t fix them with hope. You fix them with explicit binds and default-deny rules that cover both protocol families.
Do this next, in this order:
- Run ip -6 addr, ss -lntp, and docker ps to identify real exposure.
- Decide whether IPv6 is required on this host and in these containers.
- Make publishing explicit (bind addresses) and centralize public ingress when possible.
- Enforce policy in DOCKER-USER (IPv4 and IPv6) or nftables inet with default-deny.
- Verify from the outside, and add an automated check so this doesn’t regress quietly next quarter.