Docker IPv6 in Containers: Enable It Properly (and Avoid Surprise Leaks)

You turn on IPv6 in Docker because product needs “real internet addresses,” or because IPv4 exhaustion is finally your problem.
The containers come up, things mostly work… and then you notice outbound traffic bypassing your carefully curated IPv4 NAT and egress controls.
Or worse: a service is reachable over IPv6 from places it absolutely should not be.

Docker’s IPv6 story is workable, but it’s not “flip the switch and go home.” You have to decide: routed IPv6, NAT66 (yes, it exists),
or “no global egress unless explicitly allowed.” Then you have to enforce it with the firewall you actually run, not the one you wish you ran.

What goes wrong with Docker IPv6 (and why it surprises adults)

Docker’s default mental model is “containers live behind IPv4 NAT on a bridge.” You publish ports, and inbound works. Outbound works.
You set a couple of iptables rules, and you feel in control.

IPv6 changes the contract. If your host has global IPv6 and you hand globally routable IPv6 addresses to containers, they’re no longer
inherently behind NAT. That means:

  • Egress control can silently fail. Your IPv4-only controls don’t apply to IPv6. The container happily talks to the world over v6.
  • Ingress exposure can surprise you. If you accidentally allow forwarding (or Docker inserts permissive rules), containers may be reachable.
  • Observability gaps show up. Flow logs, proxy logs, and “what IP did we use” dashboards often assume IPv4.
  • DNS can steer traffic around policy. AAAA answers + Happy Eyeballs can pick IPv6 even when you “intended” IPv4.

One more twist: Docker has multiple places where IPv6 can be “enabled,” and they don’t all mean the same thing.
There’s daemon-level IPv6, per-network IPv6, address pools, and then whatever your host kernel and firewall are actually doing.
When people say “IPv6 is enabled,” ask: “Enabled where, and with what egress policy?”

Facts and historical context (to calibrate your instincts)

Some small, concrete facts help prevent big, expensive assumptions:

  1. IPv6 was standardized in the late 1990s (RFC 2460 era) largely because IPv4 exhaustion was a slow-motion train wreck everyone could see coming.
  2. NAT was never the security feature people treated it as. It hid hosts, but it didn’t replace a firewall; IPv6 removes that accidental “comfort blanket.”
  3. IPv6’s huge address space changed allocation thinking. The goal was to restore end-to-end addressing and reduce stateful address translation.
  4. Happy Eyeballs exists because IPv6 rollout was uneven. Clients race IPv6 and IPv4 to avoid user-visible slowness when one path is broken.
  5. Docker originally leaned heavily on iptables behavior. As distros moved to nftables backends, that “magic” became more fragile.
  6. Linux has had strong IPv6 support for decades, but the default sysctls and RA (router advertisement) behaviors vary by distro and cloud image.
  7. NAT66 is real but controversial. It exists for certain transition and policy needs, but it’s not the IPv6 “happy path.”
  8. Many corporate networks still block inbound IPv6 by policy while allowing outbound IPv6, which is exactly how “surprise leaks” happen.
  9. Cloud IPv6 is often routed, not bridged. You frequently get a /64 (or larger) routed to an interface; your job is to route prefixes to container nets cleanly.

A sane mental model: bridged L2, routed L3, and the “leak” you didn’t mean

Docker’s default bridge network is L2-ish on the host: each container gets a veth interface attached to a Linux bridge, and the host performs L3 routing/forwarding.
With IPv4, Docker typically SNATs outbound traffic to the host’s IP (MASQUERADE). So the world sees the host, not the container.

With IPv6, you have options. If you assign globally routable IPv6 to containers and you route that prefix properly, the world can see container addresses.
That’s fine—good, even—if you wanted it and you firewall accordingly.
It’s a problem when your organization’s “controls” were implemented as:

  • “Only allow outbound via our IPv4 proxy.”
  • “Only allow inbound to published ports.”
  • “Our security team watches the NAT gateway logs.”

The leak is simple: IPv6 becomes a second, less-governed internet path. Your container can resolve AAAA records and connect directly.
No proxy. No NAT gateway. No logs where you expected them.

One line to keep on your wall, not because it’s poetic, but because it’s operationally true:
“Hope is not a strategy,” a paraphrase attributed to various engineers and leaders in ops circles.

Design choices: routed IPv6 vs NAT66 vs no-global-by-default

Decide what you want. If you don’t, your environment will decide for you, usually during an incident call.

Option A: Routed IPv6 (recommended when you can do it cleanly)

Containers get addresses from a prefix you control. You route that prefix from your upstream router (or cloud network) to the Docker host,
then the host routes to container bridges. No NAT. Clean end-to-end. Easy to reason about once you accept that firewalling is mandatory.

Pros: real IPv6, no translation state, better transparency, can do inbound properly with firewall + published services.
Cons: requires prefix delegation or routed subnet; you must implement IPv6 firewalling intentionally.

Option B: NAT66 (acceptable as a policy hack, not a lifestyle)

You can keep the “containers are hidden behind the host” posture by NATing IPv6 egress.
It’s not “pure IPv6,” but it can be a pragmatic control point when upstream routing is hard or when policy demands a single egress identity.

Pros: keeps egress identity stable, reduces inbound exposure by default.
Cons: introduces translation state, breaks end-to-end assumptions, and can confuse troubleshooting.

Option C: No global IPv6 for containers (ULA-only + controlled egress)

Give containers only ULA addresses (fc00::/7, in practice fd00::/8) internally, and force egress through a proxy or gateway that you control.
This aligns with “containers should not talk to the internet unless explicitly allowed.”

Pros: strongest default control, least surprise exposure.
Cons: you’ll build or run a gateway anyway; some apps expect direct IPv6.

Pick one and write it down. Make it a platform invariant. Otherwise each team will invent their own IPv6 “just this once” approach.

Enable IPv6 properly: daemon, networks, and address plan

There are two big ways people get burned:

  • They enable Docker IPv6 but don’t plan the prefix, so address selection becomes accidental and inconsistent across hosts.
  • They plan the prefix but forget the firewall, so “routed IPv6” becomes “public container zoo.”

Address planning that won’t haunt you

You want stable, non-overlapping prefixes per Docker host or per cluster segment. In a routed IPv6 design, a common pattern is:

  • Allocate a larger prefix to your environment (e.g., a /56 or /48 from your provider).
  • Assign a /64 per Docker bridge network (because SLAAC and many stacks assume /64 boundaries).
  • Route those /64s to the host, then let Docker assign addresses within them (a concrete example plan follows).
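
For concreteness, here is a hedged example plan. The prefixes are documentation space (2001:db8::/32) standing in for whatever your provider actually routes to you, and the host and network names are made up:

2001:db8:40::/56       environment allocation (routed to your site)
2001:db8:40:10::/64    host A, Docker network "appv6"
2001:db8:40:11::/64    host A, Docker network "frontend"
2001:db8:40:20::/64    host B, Docker network "appv6"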

If you cannot obtain routable space, ULA can work internally, but be honest: ULA does not magically give you internet connectivity.
You’ll still need NAT66 or a proxy/gateway.
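
If NAT66 is the road you take, a minimal sketch looks like the following, assuming a made-up ULA container subnet fd00:abcd::/64 on the default bridge and eth0 as the uplink. Recent Docker versions with "ip6tables": true may already add an equivalent masquerade rule for IPv6-enabled networks, so check ip6tables -t nat -S before doubling up.

# NAT66: rewrite container IPv6 egress to the host's address
sudo ip6tables -t nat -A POSTROUTING -s fd00:abcd::/64 ! -o docker0 -j MASQUERADE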

Docker daemon knobs that matter

Docker’s IPv6 enablement typically starts in /etc/docker/daemon.json. The exact knobs depend on your goals:

  • "ipv6": true enables IPv6 support for the default bridge.
  • "fixed-cidr-v6": "2001:db8:.../64" (example prefix) assigns a fixed IPv6 subnet to the default bridge network.
  • "ip6tables": true tells Docker to manage IPv6 firewall rules. This can help, but don’t outsource your policy to a daemon.

If you run multiple user-defined bridge networks, you’ll likely use docker network create --ipv6 with explicit subnets instead of relying solely on the default bridge.

Joke #1: IPv6 is like moving into a house with a lot more rooms—great until you realize you forgot to install doors.

Firewalling Docker IPv6: stop relying on vibes

Your policy must exist in the host firewall, not in a spreadsheet. With IPv6, the host is a router. Treat it like one.
That means:

  • Explicit default-drop on forwarded traffic, then allow what you mean.
  • Explicit egress policy (especially if you run proxies, DLP controls, or audit gateways).
  • Logging in the right places so “why can’t it connect” isn’t a two-hour archaeology project.

iptables vs nftables reality

Many distros now use nftables under the hood even when you type iptables. Docker historically managed iptables rules directly.
This can lead to “it worked last year, now it doesn’t” behavior after upgrades.

Your job: know which firewall backend is active, and test rules with real traffic.
If you’re using nftables explicitly, write nftables rules. Don’t count on Docker to keep you safe.
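
As a minimal sketch, assuming the "table inet filter" ruleset shown later in Task 10, the br-7b3b9c2f7a12 bridge, and the appv6 subnet used throughout this article, an explicit forward allow could look like this; return traffic is already covered by the established/related rule in that chain:

sudo nft add rule inet filter forward iifname "br-7b3b9c2f7a12" oifname "eth0" ip6 saddr 2001:db8:dead:cafe::/64 accept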

Preventing surprise IPv6 egress

The single most common “leak” is: IPv4 egress goes through your NAT/proxy; IPv6 egress goes direct.
The fix is equally simple and equally ignored: implement outbound IPv6 controls on the host’s forwarding path.

If your policy is “containers must use proxy,” then your firewall should block direct IPv6 egress except to the proxy.
If your policy is “containers can talk out,” then allow it, but make sure you can observe it and rate-limit/segment it if needed.
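
Here is a minimal sketch of the “proxy only” posture using DOCKER-USER, assuming a hypothetical proxy at 2001:db8:10:20::10 on port 3128, eth0 as the uplink, and user-defined br-* bridges (add matching rules for docker0 if you still use the default bridge). Each "-I DOCKER-USER 1" pushes its rule to the top, so the final order is: established/related, proxy, drop.

sudo ip6tables -I DOCKER-USER 1 -i br-+ -o eth0 -j DROP
sudo ip6tables -I DOCKER-USER 1 -i br-+ -o eth0 -p tcp -d 2001:db8:10:20::10 --dport 3128 -j RETURN
sudo ip6tables -I DOCKER-USER 1 -i br-+ -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN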

DNS and happy eyeballs: where “it works on my laptop” is born

Once IPv6 is available, DNS returns AAAA records. Many clients will prefer IPv6 or will race IPv6 and IPv4.
That means “we blocked IPv4 to that destination” is not equivalent to “we blocked the destination.”

Check your resolvers. Check your container base images. Some images ship with resolv.conf tweaks, or have different libc behavior.
If you run internal DNS that synthesizes AAAA (DNS64), you’ll get IPv6 egress even when the service is IPv4-only upstream.

The operational takeaway: diagnose connectivity with both A and AAAA paths, and don’t treat “ping works” as proof of application reachability.
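
A quick way to compare both paths from inside a container, reusing the appv6 network from this article (example.com is just a convenient dual-stack target; swap in a destination you actually care about):

docker run --rm --network appv6 alpine sh -c '
  apk add --no-cache curl bind-tools >/dev/null
  nslookup -type=AAAA example.com
  curl -4 -sS -m 5 -o /dev/null -w "v4: %{http_code}\n" https://example.com
  curl -6 -sS -m 5 -o /dev/null -w "v6: %{http_code}\n" https://example.com'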

Practical tasks: commands, outputs, and decisions (12+)

These are the tasks I actually run when I’m enabling IPv6 in Docker or debugging it at 2 a.m.
Each includes: command, example output, what it means, and the decision you make.

Task 1: Confirm the host actually has IPv6 and a default route

cr0x@server:~$ ip -6 addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet6 2001:db8:10:20::15/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe4e:66a1/64 scope link
       valid_lft forever preferred_lft forever

Meaning: You have a global IPv6 and a link-local. Good start.
Decision: If there’s no global IPv6, stop. You can’t do routed IPv6 without upstream support. Consider ULA + gateway or fix the host’s IPv6 first.

cr0x@server:~$ ip -6 route show default
default via 2001:db8:10:20::1 dev eth0 metric 100

Meaning: The host has a default IPv6 route.
Decision: If default route is missing, containers will fail outbound even if addresses exist. Fix upstream routing/RA/static routes before blaming Docker.

Task 2: Check kernel forwarding and IPv6 sysctls

cr0x@server:~$ sysctl net.ipv6.conf.all.forwarding net.ipv6.conf.default.forwarding
net.ipv6.conf.all.forwarding = 0
net.ipv6.conf.default.forwarding = 0

Meaning: Host is not forwarding IPv6 packets. Containers can have IPv6 but won’t route out via the host.
Decision: For routed IPv6, set forwarding=1 (and ensure firewall policy). If you intend “no forwarding,” then this is correct—enforce egress another way.
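
A minimal sketch for persisting that decision, assuming a conventional sysctl.d drop-in path (nothing Docker mandates). One nuance worth a comment: on hosts that learn their own address and default route from router advertisements, enabling forwarding means you also need accept_ra=2 on the uplink, or the host stops honoring RAs.

# Persist IPv6 forwarding for a routed container design
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-docker-ipv6.conf
# Only needed if eth0 gets its address/route via RA while forwarding is on
echo 'net.ipv6.conf.eth0.accept_ra = 2' | sudo tee -a /etc/sysctl.d/99-docker-ipv6.conf
sudo sysctl --system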

Task 3: Inspect Docker daemon IPv6 configuration

cr0x@server:~$ sudo cat /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:dead:beef::/64",
  "ip6tables": true
}

Meaning: Docker IPv6 is enabled on the default bridge with a fixed /64, and Docker will manage IPv6 iptables rules.
Decision: Validate that this /64 is routed to the host (or at least reachable). If it’s random or overlaps elsewhere, fix the plan before restarting Docker.

Task 4: Restart Docker and confirm it applied the config

cr0x@server:~$ sudo systemctl restart docker
cr0x@server:~$ systemctl status docker --no-pager -l
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled)
     Active: active (running) since Tue 2026-01-02 09:18:31 UTC; 6s ago
     Docs: https://docs.docker.com
   Main PID: 1462 (dockerd)
     Tasks: 19
     Memory: 76.3M
     CGroup: /system.slice/docker.service
             └─1462 /usr/bin/dockerd -H fd://

Meaning: Docker restarted cleanly.
Decision: If it fails to start, check journal logs for JSON syntax errors or unsupported options in your Docker version.

Task 5: Verify the default bridge has an IPv6 subnet

cr0x@server:~$ docker network inspect bridge --format '{{json .IPAM.Config}}'
[{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"},{"Subnet":"2001:db8:dead:beef::/64","Gateway":"2001:db8:dead:beef::1"}]

Meaning: The default bridge now has an IPv6 /64 and a gateway address.
Decision: If IPv6 subnet is absent, your daemon config isn’t applied or IPv6 is disabled. Fix before moving on.

Task 6: Create a user-defined network with explicit IPv6

cr0x@server:~$ docker network create --driver bridge --ipv6 --subnet 2001:db8:dead:cafe::/64 appv6
a1d8d3f6a2e7b2c1c0c9b5c0d74a2c1c4f6a8a1c2e1f0b9e8d7c6b5a4f3e2d1

Meaning: You now have an IPv6-enabled bridge network with an explicit /64. This is usually easier to manage than the default bridge.
Decision: If your org wants per-app segmentation, do this per stack and apply firewall policy per bridge interface.

Task 7: Run a container and confirm it gets IPv6

cr0x@server:~$ docker run --rm --network appv6 alpine sh -c "ip -6 addr show dev eth0"
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet6 2001:db8:dead:cafe::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever

Meaning: Container has a global-ish IPv6 from your subnet and a link-local address.
Decision: If it only has link-local, Docker IPv6 isn’t properly enabled for that network or the subnet config didn’t apply.

Task 8: Test outbound IPv6 connectivity from the container

cr0x@server:~$ docker run --rm --network appv6 alpine sh -c "apk add --no-cache curl >/dev/null; curl -6 -sS -m 3 https://ifconfig.co/ip"
2001:db8:10:20::15

Meaning: Outbound IPv6 works. In this example the public service reports the host’s address, which implies NAT66 somewhere on the egress path; with routed IPv6 and no NAT it would report the container’s address instead.
Decision: If policy requires egress via proxy, this “direct IPv6” is a leak. Block it or route through the proxy explicitly.

Task 9: See whether Docker installed IPv6 firewall rules (and whether they’re sane)

cr0x@server:~$ sudo ip6tables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -j RETURN

Meaning: DOCKER-USER exists but contains only the default RETURN rule. Docker won’t enforce your policy for you.
Decision: Put your container egress/ingress policy in DOCKER-USER (or nftables equivalent) so it survives Docker’s dynamic rules.

Task 10: Check whether forwarding is allowed by firewall (nftables path)

cr0x@server:~$ sudo nft list ruleset | sed -n '1,120p'
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept
    ct state established,related accept
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
  }
  chain output {
    type filter hook output priority 0; policy accept;
  }
}

Meaning: Default-drop on forward. Good baseline: nothing transits unless you allow it. Docker might still insert rules elsewhere depending on setup.
Decision: Add explicit allows for the Docker bridge interfaces to the destinations you want. If your forward chain is accept-by-default, expect “surprise exposure.”

Task 11: Confirm which interfaces Docker created and map them to networks

cr0x@server:~$ ip link show type bridge
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
8: br-7b3b9c2f7a12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500

Meaning: docker0 (default bridge) and a user-defined bridge (br-…).
Decision: Use per-bridge firewall rules. If you lump everything into one allowlist, you lose segmentation and debugging clarity.

Task 12: Verify routes to the container subnet exist on the host

cr0x@server:~$ ip -6 route show | grep -E 'dead:beef|dead:cafe'
2001:db8:dead:beef::/64 dev docker0 proto kernel metric 256
2001:db8:dead:cafe::/64 dev br-7b3b9c2f7a12 proto kernel metric 256

Meaning: Host has connected routes to container /64s via the Docker bridges.
Decision: If these routes are missing, Docker network config is wrong or the bridge isn’t up. Fix before touching upstream routing.

Task 13: Check upstream reachability to the container subnet (routing is not optional)

cr0x@server:~$ ping6 -c 2 2001:db8:dead:cafe::2
PING 2001:db8:dead:cafe::2(2001:db8:dead:cafe::2) 56 data bytes
64 bytes from 2001:db8:dead:cafe::2: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 2001:db8:dead:cafe::2: icmp_seq=2 ttl=64 time=0.118 ms

--- 2001:db8:dead:cafe::2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms

Meaning: From host to container is fine (local). This does not prove the outside world can reach the container subnet.
Decision: If you need inbound to containers, ensure your upstream router has routes for these /64s pointing to the host.
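
If that upstream router happens to be a Linux box you control, a hedged sketch of the route (reusing the example addresses from Tasks 1 and 12) is below; on cloud networks the equivalent is a route-table entry pointing the /64 at the Docker host.

# On the upstream router: send the container /64 toward the Docker host
ip -6 route add 2001:db8:dead:cafe::/64 via 2001:db8:10:20::15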

Task 14: Detect “IPv6 bypasses proxy” in one minute

cr0x@server:~$ docker run --rm --network appv6 alpine sh -c "apk add --no-cache bind-tools curl >/dev/null; nslookup example.com | sed -n '1,12p'"
Server:         127.0.0.11
Address:        127.0.0.11:53

Non-authoritative answer:
Name:   example.com
Address: 93.184.216.34
Name:   example.com
Address: 2606:2800:220:1:248:1893:25c8:1946

Meaning: The container sees both A and AAAA. Many clients will use IPv6 if it works.
Decision: If your security posture assumes proxy/NAT logging, implement IPv6 egress controls now, not after an audit asks why data moved outside your controls.

Joke #2: If you’re not firewalling IPv6, you’re basically running production with the screen door set to “demo mode.”

Fast diagnosis playbook

When IPv6 in containers is broken (or too “working”), you want a short path to truth.
Here’s the order that finds the bottleneck quickly.

First: host-level IPv6 health (don’t start inside the container)

  • Does the host have a global IPv6 address on the uplink interface?
  • Is there a default IPv6 route?
  • Is IPv6 forwarding enabled if you’re routing container traffic?

If host IPv6 is shaky, container IPv6 is just theater.

Second: Docker network plumbing

  • Does the Docker network have an IPv6 subnet and gateway?
  • Does the container get a global/ULA IPv6 address on eth0?
  • Does the host have a connected route to that subnet via the bridge?

Third: firewall and policy (the usual culprit)

  • Are forwarded IPv6 packets allowed from container bridge to uplink?
  • Is ICMPv6 being blocked incorrectly (breaks PMTU, neighbor discovery, and general sanity)?
  • Is your egress policy implemented for IPv6, or only IPv4?

Fourth: DNS and application behavior

  • Is DNS returning AAAA? Is the app using IPv6 unexpectedly?
  • Is there DNS64 or an internal resolver rewriting behavior?
  • Do you see different behavior with curl -4 vs curl -6?

Fifth: upstream routing (only if you need inbound)

  • Are routes for container /64s present on the upstream router/cloud route table?
  • Is reverse path filtering or anti-spoofing dropping traffic?

Common mistakes: symptom → root cause → fix

These are not “theoretical.” These are the ones that show up in incident timelines.

1) Containers have IPv6 addresses but can’t reach anything over IPv6

Symptom: curl -6 times out; DNS shows AAAA; IPv4 works.
Root cause: Host IPv6 forwarding disabled, or firewall forward chain drops IPv6.
Fix: Enable net.ipv6.conf.all.forwarding=1 for routed designs and allow forwarding for the Docker bridge interface; keep default-drop but add specific allows.

2) IPv6 egress bypasses corporate proxy/NAT logging

Symptom: Security team sees traffic to external IPv6 destinations not visible in IPv4 NAT logs.
Root cause: No IPv6 egress policy; clients prefer IPv6 due to AAAA + Happy Eyeballs.
Fix: Block direct IPv6 egress from container bridges except to approved gateways/proxies, or force proxy usage; validate with curl -6 tests.

3) A container is reachable from the internet over IPv6 without publishing ports

Symptom: External scan hits a service listening on container IPv6; team swears they didn’t publish it.
Root cause: Routed IPv6 + permissive forwarding/input rules allow inbound; Docker doesn’t “hide” services like NAT did in IPv4 land.
Fix: Default-drop inbound/forwarded IPv6; explicitly allow only required ports to specific containers; consider separate networks for public-facing workloads.

4) Things work on one host but not another

Symptom: Same compose file, different behavior across nodes.
Root cause: Different firewall backend (iptables vs nft), different sysctls, or different upstream IPv6 provisioning (some nodes have v6 routes, some don’t).
Fix: Standardize host baseline: sysctls, firewall tooling, daemon.json, and IPv6 provisioning validation in CI for node images.

5) Random stalls and weird MTU issues over IPv6

Symptom: Large transfers hang; small requests succeed; logs show retransmits.
Root cause: ICMPv6 blocked (PMTU discovery breaks), or MTU mismatch on overlay/underlay.
Fix: Allow essential ICMPv6 types; verify path MTU; avoid blanket “block all ICMP” rules that came from 2004 blogs.
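
A minimal sketch against the nftables baseline from Task 10: the forwarded path needs the error and PMTU types, while neighbor and router discovery are link-local and belong on the input chain.

sudo nft add rule inet filter forward icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, echo-reply } accept
sudo nft add rule inet filter input icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, nd-router-solicit, nd-router-advert } accept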

6) “We enabled IPv6” but only link-local addresses exist

Symptom: Container shows only fe80:: addresses.
Root cause: Network not created with --ipv6 or daemon-level IPv6 not enabled; missing fixed-cidr-v6 for default bridge.
Fix: Enable IPv6 in daemon and/or per-network; re-create networks as needed (carefully, with maintenance plan).

Three corporate mini-stories from the IPv6 trenches

Mini-story 1: The incident caused by a wrong assumption

A mid-sized SaaS company migrated a busy ingestion service into containers on a fleet of Linux hosts. IPv4 egress was tightly controlled:
all outbound traffic from workloads went through a monitored NAT gateway and a proxy tier. Audit liked it. Finance liked it. Everyone slept.

The platform team enabled IPv6 on the hosts because a new partner API was “IPv6-first,” and they wanted to avoid yet another gateway translation layer.
They flipped Docker’s IPv6 setting, validated that containers could reach the partner over IPv6, and moved on.

Two weeks later, security flagged unusual outbound connections to external destinations that didn’t show up in the NAT gateway logs.
It wasn’t malware. It was normal application telemetry, direct to vendor endpoints, choosing IPv6 because the vendor had AAAA records.
Nobody had decided whether telemetry traffic was allowed to bypass the proxy. It just did.

The incident response was awkward because nothing was “hacked.” The system did exactly what it was configured to do.
The wrong assumption was cultural: people thought “NAT equals egress control.” IPv6 removed NAT, and therefore removed the illusion.

Fixing it was straightforward but politically delicate: they implemented explicit IPv6 egress rules on the Docker hosts and forced outbound HTTP(S)
through a controlled proxy, for both IPv4 and IPv6. The lesson that stuck wasn’t about Docker; it was that network policy must be explicit per protocol family.

Mini-story 2: The optimization that backfired

A large enterprise had a container platform where IPv6 was “supported,” but only in a limited way: ULA inside, NAT66 at the edge.
The networking team wanted to reduce state and simplify troubleshooting, so they proposed “routed IPv6 everywhere.”
Sounds clean. It is, when you do it deliberately.

They rolled out routed /64s per host and allowed forwarding broadly because they didn’t want to break workloads.
The plan was to tighten firewall rules later, after they observed typical traffic patterns.
That “later” was not scheduled. It was just a mood.

Within a month, a vulnerability scan found several internal-only admin UIs reachable over IPv6 from adjacent networks.
They weren’t “public internet exposed,” but they were exposed beyond the intended segment, which was enough to trigger a compliance scramble.
The services had never been reachable over IPv4 due to NAT and default-deny in that direction, so owners didn’t consider them risky.

The backfire wasn’t routed IPv6. Routed IPv6 was the right architecture. The backfire was the “temporary permissive firewall.”
Optimization without guardrails is just speed-running into a postmortem.

They ended up doing the work they should have done first: default-drop forwarding, explicit allows by network and port, and a formal “public vs private” network split.
Routed IPv6 stayed. The “we’ll tighten later” habit did not.

Mini-story 3: The boring but correct practice that saved the day

A payments-adjacent company ran container workloads with strict change control. Not fun, but effective.
When they started adding IPv6, they wrote a one-page standard: address plan, per-host prefix allocation, and firewall invariants.
Every host build validated the same sysctls and firewall rules. Every Docker network was created explicitly with a known /64.

Months later, a cloud image update changed firewall behavior subtly: the distro moved to nftables rules with a different default policy.
On a less disciplined platform, this is where IPv6 would either silently stop working or silently open up.
Here, the host validation tests failed during bake: containers lost outbound IPv6 because forwarding rules weren’t present.

The fix was boring: update the nftables ruleset template, re-run the validation, bake again.
No incident. No emergency change. No “why does only IPv6 break?”

The practice that saved them was also the least glamorous: treat IPv6 as a first-class citizen in baseline configuration tests.
If you only test IPv4 in your golden image pipeline, you’re choosing to be surprised later.

Checklists / step-by-step plan

Here’s the plan I’d use to roll out Docker IPv6 in a production environment where “oops” costs money.
Pick the design choice first, then execute.

Checklist A: Pre-flight (host and upstream)

  1. Confirm the host has stable global IPv6 and a default route.
  2. Decide: routed IPv6, NAT66, or ULA-only.
  3. Obtain or allocate a prefix plan (per host or per network). Avoid overlaps across environments.
  4. Set sysctls: enable IPv6 forwarding if routing containers; ensure RA behavior matches your environment.
  5. Define firewall baseline: default-drop forwarding; allow required ICMPv6; decide on outbound controls.

Checklist B: Docker configuration

  1. Set /etc/docker/daemon.json with "ipv6": true and a "fixed-cidr-v6" if using default bridge.
  2. Prefer user-defined networks with explicit IPv6 subnets for apps.
  3. Restart Docker during a maintenance window if it impacts running workloads.
  4. Validate network inspect output includes IPv6 subnet/gateway.

Checklist C: Policy enforcement (avoid leaks)

  1. Implement IPv6 egress policy: allow only what you mean (proxy, specific destinations, or full outbound).
  2. Implement IPv6 ingress policy: default-drop, allow only published services, and ensure container subnets aren’t broadly reachable.
  3. Log drops in a rate-limited way so you can debug without turning your disk into a bonfire.
  4. Test AAAA-driven behavior: compare curl -4 and curl -6 from containers.

Checklist D: Validation gates before rollout

  1. Connectivity: container to internet over IPv6 (if allowed) and to internal dependencies.
  2. Leak test: confirm direct IPv6 egress is blocked if policy requires proxy (a script sketch follows this checklist).
  3. Exposure test: confirm containers are not reachable over IPv6 unless intended.
  4. Observability: confirm logs/metrics include IPv6 (client IP parsing, dashboards, alerting).
  5. Upgrade test: ensure firewall tooling changes (iptables/nftables) are caught in image validation.
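
A minimal sketch of gates 2 and 3 as a script, assuming a “proxy only” policy, the appv6 network from this article, and the nftables baseline from Task 10; destinations and names are placeholders, not a finished test suite:

# Leak test: direct IPv6 egress from the appv6 network should fail
if docker run --rm --network appv6 alpine sh -c \
    "apk add --no-cache curl >/dev/null; curl -6 -sS -m 3 -o /dev/null https://example.com"; then
  echo "FAIL: direct IPv6 egress allowed"; exit 1
fi
# Exposure test: forwarded IPv6 must still be default-drop on this host
sudo nft list chain inet filter forward | grep -q 'policy drop' || { echo "FAIL: forward policy is not drop"; exit 1; }
echo "OK: IPv6 gates passed"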

FAQ

1) Should I enable Docker IPv6 globally or per network?

Per network, with explicit subnets, is usually cleaner. Enable at daemon level to allow it, then create user-defined networks with --ipv6.
You’ll thank yourself when you need segmentation and predictable addressing.

2) Do I need a /64 for every Docker network?

Practically, yes, if you want to avoid surprises. A lot of IPv6 tooling and assumptions revolve around /64. Smaller prefixes can work in some routed designs,
but you’ll spend your time fighting edge cases rather than running services.

3) Is NAT66 “wrong”?

It’s not a moral failure. It’s a trade. If your upstream routing is constrained or you need a single egress identity, NAT66 can be pragmatic.
Just document it and expect some troubleshooting complexity.

4) Why did my egress controls stop working after enabling IPv6?

Because your controls were IPv4-specific (NAT gateway logs, IPv4 firewall rules, IPv4-only proxy enforcement). IPv6 created a second path.
Fix by enforcing policy for IPv6 explicitly—don’t try to “hope” clients keep using IPv4.

5) Can containers be reachable over IPv6 without publishing ports?

Yes, in routed IPv6 designs, if your firewall allows it and routing exists. IPv4 NAT often masked this reality.
With IPv6, assume that a listening socket is reachable unless you block it.

6) Do I need to allow ICMPv6? Can’t I just block it like we did with ICMP?

You need essential ICMPv6 for neighbor discovery and PMTU discovery. Over-blocking ICMPv6 leads to weird, intermittent failures that waste weekends.
Allow specific types; don’t blanket-drop.

7) Why does curl use IPv6 sometimes even when IPv4 works?

DNS returns AAAA records, and client libraries often prefer IPv6 or race IPv6/IPv4 (Happy Eyeballs).
If IPv6 is available and not blocked, it will be used. Treat that as expected behavior, not betrayal.

8) How do I prevent IPv6 leaks without breaking everything?

Start by blocking outbound IPv6 from container bridge interfaces except to known gateways (proxy, update mirrors if needed).
Then add exceptions intentionally. Validate with curl -6 and DNS AAAA checks.

9) Does Docker Compose support IPv6 networks?

Yes: define networks with IPv6 enabled and explicit subnets. The key is that the daemon and host support IPv6 and that your firewall policy is consistent; a minimal fragment is sketched below.
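
A minimal Compose fragment, reusing the documentation subnet from this article; enable_ipv6 is the Compose network option for this, and the daemon/host prerequisites above still apply:

services:
  web:
    image: nginx
    networks:
      - appv6

networks:
  appv6:
    enable_ipv6: true
    ipam:
      config:
        - subnet: "2001:db8:dead:cafe::/64"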

10) What’s the quickest proof that a container is “escaping” via IPv6?

Resolve a known dual-stack domain and force IPv6: nslookup to see AAAA, then curl -6 to a public endpoint.
If it succeeds while your IPv4 controls would have blocked it, you have an egress policy gap.

Conclusion: next steps that won’t embarrass you in two quarters

Enabling IPv6 in Docker is not hard. Enabling it safely is a discipline.
The work is mostly not “Docker.” It’s routing and policy: address planning, forwarding, firewalling, DNS behavior, and observability.

Practical next steps:

  1. Write down your design choice (routed IPv6, NAT66, or ULA-only) and the security posture that comes with it.
  2. Implement host baseline checks: IPv6 address, default route, forwarding sysctls, and firewall backend consistency.
  3. Create explicit IPv6 Docker networks with known /64s; stop letting “defaults” be architecture.
  4. Enforce IPv6 egress policy on the forwarding path so IPv6 can’t bypass your controls.
  5. Add validation to your image pipeline: run curl -6 tests, DNS AAAA checks, and a minimal exposure scan in staging.

Do those, and IPv6 becomes just another transport your platform supports—boring, predictable, and not a surprise side-channel to the internet.
