Docker Desktop Networking Weirdness: LAN Access, Ports, and DNS Fixes That Actually Work

You run docker run -p 8080:80, hit localhost:8080, and it works. You hand the URL to a coworker on the same Wi‑Fi, and… nothing.
Or your container can curl the internet but can’t reach the NAS on your LAN. Or DNS flips a coin every time your VPN connects.

Docker Desktop networking isn’t “broken.” It’s just not the Linux host networking model you think you’re using.
It’s a VM, a NAT, a pile of platform-specific shims, and a handful of special names that exist mostly to save our sanity.

The mental model: why Docker Desktop is different

On Linux, Docker typically plugs containers into a bridge network on the host, uses iptables/nftables to NAT outbound traffic,
and adds DNAT rules for published ports. Your host is the host. The kernel that runs containers is the same kernel that runs your shell.

Docker Desktop on macOS and Windows is different by design. It runs a small Linux VM (or a Linux environment via WSL2),
and the containers live behind a virtualization boundary. That boundary is why “host networking” behaves weirdly,
why LAN access is not symmetrical, and why port publishing can feel like it’s aimed at localhost only.

Think in layers:

  • Your physical machine OS (macOS/Windows): has your Wi‑Fi/Ethernet interface, your VPN client, and your firewall.
  • The Docker VM / WSL2: has its own virtual NIC, its own routing table, its own iptables, and its own DNS behavior.
  • Container networks: bridges inside that Linux environment; your containers rarely touch the physical LAN directly.
  • Port publishing shim: Docker Desktop forwards ports from the host OS to the VM to the container.

So when someone says “the container can’t reach the LAN,” your first response should be: “Which layer can’t reach which layer?”

Interesting facts and short history (the stuff that explains today’s pain)

  1. Docker’s original networking model assumed Linux. Early Docker popularized the “bridge + NAT + iptables” pattern because Linux made it easy and portable.
  2. macOS can’t run Linux containers natively. Docker Desktop on macOS has always relied on a Linux VM because containers need Linux kernel features (namespaces, cgroups).
  3. Windows had two eras. First came Hyper-V based Docker Desktop; then WSL2 became the default path for better filesystem and resource behavior, with different networking quirks.
  4. host.docker.internal exists because “the host” is ambiguous. Inside a container, “localhost” is the container; Docker Desktop needed a stable hostname for “the host OS.”
  5. Published ports aren’t just iptables rules on Desktop. On Linux they are; on Desktop they’re often implemented by a user-space proxy/forwarder across the VM boundary.
  6. VPN clients love to rewrite your DNS and routes. They often install a new DNS server, block split DNS, or add a virtual interface with higher priority than Wi‑Fi.
  7. Corporate endpoint security frequently injects a local proxy. This can break container DNS, MITM TLS, or silently divert traffic to “inspection” infrastructure.
  8. ICMP lies to you in virtual networks. “Can’t ping” does not reliably mean “can’t connect,” especially when firewalls block ICMP but allow TCP.

Joke #1: Docker Desktop networking is like an org chart—there’s always one more layer than you think, and it’s never the layer accountable.

Fast diagnosis playbook (check first/second/third)

The fastest way to win is to stop guessing. Diagnose in this order, because it isolates layers with minimal effort; a combined triage sketch follows the list.

1) Is it a port publishing problem or a routing/DNS problem?

  • If localhost:PORT works on your machine but LAN clients can’t reach it, you’re likely dealing with host firewall/bind address/VPN route filtering.
  • If containers can’t resolve names or reach any external host, start with DNS and outbound routing from inside the container/VM.

2) Identify where the packet dies (host OS → VM → container)

  • From host OS: can you reach the LAN target?
  • From inside a container: can you reach the same LAN target by IP?
  • From inside a container: can you resolve the name?

3) Verify the actual bind/listen address and the forwarder

  • Is the service listening on 0.0.0.0 inside the container, or only on 127.0.0.1?
  • Is Docker publishing the port on all interfaces or only on localhost?
  • Is the host firewall blocking inbound from the LAN?

4) Check VPN and DNS override behavior early

  • If the problem appears/disappears with VPN, stop treating it as a Docker bug. It’s policy, routes, DNS, or inspection.

5) Only then tweak Docker Desktop settings

  • Changing DNS servers or network ranges can help, but do it with evidence. Otherwise you’ll just create a new mystery.
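
If you want to compress steps 1–3 into a single pass, the sketch below walks the layers in order. The container name (web), the published port (8080), and the LAN target (192.168.1.10) are placeholders; substitute your own.

cr0x@server:~$ curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/   # host OS -> forwarder -> container
cr0x@server:~$ docker port web                                                    # what Docker thinks it published
cr0x@server:~$ docker exec web sh -lc "getent hosts example.com"                  # DNS inside the container
cr0x@server:~$ docker exec web sh -lc "nc -vz 192.168.1.10 445"                   # container -> LAN target by IP

If the first two fail, it’s a publish/bind problem. If only the last two fail, it’s DNS or routing from inside the VM, and the host firewall is probably innocent.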

Practical tasks: commands, outputs, and decisions (12+)

These are the checks I actually run. Each includes: command, example output, what it means, and what decision to make next.
Commands are shown with a generic Linux-style prompt; adapt interface names, IPs, and tooling to your environment (macOS, for example, has ifconfig and lsof rather than ip and ss).

Task 1: Confirm which Docker context you’re using

cr0x@server:~$ docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
desktop-linux       Docker Desktop                            unix:///Users/me/.docker/run/docker.sock

Meaning: If you think you’re talking to Desktop but you’re on a remote daemon (or vice versa), every networking assumption will be wrong.
Decision: If the starred context isn’t what you expect, switch it: docker context use desktop-linux.

Task 2: Inspect a container’s IP and network attachment

cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES          PORTS
web            0.0.0.0:8080->80/tcp
db             5432/tcp
cr0x@server:~$ docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}} {{.Gateway}}{{end}}' web
/web 172.17.0.2 172.17.0.1

Meaning: The container lives on an internal bridge (here 172.17.0.0/16). That is not your LAN.
Decision: If you’re trying to reach 172.17.0.2 from another laptop on the Wi‑Fi, stop. Publish a port or use a different networking pattern.

Task 3: Check what address your service is actually listening on

cr0x@server:~$ docker exec -it web sh -lc "ss -lntp | head -n 5"
State  Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
LISTEN 0      4096   0.0.0.0:80         0.0.0.0:*     users:(("nginx",pid=1,fd=6))

Meaning: Listening on 0.0.0.0 is good; it accepts traffic from the container network.
If you see 127.0.0.1:80, the publish will still be set up, but connections forwarded to the container’s eth0 will be refused.
Decision: If it’s bound to localhost, fix the app config: bind to 0.0.0.0.

Task 4: Verify published port bindings on the Docker side

cr0x@server:~$ docker port web
80/tcp -> 0.0.0.0:8080

Meaning: Docker believes it published to all interfaces.
Decision: If it shows 127.0.0.1:8080, LAN clients won’t reach it. Re-run with -p 0.0.0.0:8080:80 (or fix your compose file).

Task 5: Confirm the host OS is listening on the expected port

cr0x@server:~$ ss -lntp | grep ':8080'
LISTEN 0      4096      0.0.0.0:8080     0.0.0.0:*    users:(("com.docker.backend",pid=2314,fd=123))

Meaning: On Desktop, you often see Docker’s backend process listening, not the container PID. That’s normal.
Decision: If nothing is listening, your publish didn’t apply, or another process stole the port.

Task 6: Test from the host OS to confirm the forward path works

cr0x@server:~$ curl -sS -D- http://127.0.0.1:8080/ | head
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Sat, 03 Jan 2026 09:12:52 GMT
Content-Type: text/html

Meaning: Host-to-container port forwarding works locally.
Decision: If LAN clients can’t connect, focus on firewall/VPN/bind-to-localhost, not the container app.

Task 7: Test from a LAN peer (simulate with another namespace/host if you can)

cr0x@server:~$ nc -vz 192.168.1.50 8080
Connection to 192.168.1.50 8080 port [tcp/http-alt] succeeded!

Meaning: The port is reachable from the LAN.
Decision: If it fails with “timed out,” suspect firewall or routing. If it fails with “refused,” the host is reachable but nothing is accepting on that address/port: check the bind address and whether the forwarder actually published on all interfaces.

Task 8: Check container DNS configuration

cr0x@server:~$ docker exec -it web sh -lc "cat /etc/resolv.conf"
nameserver 192.168.65.5
search localdomain
options ndots:0

Meaning: Docker Desktop often injects a stub resolver IP (example: 192.168.65.5) inside the VM network.
Decision: If this nameserver is unreachable or misbehaving (common with VPNs), override DNS at the daemon/compose level.
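
If you go for the daemon level, Docker Desktop exposes the engine’s daemon.json in its settings (the Docker Engine pane on current versions). A minimal sketch, assuming 10.8.0.53 is a VPN-provided resolver and your policy allows a public fallback:

{
  "dns": ["10.8.0.53", "1.1.1.1"]
}

Restart the engine after changing it, then re-check /etc/resolv.conf inside a fresh container. Per-project overrides in Compose (Task 15 below) are usually the safer first step, because they don’t change every project on the machine.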

Task 9: Test DNS resolution inside the container (don’t guess)

cr0x@server:~$ docker exec -it web sh -lc "getent hosts example.com | head -n 2"
2606:2800:220:1:248:1893:25c8:1946 example.com
93.184.216.34 example.com

Meaning: DNS works well enough to resolve both AAAA and A.
Decision: If it hangs or returns nothing, you have a DNS path problem. Next step: try resolving using a specific server (if you have tools installed) or override resolvers.

Task 10: Test direct IP connectivity to a LAN resource from inside the container

cr0x@server:~$ docker exec -it web sh -lc "nc -vz 192.168.1.10 445"
192.168.1.10 (192.168.1.10:445) open

Meaning: Routing from container → VM → host OS → LAN works for that destination.
Decision: If IP works but name fails, it’s DNS. If neither works, it’s routing/VPN/policy.

Task 11: Check the container’s default route (basic but decisive)

cr0x@server:~$ docker exec -it web sh -lc "ip route"
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link  src 172.17.0.2

Meaning: The container routes to the bridge gateway. The gateway then decides how to reach your LAN/internet.
Decision: If the default route is missing or wrong, you’ve built a custom network setup; back up and test with a vanilla bridge network.

Task 12: Check whether you’re colliding with a corporate/VPN subnet

cr0x@server:~$ ip route | head -n 12
default via 192.168.1.1 dev wlan0
10.0.0.0/8 via 10.8.0.1 dev tun0
172.16.0.0/12 via 10.8.0.1 dev tun0
192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.50

Meaning: If your Docker networks use 172.16.0.0/12 and your VPN also routes 172.16.0.0/12, you’ve created ambiguous routing.
Desktop is especially sensitive to overlap because it’s already NATing.
Decision: Change Docker’s internal subnet ranges to avoid overlap with corporate routes.

Task 13: Inspect Docker networks and their subnets

cr0x@server:~$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
f1e2d3c4b5a6   host      host      local
123456789abc   none      null      local
cr0x@server:~$ docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'
172.17.0.0/16

Meaning: You now know which subnets Docker is consuming.
Decision: If this overlaps with VPN routes or your LAN, move it.

Task 14: Validate that the container can reach the host OS via Docker Desktop’s special name

cr0x@server:~$ docker exec -it web sh -lc "getent hosts host.docker.internal"
192.168.65.2    host.docker.internal

Meaning: The special mapping exists and points at the host-side endpoint Docker provides.
Decision: If this name doesn’t resolve, you’re on an older setup or a custom network mode, or something has tampered with DNS inside the container. Fall back to explicit IPs only as a last resort.

LAN access patterns: what works, what lies

There are three common asks:

  • LAN → your containerized service (coworker wants to hit your dev server).
  • Container → LAN resource (container needs to reach NAS, printer, internal API, Kerberos, whatever).
  • Container → host OS (container calls a service running on your laptop).

Pattern A: LAN → container via published ports (the only sane default)

Publish ports on the host OS, not by trying to hand out container IPs.
With Docker Desktop you cannot treat container IPs as routable on the physical LAN. They live behind NAT, inside a VM, behind another NAT if your OS is also doing something clever.

What to do:

  • Bind to all interfaces: -p 0.0.0.0:8080:80, or "8080:80" in Compose, and make sure nothing (an override file, a tool default) narrows it to localhost-only publishing; see the sketch after this list.
  • Open the host firewall for that port (and limit scope; don’t expose your dev database to Starbucks Wi‑Fi).
  • If your VPN forbids inbound from LAN while connected, accept reality: test without VPN or use a proper dev environment elsewhere.
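
A minimal publish-then-verify sketch (container name and image are just examples):

cr0x@server:~$ docker run -d --name web -p 0.0.0.0:8080:80 nginx:alpine
cr0x@server:~$ docker port web
80/tcp -> 0.0.0.0:8080

If docker port shows 127.0.0.1:8080 instead, something in your run command or Compose files narrowed the bind; fix that before you touch the firewall.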

Pattern B: container → LAN resources (routing works until it doesn’t)

Containers reaching your LAN usually works out of the box, because Docker Desktop NATs outbound traffic through the host OS.
Then you connect a VPN, and the host OS changes DNS and routes. Suddenly your container can’t resolve or can’t reach subnets that are now “owned” by the VPN.

When it fails, it fails in a few repeatable ways:

  • Subnet overlap: Docker chooses a private range that your VPN routes. Packets disappear into the tunnel.
  • Split DNS mismatch: host resolves internal names via corporate DNS, but containers are stuck on a stub resolver that doesn’t forward split domains correctly.
  • Firewall policy: corporate endpoint denies traffic from “unknown” virtual interfaces.

Pattern C: container → host OS services (use the special names)

Use host.docker.internal. That is what it’s for.
It’s not elegant, but it’s stable across DHCP changes and less fragile than hardcoding 192.168.x.y.

If you’re on plain Linux Engine (not Desktop) the name isn’t there by default, though recent engines let you add it with --add-host=host.docker.internal:host-gateway; on Desktop you generally get it for free.
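
A sketch of both sides, assuming a hypothetical service listening on host port 3000 with a /health endpoint (both made up for illustration). On Desktop the name just works:

cr0x@server:~$ docker run --rm alpine:3.20 sh -lc "wget -qO- http://host.docker.internal:3000/health"

And if you need the same name on plain Linux Engine, Compose can map it explicitly:

services:
  web:
    image: alpine:3.20
    extra_hosts:
      - "host.docker.internal:host-gateway"

The host-gateway keyword resolves to the host address as Docker sees it; it’s supported on recent engines, which keeps your code identical across laptops and Linux boxes.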

Ports: publishing, binding addresses, and why coworkers can’t hit your dev server

Published ports are the currency of “make my container reachable.” Everything else is debt.

Localhost isn’t a moral virtue, it’s a bind address

Two different things get confused constantly:

  • Where the app listens inside the container (127.0.0.1 vs 0.0.0.0).
  • Where Docker binds the published port on the host (127.0.0.1:PORT vs 0.0.0.0:PORT).

If either one is “localhost-only,” LAN clients lose. And you’ll waste time blaming the other layer.

Compose tip: don’t accidentally bind to localhost

Compose supports explicit host IP binding. This is great when you mean it and awful when you don’t.

cr0x@server:~$ cat docker-compose.yml
services:
  web:
    image: nginx:alpine
    ports:
      - "127.0.0.1:8080:80"

Meaning: That service is intentionally reachable only from the host OS.
Decision: If you want LAN access, change it to "8080:80" or "0.0.0.0:8080:80", and then handle firewall scope properly.
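
If you want the intent to be unmistakable, Compose’s long port syntax lets you state the host IP explicitly; a sketch of the LAN-reachable variant:

services:
  web:
    image: nginx:alpine
    ports:
      - target: 80
        published: 8080
        host_ip: 0.0.0.0
        protocol: tcp

Reviewers can then see at a glance whether a service is meant to be localhost-only or LAN-visible, instead of guessing from a terse "8080:80".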

When published ports still aren’t reachable from the LAN

If Docker shows 0.0.0.0:8080 but LAN clients still can’t connect, check these (a firewall example follows the list):

  • Host firewall: macOS Application Firewall, Windows Defender Firewall, third-party endpoint tools.
  • Interface selection: the port may be bound, but the OS may block inbound on Wi‑Fi while allowing it on Ethernet (or vice versa).
  • VPN policy: some clients enforce “block local LAN” to reduce lateral movement risk.
  • NAT hairpin quirks: some networks don’t let you reach your own public IP from inside; that’s not Docker, that’s your router doing its best.
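
If Windows Defender Firewall turns out to be the blocker, you can allow the port with scope limits from an elevated prompt; the rule name and subnet below are examples, not requirements:

netsh advfirewall firewall add rule name="dev-web-8080" dir=in action=allow protocol=TCP localport=8080 remoteip=192.168.1.0/24

Scoping remoteip to your office or home subnet keeps the rule from becoming “expose my dev box to every network I ever join.” On macOS, the Application Firewall prompt (or its settings pane) plays the equivalent role.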

Joke #2: Nothing improves teamwork like telling someone “it works on my machine” and meaning it as a network architecture statement.

DNS fixes: from “it’s flaky” to “it’s deterministic”

DNS is where Docker Desktop weirdness goes to become folklore. The problem is usually not “Docker can’t do DNS.”
The problem is: you now have at least two resolvers (host OS and VM), sometimes three (VPN’s), and they don’t agree on split-horizon rules.

Failure mode 1: container DNS resolves public names but not internal ones

Classic corporate split DNS: git.corp only resolves via internal DNS servers, reachable only on VPN.
Your host OS does the right thing. Your container uses a stub resolver that doesn’t forward the right domains to the right servers.

Fix options, from best to worst:

  1. Configure Docker Desktop DNS to use your internal resolvers when on VPN, and public resolvers when off VPN. This is sometimes a manual toggle because “auto” can be unreliable.
  2. Per-project DNS in Compose:
    • Set dns: to the IPs of resolvers that can answer both internal and external names (often your VPN-provided ones).
  3. Hardcode /etc/hosts inside containers. This is a tactical hack, not a strategy.

Task 15: Override DNS in Compose and verify inside container

cr0x@server:~$ cat docker-compose.yml
services:
  web:
    image: alpine:3.20
    command: ["sleep","infinity"]
    dns:
      - 10.8.0.53
      - 1.1.1.1
cr0x@server:~$ docker compose up -d
[+] Running 1/1
 ✔ Container web-1  Started
cr0x@server:~$ docker exec -it web-1 sh -lc "cat /etc/resolv.conf"
nameserver 10.8.0.53
nameserver 1.1.1.1

Meaning: The container is now using the DNS servers you specified.
Decision: If internal domains now resolve, you’ve proven it’s a DNS path/split DNS issue, not an application issue.

Failure mode 2: DNS works, but only sometimes (timeouts, slow builds, flaky package installs)

Intermittent DNS failures often come from:

  • VPN DNS servers that drop UDP under load or require TCP for large responses.
  • Corporate security agents intercepting DNS and occasionally timing out.
  • MTU/MSS issues on tunneled links (DNS over UDP fragments and then dies quietly).

Task 16: Detect DNS timeouts vs NXDOMAIN inside container

cr0x@server:~$ docker exec -it web-1 sh -lc 'time getent hosts pypi.org >/dev/null; echo $?'
real    0m0.042s
user    0m0.000s
sys     0m0.003s
0

Meaning: Fast success.
Decision: If this takes seconds or fails intermittently, prefer changing resolvers (or forcing TCP via a different resolver) over retrying forever in your build scripts.
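
To separate “UDP is flaky on this path” from “the resolver is broken,” a throwaway container with dig can force TCP; a sketch, assuming outbound TCP/53 is allowed by policy:

cr0x@server:~$ docker run --rm alpine:3.20 sh -lc "apk add -q bind-tools && dig +tcp +time=3 @1.1.1.1 pypi.org +short"

If the TCP query succeeds while normal resolution inside your build container times out, the problem is the UDP path (MTU, interception), not the names. If DNS is too broken to even apk add, run the same dig from the host or from an image that already has it installed.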

Failure mode 3: internal service works by IP but not by name (and only on VPN)

That’s split DNS again, but with extra spice: sometimes the VPN pushes a DNS suffix and search domains to the host OS,
but Docker Desktop’s resolver doesn’t inherit them cleanly.

Task 17: Confirm search domains inside container

cr0x@server:~$ docker exec -it web-1 sh -lc "cat /etc/resolv.conf"
nameserver 10.8.0.53
search corp.example
options ndots:0

Meaning: Search domain is present.
Decision: If it’s missing, FQDNs may work while short names fail. Either use FQDNs or configure search domains at the container level.
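
Compose can set search domains directly; a sketch using the corp.example domain from the output above:

services:
  web:
    image: alpine:3.20
    command: ["sleep","infinity"]
    dns_search:
      - corp.example

After docker compose up -d, check /etc/resolv.conf inside the container again; short names like git should now expand to git.corp.example.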

VPNs, split tunnels, and corporate endpoint “helpfulness”

VPNs cause two broad classes of issues: routing changes and DNS changes. Docker Desktop amplifies both because it’s effectively a nested network.

Routing: when the VPN steals your RFC1918 space

Many corporate networks route large private ranges like 10.0.0.0/8 or 172.16.0.0/12 through the tunnel.
Docker defaults often use 172.17.0.0/16 for the bridge and other 172.x ranges for user-defined networks.

On a pure Linux host, you can usually manage this with custom bridge subnets and iptables. On Desktop, you can still do it, but you must treat it as a first-class configuration.

Task 18: Create a user-defined network on a “safe” subnet

cr0x@server:~$ docker network create --subnet 192.168.240.0/24 devnet
9f8c7b6a5d4e3c2b1a0f
cr0x@server:~$ docker run -d --name web2 --network devnet -p 8081:80 nginx:alpine
b1c2d3e4f5a6
cr0x@server:~$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web2
192.168.240.2

Meaning: You’ve moved the container network away from common corporate routes.
Decision: If VPN-related reachability improves, institutionalize a subnet policy for dev networks.
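
To make the safe range the default instead of a per-network flag, the engine’s daemon.json accepts address pools; a sketch (the range is an example, pick one your VPN doesn’t route):

{
  "default-address-pools": [
    { "base": "192.168.240.0/20", "size": 24 }
  ]
}

On Docker Desktop this lives in the Docker Engine settings pane; new user-defined networks then carve /24s out of that pool. The default docker0 bridge is configured separately (the "bip" setting), so move it too if it overlaps.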

Endpoint security: the invisible middlebox

Some endpoint tools treat virtualization NICs as “untrusted.” They may block inbound or outbound, or force traffic through a proxy.
Symptoms include: published ports work only when the security agent is paused, DNS becomes slow, or internal services fail TLS due to inspection.

You can’t “SRE” your way out of policy. What you can do is get proof quickly, then escalate with concrete evidence.

Task 19: Prove it’s local firewall/policy with a quick inbound test

cr0x@server:~$ python3 -m http.server 18080 --bind 0.0.0.0
Serving HTTP on 0.0.0.0 port 18080 (http://0.0.0.0:18080/) ...

Meaning: This is not Docker. This is a plain host process.
Decision: If a LAN peer can’t reach this either, stop debugging Docker and fix firewall/VPN “block local network” settings.

Windows + WSL2 specifics (where packets go to retire)

On modern Windows, Docker Desktop often runs its engine inside WSL2. WSL2 has its own virtual network (NAT behind Windows).
That means you can have: container NAT behind Linux, behind WSL2 NAT, behind Windows firewall rules. It’s NAT all the way down.

Typical Windows symptoms

  • Published port reachable from Windows localhost but not from LAN. Usually Windows Defender Firewall inbound rules, or the binding is loopback-only.
  • Containers can’t reach a LAN subnet that Windows can reach. Usually VPN routes are not propagated the way you think into WSL2, or policy blocks WSL interfaces.
  • DNS differs between Windows and WSL2. WSL2 writes its own /etc/resolv.conf; sometimes it points at a Windows-side resolver that can’t see VPN DNS.

Task 20: Check WSL2’s resolv.conf and route table (from inside WSL)

cr0x@server:~$ cat /etc/resolv.conf
nameserver 172.29.96.1
cr0x@server:~$ ip route | head
default via 172.29.96.1 dev eth0
172.29.96.0/20 dev eth0 proto kernel scope link src 172.29.96.100

Meaning: WSL2 is using a Windows-side virtual gateway/resolver.
Decision: If DNS breaks only on VPN, consider configuring WSL2 DNS behavior (static resolv.conf) and aligning Docker’s DNS with the VPN resolvers.
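
One way to stop WSL2 from regenerating resolv.conf, if you decide you really need static DNS there (10.8.0.53 below is a placeholder for your VPN resolver):

# /etc/wsl.conf inside the WSL2 distro
[network]
generateResolvConf = false

cr0x@server:~$ sudo rm -f /etc/resolv.conf    # the generated file may be a symlink; remove it first
cr0x@server:~$ sudo sh -c 'printf "nameserver 10.8.0.53\nnameserver 1.1.1.1\n" > /etc/resolv.conf'

Then run wsl --shutdown from Windows and restart the distro so the setting takes effect. Treat this as a deliberate, documented change: it affects everything in that distro, not just Docker.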

macOS specifics (pf, vmnet, and the illusion of localhost)

On macOS, Docker Desktop runs a Linux VM and forwards ports back to macOS.
Your containers are not first-class citizens on your physical LAN. They’re guests behind a very polite concierge.

What macOS users trip over

  • “It works on localhost but not from my phone.” Usually macOS firewall or port published to loopback only.
  • DNS changes when Wi‑Fi changes networks. The host resolver changes quickly; the VM sometimes lags or caches oddness.
  • Corporate VPN blocks local subnet access. Your phone can’t reach your laptop while the VPN is connected, regardless of Docker.

Task 21: Confirm the host OS has the right IP and interface for LAN testing (shown with Linux tooling; on macOS, ifconfig answers the same question)

cr0x@server:~$ ip addr show | sed -n '1,25p'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.1.50/24 brd 192.168.1.255 scope global dynamic wlan0

Meaning: Your LAN IP is 192.168.1.50.
Decision: This is the address a LAN peer should use to hit your published port. If peers are using an old IP, they’re testing the wrong machine.

Common mistakes: symptom → root cause → fix

1) Symptom: localhost:8080 works, coworker can’t reach 192.168.x.y:8080

  • Root cause: Port published to 127.0.0.1 only, or host firewall blocks inbound.
  • Fix: Publish on all interfaces (-p 0.0.0.0:8080:80), then allow inbound for that port on the host firewall for the correct network profile.

2) Symptom: container can reach internet but not 192.168.1.10 (LAN NAS)

  • Root cause: VPN “block local LAN” policy or routes pushing LAN subnets into the tunnel.
  • Fix: Test with VPN disconnected; if that fixes it, request split-tunnel exceptions or run the workload in a proper environment (remote dev VM, staging). Don’t fight policy with hacks.

3) Symptom: container can reach LAN IPs but internal hostnames fail

  • Root cause: Split DNS not propagated into Docker Desktop; containers using a stub resolver that can’t see internal zones.
  • Fix: Configure container/project DNS (dns: in Compose) to include corporate DNS servers reachable on VPN; verify with getent hosts.

4) Symptom: DNS flaps during builds (apt/npm/pip failing randomly)

  • Root cause: Unreliable UDP DNS across VPN, MTU issues, endpoint interception.
  • Fix: Prefer stable resolvers; use two resolvers (internal + public) where policy allows; reduce fragmentation risk by addressing MTU at the VPN layer if you control it.

5) Symptom: service is published, but you get “connection refused” from LAN

  • Root cause: App is listening only on container localhost, or wrong container port published.
  • Fix: Check ss -lntp inside container; fix bind address; verify docker port and container port mapping.

6) Symptom: can’t connect to host.docker.internal from container

  • Root cause: DNS override removed the special name, or using a network mode where Desktop doesn’t inject it.
  • Fix: Avoid overriding DNS blindly; if you must, ensure the special name still resolves (or add an explicit host entry via extra_hosts as a last resort).

7) Symptom: everything breaks only on one Wi‑Fi network

  • Root cause: That network isolates clients (AP isolation) or blocks inbound connections between devices.
  • Fix: Use a proper network (or wired), or run the service behind a reverse tunnel; don’t assume “same Wi‑Fi” means “mutually reachable.”

Three corporate mini-stories (realistic, anonymized, painful)

Mini-story 1: The incident caused by a wrong assumption

A product team built a demo environment on laptops for an on-site customer workshop. The plan was simple: run a few services in Docker Desktop, publish ports,
and have attendees connect over the hotel Wi‑Fi. Everyone had done “-p 8080:8080” a thousand times.
The wrong assumption was that Docker Desktop behaves like a Linux host on a flat LAN.

The morning of the workshop, half the attendees couldn’t connect. The services were up. Local curl worked. The presenters could reach each other sometimes.
People started rebooting like it was 1998. The networking issue wasn’t Docker; it was the hotel Wi‑Fi doing client isolation—devices could reach the internet but not each other.

The second wrong assumption arrived immediately after: “Let’s just use container IPs and avoid port mapping.”
They tried to hand out 172.17.x.x addresses visible inside the Docker VM, which of course were not reachable from other laptops.
That led to ten minutes of confident nonsense and one deeply regretted whiteboard diagram.

The fix was boring: create a local hotspot on a phone that allowed peer-to-peer traffic,
and explicitly publish required ports on 0.0.0.0 with a quick firewall allow rule.
The services were fine. The assumption about “same network” was the actual outage.

Mini-story 2: The optimization that backfired

A platform team wanted faster CI runs on developer machines. They noticed frequent DNS lookups during builds and decided to “optimize” by forcing Docker containers to use a public DNS resolver.
It looked great in a coffee-shop test: faster resolves, fewer timeouts, nice graphs.

Then the first engineer tried to build while on VPN. Internal package registries were only reachable via corporate DNS and internal routes.
Suddenly, builds failed with “host not found” even though the host OS resolved fine. The workaround became “disconnect VPN,”
which is a great way to create the next incident.

The situation got worse because some internal names resolved publicly to placeholder IPs (for security reasons), so “DNS succeeded” but connections went to a blackhole.
Debugging was brutal: you’d see A records, the app would time out, and everyone blamed TLS, proxies, and Docker in random order.

The eventual fix was to stop optimizing DNS globally. They moved to per-project DNS settings:
internal resolvers first when on VPN, public resolvers only when off VPN.
They also documented how to test resolution inside containers, because “it resolves on my host” is not a data point in a nested network.

Mini-story 3: The boring but correct practice that saved the day

A security-sensitive service used Docker Desktop for local integration testing. It needed to call an internal API and also accept inbound webhooks from a test harness on another machine in the same office.
The team had a habit I respect: before changing settings, they captured “known-good” network evidence—routes, DNS config, port bindings—when it worked.

One Monday, everything broke after an OS update. Containers couldn’t resolve internal names. Webhooks from a LAN machine stopped arriving.
Instead of guessing, they compared current state to the baseline: published ports were now bound to localhost only, and the DNS stub inside containers pointed at a new VM-side resolver IP that wasn’t forwarding split DNS.

They fixed the port binding in Compose, then pinned container DNS to the internal resolvers while on VPN.
Because they had the baseline, they could show the endpoint security team exactly what changed and why.
The incident didn’t turn into a week-long blame festival.

That practice—capture baseline, diff when broken—is as exciting as watching paint dry.
It also works.

Checklists / step-by-step plan (boring on purpose)

Checklist 1: Expose a Docker Desktop service to your LAN reliably

  1. Ensure the app listens on 0.0.0.0 inside the container (ss -lntp).
  2. Publish the port on all host interfaces: -p 0.0.0.0:8080:80 (or Compose "8080:80").
  3. Confirm Docker sees the mapping: docker port CONTAINER.
  4. Confirm the host OS is listening on that port: ss -lntp | grep :8080.
  5. Test locally: curl http://127.0.0.1:8080.
  6. Test from a LAN peer: nc -vz HOST_LAN_IP 8080.
  7. If LAN test fails, run a non-Docker listener (python3 -m http.server) to isolate firewall/VPN issues from Docker issues; a combined script sketch follows.
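
The same checklist as a script sketch for repeat use. It assumes a Linux-ish shell (WSL2 or a Linux host); on macOS swap ss for lsof. Container name and port are placeholders.

#!/bin/sh
# Checklist 1 in one pass; adjust CONTAINER and PORT first.
CONTAINER=web
PORT=8080

docker exec "$CONTAINER" sh -lc "ss -lntp || netstat -lntp"     # step 1: bind address inside the container
docker port "$CONTAINER"                                        # step 3: what Docker published
ss -lntp | grep ":$PORT"                                        # step 4: host-side listener
curl -sS -o /dev/null -w "local HTTP check: %{http_code}\n" "http://127.0.0.1:$PORT/"   # step 5: local path
echo "From a LAN peer, run: nc -vz <this-host-LAN-IP> $PORT"    # step 6: the part only a peer can prove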

Checklist 2: Make containers reach internal LAN resources (NAS, internal APIs)

  1. From host OS, verify the target is reachable by IP.
  2. From inside the container, test IP connectivity (nc -vz or curl).
  3. If IP fails only on VPN, check route overlap (ip route) and VPN policies (“block local LAN”).
  4. If IP works but name fails, check /etc/resolv.conf and resolve with getent hosts.
  5. Override DNS per-project using Compose dns: if needed.
  6. Avoid subnet overlap: move Docker networks to a range your VPN doesn’t route.

Checklist 3: Stabilize DNS for dev builds (pip/npm/apt stop flaking)

  1. Measure resolution time inside container with time getent hosts.
  2. Inspect current resolvers in /etc/resolv.conf.
  3. If on VPN, prefer the VPN-provided internal resolvers (and add a public fallback only if permitted).
  4. Don’t hardcode public DNS globally across all projects; you’ll break split DNS workflows.
  5. Re-test inside container after changes; don’t trust host OS results.

FAQ

1) Why can’t I just use the container IP from another machine on my LAN?

Because on Docker Desktop that IP is on an internal bridge inside a Linux VM (or WSL2 environment). Your LAN doesn’t route to it. Publish ports instead.

2) Why does -p 8080:80 work locally but not from my phone?

Usually either the port is bound to localhost only (explicitly or via Compose), or your host firewall/VPN blocks inbound connections from the LAN.

3) What’s the difference between 127.0.0.1 and 0.0.0.0 in this context?

127.0.0.1 means “only accept connections from this same network stack.” 0.0.0.0 means “listen on all interfaces.”
You need 0.0.0.0 if you expect other devices to connect.

4) Is --network host the fix for Docker Desktop networking?

No. On Docker Desktop, “host network” is not the same as Linux host networking and often won’t give you what you want. Default to bridge + published ports.

5) Why does DNS work on my host but not inside containers?

The container may be using a different resolver path (a stub inside the VM), and it may not inherit your VPN’s split DNS configuration.
Verify with cat /etc/resolv.conf and getent hosts inside the container, then override DNS per-project if needed.

6) Should I set Docker Desktop DNS to a public resolver to “fix everything”?

Only if you never need internal DNS. Public resolvers can break corporate domains, internal registries, and split-horizon setups.
Use project-specific DNS or conditional behavior tied to VPN state.

7) My container can’t reach a LAN device only when the VPN is connected. Is Docker at fault?

Almost always no. VPN clients can route private subnets through the tunnel or block local LAN access.
Prove it by testing the same connection from the host OS and by disconnecting VPN as a control.

8) What’s the most reliable way for a container to call a service on my laptop?

Use host.docker.internal and keep it consistent across environments. Avoid hardcoded host IP addresses that change with Wi‑Fi networks.

9) How do I know whether the problem is firewall vs Docker port mapping?

Run a non-Docker listener on the host (like python3 -m http.server). If the LAN can’t reach that, Docker isn’t the problem.

10) What’s a good principle for Desktop networking sanity?

Treat Docker Desktop as “containers behind a VM behind your OS.” Publish ports, avoid subnet overlap, and validate DNS from inside the container.

Conclusion: next steps you can do today

Docker Desktop networking stops being weird when you stop expecting it to be Linux host networking. It’s a VM boundary with a forwarding layer.
Once you accept that, most issues collapse into three buckets: bind addresses, firewall/VPN policy, and DNS/resolver drift.

Practical next steps:

  1. Pick one test service, publish it on 0.0.0.0, and verify LAN reachability end-to-end using ss, curl, and nc.
  2. Capture a baseline when things work: docker port, container /etc/resolv.conf, and host routing table.
  3. If you use a VPN, stop letting Docker networks overlap with corporate routes. Standardize a “safe” subnet range for dev networks.
  4. Make DNS a per-project configuration when internal names matter. Global “fixes” are how you create cross-team breakage.

One paraphrased idea from Werner Vogels (Amazon CTO): “Everything fails; design your systems—and your operations—to absorb that failure.”
Docker Desktop networking isn’t special. It’s just failure with extra layers.
