Docker macvlan: Can’t Reach the Container — Fix the Classic Routing Trap

You built a macvlan network so your container can live like a “real” machine on the LAN. It gets its own IP, shows up in DHCP logs,
and other hosts can reach it. Then you try to curl it from the Docker host and… nothing. No ping, no TCP connect, no joy.

This is the classic macvlan routing trap: the host can’t reach its own macvlan children through the same physical interface by default.
It looks like a firewall problem. It smells like an ARP problem. It’s neither—until you make it one by guessing.

What you’re seeing (and why it’s so confusing)

The failure mode is usually very specific:

  • The container has an IP on your LAN (say 192.168.10.50).
  • Other machines on the LAN can ping/curl it just fine.
  • The Docker host itself cannot ping/curl it.
  • Sometimes, the container also cannot reach the host on the host’s LAN IP.

If you’re coming from bridge networks (docker0) you expect the host to be able to talk to everything.
Macvlan violates that expectation on purpose. It’s not “Docker being weird”; it’s the Linux macvlan driver doing exactly what it was
designed to do: create multiple virtual interfaces with unique MAC addresses, but with specific traffic isolation properties.

The trap: you put the container “directly” on the LAN, but you did not put the host on that same L2 segment in a way that allows host↔child communication.
The host’s physical NIC is the parent, and the macvlan interfaces are children. In common macvlan modes, Linux won’t hairpin traffic
from the parent to its own macvlan children.

Joke #1: macvlan is like giving your containers their own front doors—then realizing the landlord locked the internal hallway.

Macvlan behavior in plain English: the host is not “on” that network

When you create a macvlan network in Docker, Docker asks the kernel to create macvlan interfaces tied to a parent interface
(for example, eth0 or enp3s0). Those macvlan interfaces live in the container namespaces, each with its own MAC address.
To the switch, they look like separate machines plugged into the same port.

The key nuance is local delivery: a Linux host won’t necessarily route/bridge packets from its parent interface to its own macvlan children.
So the host tries to reach 192.168.10.50, ARPs out eth0, and expects the reply. But the kernel’s macvlan rules may prevent
that traffic from looping back into the macvlan interface that belongs to the container. It’s a deliberate limitation: macvlan is primarily
about giving endpoints L2 identities, not about making the parent talk to them.

That’s why you see the weird asymmetry:
other LAN hosts can reach the container because their traffic arrives from the wire and gets delivered into the macvlan interface.
But the host’s own traffic originates “above” the parent interface and is subject to local filtering semantics.

Macvlan modes and the one Docker usually implies

Linux supports multiple macvlan modes: bridge, private, vepa, passthru.
Docker’s macvlan driver uses bridge mode by default (you can pick another with -o macvlan_mode), but the host↔endpoint limitation remains the classic gotcha.

If you want host↔container connectivity, you need to explicitly add a host-side interface on the macvlan network (a “shim” macvlan on the host),
and route the container subnet via it. Or you switch to ipvlan L3 or use a different networking model.

Interesting facts and history that actually matter

  1. Macvlan is a Linux kernel feature, not a Docker invention; Docker is just calling into it. That matters when you debug: think kernel, not “Docker magic.”
  2. Macvlan was popularized by virtualization and telco workloads where many endpoints need distinct L2 identities on a shared uplink.
  3. “One switch port, many MACs” can trigger enterprise switch security features like port-security MAC limits. This is why macvlan works in the lab and fails spectacularly in a corporate closet.
  4. The host↔macvlan-child communication limitation is well-known and longstanding; it’s not a regression. It’s a side effect of how macvlan hooks into the RX/TX path.
  5. Ipvlan was added later as an alternative that can reduce MAC sprawl: multiple endpoints share the parent’s MAC, shifting identity to L3. It’s sometimes the grown-up choice.
  6. Hairpin traffic is a separate concept from macvlan’s host isolation. You can enable hairpin on bridges; it doesn’t automatically “fix” macvlan host reachability.
  7. Promiscuous mode is often required on hypervisors (VMware, some cloud VPCs) to pass multiple MAC addresses through a virtual NIC. Without it, macvlan silently drops packets and you blame Docker.
  8. ARP flux and asymmetric routing show up more often with macvlan because you now have multiple addresses on a single physical segment and the kernel must pick source IPs and routes carefully.

Fast diagnosis playbook (check first/second/third)

The goal is to answer three questions quickly:
(1) Is the container actually on the LAN? (2) Is the host↔container path blocked by the macvlan trap?
(3) Is something else (VLAN, switch security, hypervisor filtering, firewall) making it worse?

First: confirm the symptom is the classic trap

  • From another LAN host, ping/curl the container IP.
  • From the Docker host, ping/curl the container IP.
  • If it works from other hosts but not from the Docker host, you’re probably in the trap.

Second: validate L2/L3 basics (don’t “fix” what isn’t broken)

  • Confirm the container IP, subnet mask, and gateway are correct.
  • Confirm the host route table doesn’t already contain a conflicting route.
  • Check ARP behavior on the host for the container IP (is it incomplete? wrong MAC?).

Third: check the environment constraints

  • Are you on a VM that blocks multiple MACs? If yes, enable promisc / forged transmits / MAC spoofing as appropriate.
  • Are you on a managed switch with port-security or MAC address limits? If yes, you may need to raise the limit or avoid macvlan.
  • Are you using VLAN trunks? Verify tagging is correct and the parent interface is the right VLAN sub-interface.

Fourth: pick a fix pattern

  • If you need host↔container traffic: add a host-side macvlan interface and a route.
  • If you need fewer MACs: consider ipvlan L3 and route instead.
  • If you need simple service exposure: consider bridge networking + port publishing, and move on with your life.

Practical tasks: commands, outputs, and decisions (12+)

These are the tasks I actually run when someone says “macvlan is broken.” Each one includes a representative output and what decision it drives.
Adjust interface names, subnets, and container IDs to your environment.

Task 1: Confirm the Docker network is really macvlan and note its parent

cr0x@server:~$ docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
a1b2c3d4e5f6   bridge         bridge    local
d4e5f6a1b2c3   host           host      local
e5f6a1b2c3d4   none           null      local
f6a1b2c3d4e5   lan-macvlan    macvlan   local
cr0x@server:~$ docker network inspect lan-macvlan | sed -n '1,120p'
[
  {
    "Name": "lan-macvlan",
    "Id": "f6a1b2c3d4e5...",
    "Driver": "macvlan",
    "Options": {
      "parent": "enp3s0"
    },
    "IPAM": {
      "Config": [
        {
          "Subnet": "192.168.10.0/24",
          "Gateway": "192.168.10.1"
        }
      ]
    }
  }
]

Decision: if Driver isn’t macvlan or parent is wrong (wrong NIC, wrong VLAN sub-interface),
stop and fix that first. Wrong parent = debugging theater.

Task 2: Confirm the container’s IP and interface inside the namespace

cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.ID}}\t{{.Networks}}'
NAMES         ID            NETWORKS
dns01         7c1d2a9f0b11  lan-macvlan
cr0x@server:~$ docker exec -it dns01 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
2: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:c0:a8:0a:32 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.50/24 brd 192.168.10.255 scope global eth0

Decision: if the container has no IP, or it’s not in the expected subnet, your problem is IPAM/DHCP/static config, not routing trap.

Task 3: Test reachability from a different LAN host (control test)

cr0x@server:~$ ping -c 2 192.168.10.50
PING 192.168.10.50 (192.168.10.50) 56(84) bytes of data.
From 192.168.10.20 icmp_seq=1 Destination Host Unreachable
From 192.168.10.20 icmp_seq=2 Destination Host Unreachable

--- 192.168.10.50 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss

The output above is the “bad” result, captured from the Docker host itself; the control test is the same command run from another LAN box if you have one.

Decision: if nobody on the LAN can reach it, you likely have switch/hypervisor MAC filtering, wrong VLAN, or container firewall.
If others can reach it but the host can’t, keep going: classic trap.

Task 4: Check the host route table for the container subnet

cr0x@server:~$ ip route show
default via 192.168.10.1 dev enp3s0 proto dhcp src 192.168.10.20 metric 100
192.168.10.0/24 dev enp3s0 proto kernel scope link src 192.168.10.20 metric 100

Decision: if the route to 192.168.10.0/24 points to enp3s0, the host will ARP on the parent interface.
That’s normal. It also sets up the trap: the host thinks it can directly reach the container on-link, but local delivery to macvlan children is blocked.

Task 5: Inspect ARP/neighbor state when pinging the container

cr0x@server:~$ ip neigh show 192.168.10.50
192.168.10.50 dev enp3s0 INCOMPLETE

Decision: INCOMPLETE suggests ARP requests aren’t being answered in a way the host accepts.
In the macvlan trap, the ARP reply may never be delivered to the host stack.

Task 6: Capture ARP on the host parent interface while you ping

cr0x@server:~$ sudo tcpdump -ni enp3s0 arp and host 192.168.10.50
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on enp3s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
12:10:01.123456 ARP, Request who-has 192.168.10.50 tell 192.168.10.20, length 28
12:10:02.125001 ARP, Request who-has 192.168.10.50 tell 192.168.10.20, length 28

Decision: if you see requests but no replies, either replies aren’t being emitted (container not seeing it) or are being filtered
by the host/hypervisor/switch path.

Task 7: Capture ARP inside the container while the host pings it

cr0x@server:~$ docker exec -it dns01 tcpdump -ni eth0 arp and host 192.168.10.20
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
12:10:01.123789 ARP, Request who-has 192.168.10.50 tell 192.168.10.20, length 28

Decision: if the container sees the ARP request, the L2 path is fine; next check whether its reply makes it back to the host. In the pure macvlan trap the request may never arrive at all, because host-originated broadcasts are not delivered to the parent’s own macvlan children. Either way, a host stuck at INCOMPLETE points at the host↔macvlan limitation or local firewalling.

Task 8: Verify the host isn’t silently blocking with nftables/iptables

cr0x@server:~$ sudo nft list ruleset | sed -n '1,120p'
table inet filter {
  chain input {
    type filter hook input priority 0; policy accept;
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
    iifname "docker0" oifname "docker0" accept
  }
}

Decision: if your forward policy is drop and you rely on forwarding between host interfaces,
you’ll need explicit rules. Note: the classic macvlan host trap persists even with a permissive firewall, but a strict firewall can stack extra failures on top.

Task 9: Confirm Docker didn’t sneak in conflicting iptables rules (legacy setups)

cr0x@server:~$ sudo iptables -S | sed -n '1,80p'
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER

Decision: if FORWARD is DROP, understand what’s supposed to be forwarded.
For host↔macvlan, we usually solve with a host-side macvlan interface and routing, not forwarding through docker0.

Task 10: Create a host-side macvlan interface (“shim”) and assign an IP

This is the core fix for the routing trap. We create a macvlan interface on the host, attached to the same parent,
give it an IP on the same subnet (or a dedicated /32 plus route), and then route to the container IPs via that shim.

cr0x@server:~$ sudo ip link add macvlan0 link enp3s0 type macvlan mode bridge
cr0x@server:~$ sudo ip addr add 192.168.10.254/24 dev macvlan0
cr0x@server:~$ sudo ip link set macvlan0 up
cr0x@server:~$ ip addr show macvlan0
20: macvlan0@enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8a:1c:2e:11:22:33 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.254/24 scope global macvlan0

Decision: once this interface is up, the host has an L2 presence that can talk to macvlan children.
Be aware that assigning a second /24 creates a duplicate on-link route; depending on metrics, the host may start preferring macvlan0 for all LAN traffic. If that matters, give the shim a /32 and rely on explicit routes instead.
If your LAN already uses .254 for something, pick a free IP. Don’t steal the router’s address unless you enjoy late nights.

Task 11: Add a specific route (or policy route) so traffic goes via the shim

If the container IPs are within the same subnet as the host (common), the host will still consider them “on-link” via enp3s0.
We need to nudge routing so that the container IPs are reached through macvlan0 instead.

cr0x@server:~$ sudo ip route add 192.168.10.50/32 dev macvlan0
cr0x@server:~$ ip route get 192.168.10.50
192.168.10.50 dev macvlan0 src 192.168.10.254 uid 1000
    cache

Decision: if ip route get shows dev macvlan0, your host will now send traffic to the container through the shim.
For multiple containers, route a whole range or use an IPAM range dedicated to macvlan containers so you can route a prefix, not dozens of /32s.

Task 12: Validate host↔container connectivity after the shim

cr0x@server:~$ ping -c 2 192.168.10.50
PING 192.168.10.50 (192.168.10.50) 56(84) bytes of data.
64 bytes from 192.168.10.50: icmp_seq=1 ttl=64 time=0.451 ms
64 bytes from 192.168.10.50: icmp_seq=2 ttl=64 time=0.389 ms

--- 192.168.10.50 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.389/0.420/0.451/0.031 ms

Decision: if ping works, TCP should work. If ping works but TCP doesn’t, now it’s time to check container service bind addresses and firewalling.

Task 13: Confirm the container can reach the host via the shim IP

cr0x@server:~$ docker exec -it dns01 ping -c 2 192.168.10.254
PING 192.168.10.254 (192.168.10.254) 56(84) bytes of data.
64 bytes from 192.168.10.254: icmp_seq=1 ttl=64 time=0.312 ms
64 bytes from 192.168.10.254: icmp_seq=2 ttl=64 time=0.298 ms

--- 192.168.10.254 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.298/0.305/0.312/0.007 ms

Decision: tell apps on the host to talk to the container via container IP, and tell containers to talk back to the host via 192.168.10.254.
It’s not pretty. It is reliable.
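If you don’t want to hard-code the shim IP in application configs, you can inject a stable name into the container at run time with --add-host. The name host-shim below is illustrative, not a Docker convention:

```shell
# Recreate the container with an /etc/hosts entry pointing at the shim IP.
# "host-shim" is an arbitrary name; 192.168.10.254 is the shim from Task 10.
docker run -d --name dns01 --network lan-macvlan --ip 192.168.10.50 \
  --add-host host-shim:192.168.10.254 \
  alpine sleep 1d

# Verify the entry landed in the container's /etc/hosts:
docker exec dns01 grep host-shim /etc/hosts
```

Apps inside the container can then target host-shim, and renumbering the shim later means changing one flag, not a dozen configs.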

Task 14: Check for MAC filtering or promisc problems (VM/hypervisor clue)

cr0x@server:~$ ip -d link show enp3s0 | sed -n '1,40p'
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 3c:52:82:aa:bb:cc brd ff:ff:ff:ff:ff:ff
    promiscuity 0

Decision: on bare metal, promiscuity 0 can still be fine because the NIC receives frames destined to the MACs it’s told about.
In a VM, if macvlan frames are filtered upstream, you’ll see “works sometimes” behavior. Then the fix is outside Linux: enable MAC spoofing/forged transmits/promisc on the vSwitch/port group.

Task 15: Verify VLAN parent interface if you’re trunking

cr0x@server:~$ ip link show | grep -E 'enp3s0'
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
15: enp3s0.30@enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000

Decision: if your containers belong on VLAN 30, the Docker macvlan parent should be enp3s0.30, not enp3s0.
Wrong VLAN equals “unreachable,” and you’ll waste time blaming macvlan.

Fix patterns: pick the least-bad option

There isn’t one universal fix because macvlan is usually chosen for a reason: you want L2 adjacency, separate IPs, and the ability for other LAN devices to treat containers as first-class citizens.
But you also want the host to manage them. Those goals are slightly at odds.

Pattern A (recommended): Host-side macvlan shim + route

This is the standard correction for the routing trap. It makes the host a peer on the macvlan segment in a way the kernel will accept.
It’s explicit. It’s observable. It’s reversible.

How to do it well:

  • Allocate a dedicated IP for the shim (e.g., 192.168.10.254).
  • Use a dedicated IP range for containers (e.g., 192.168.10.128/25) so you can route a prefix to macvlan0 instead of adding many /32 routes.
  • Persist the config (systemd-networkd, NetworkManager, or a boot script). Ad-hoc ip link add fixes disappear on reboot.
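A minimal systemd-networkd sketch for the persistent shim, assuming networkd already manages enp3s0 (the file names and the 192.168.10.128/25 container range are illustrative):

```ini
# /etc/systemd/network/25-macvlan0.netdev — define the shim device
[NetDev]
Name=macvlan0
Kind=macvlan

[MACVLAN]
Mode=bridge
```

```ini
# /etc/systemd/network/25-macvlan0.network — address plus route to containers
[Match]
Name=macvlan0

[Network]
Address=192.168.10.254/24

[Route]
Destination=192.168.10.128/25
```

You also need MACVLAN=macvlan0 in the [Network] section of the parent interface’s .network file, then a networkctl reload (or networkd restart) to apply.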

Pattern B: Use ipvlan L3 instead of macvlan

If your core need is “containers have IPs on the LAN and can be reached,” ipvlan L3 can be cleaner.
It reduces MAC address proliferation because endpoints can share the parent MAC, and routing is explicit at L3.

The trade: you now care more about routing and less about L2 broadcast semantics. Some discovery protocols that rely on L2 broadcasts
won’t behave the same. In exchange, you avoid switch port-security drama and hypervisor MAC spoofing settings.
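A hedged sketch of the ipvlan L3 alternative (subnet and names are examples). Note that L3 mode has no gateway in the usual sense: your LAN router needs a static route for the container subnet pointing at this host:

```shell
# ipvlan L3: endpoints share the parent's MAC; reachability is pure routing.
docker network create -d ipvlan \
  -o ipvlan_mode=l3 \
  -o parent=enp3s0 \
  --subnet=192.168.20.0/24 \
  lan-ipvlan

docker run -d --name dns02 --network lan-ipvlan --ip 192.168.20.50 alpine sleep 1d

# On the upstream router (not this host), something equivalent to:
#   ip route add 192.168.20.0/24 via 192.168.10.20
```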

Pattern C: Don’t use macvlan; use bridge + published ports

Sometimes the right answer is: stop trying to make containers look like physical machines. If you’re just exposing HTTP, DNS, or a few TCP ports,
bridge networking with port publishing is simpler and less likely to anger your network team.

Macvlan is a tool. It’s not a personality.

Pattern D: Put the host itself on a VLAN sub-interface and keep containers elsewhere

In some shops, the cleanest way is to make the host “live” on a management VLAN and put macvlan containers on a service VLAN via a trunk.
That way you can explicitly route between them and treat the host as just another router endpoint.

This is often how people get macvlan working on multi-tenant systems without creating a weird half-attached host.

Pattern E: Use a dedicated NIC for macvlan workloads

If you have the hardware and the port, dedicating a physical NIC to macvlan can reduce conflicts with the host’s own LAN identity.
It also makes failure domains cleaner: you can bounce the macvlan NIC without dropping SSH to the host.

Joke #2: when macvlan breaks in production, it’s never “the network,” until it is—and then it’s always your change ticket.

Common mistakes: symptom → root cause → fix

1) Host can’t reach container, but other LAN hosts can

Symptom: LAN works, host fails (ping/curl from host times out).

Root cause: classic macvlan host↔child isolation on the parent interface.

Fix: create a host-side macvlan shim interface and add routes so container IPs go via the shim.

2) Nobody can reach the container, container can’t reach anything

Symptom: container has IP, but it’s dead on the network.

Root cause: upstream filtering of multiple MAC addresses (hypervisor settings, switch port-security) or wrong VLAN parent.

Fix: enable MAC spoofing/promisc/forged transmits on the hypervisor/vSwitch; increase switch MAC limit; ensure the parent is the VLAN sub-interface (e.g., enp3s0.30) when trunking.

3) Container reachable until you deploy a second one, then flakiness

Symptom: first container works; adding more causes intermittent ARP or random reachability.

Root cause: switch port-security MAC limit, CAM table churn, or duplicate IPs from sloppy static assignments.

Fix: check switch port-security; allocate IP ranges properly; use Docker IPAM with a constrained range; consider ipvlan to reduce MAC count.

4) Host reachability “fixed,” but only for ping

Symptom: ping works after shim; TCP connect fails.

Root cause: service binding to localhost, container firewall, or host firewall blocking specific ports.

Fix: validate ss -lntp inside container; confirm service binds to 0.0.0.0 or container IP; adjust nftables/iptables rules.
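The quick check for this case, assuming ss from iproute2 is present in the image (busybox-based images may only have netstat):

```shell
# List listening TCP sockets inside the container.
docker exec dns01 ss -lntp

# A local address of 127.0.0.1:<port> means loopback-only: the service is
# invisible from the LAN until it binds 0.0.0.0 or the container IP.
```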

5) Container can’t reach host services on the host’s LAN IP

Symptom: container can reach the internet, but not 192.168.10.20 (host).

Root cause: the same isolation, reversed: a macvlan child cannot deliver frames to its own parent interface, so traffic aimed at the parent’s LAN IP never reaches the host stack (or the replies can’t get back).

Fix: have containers target the host’s shim IP (e.g., 192.168.10.254), or design with separate VLANs and routing.

6) Packets show up in tcpdump, but applications still can’t connect

Symptom: tcpdump sees SYNs arriving; app times out.

Root cause: asymmetric routing or rp_filter dropping replies because the kernel thinks the return path is “wrong.”

Fix: check sysctl net.ipv4.conf.*.rp_filter; consider policy routing or ensure the route to container IPs is via macvlan shim.
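The rp_filter check and the usual loose-mode relaxation look like this (interface name illustrative; the kernel applies the stricter of the all and per-interface values):

```shell
# 0 = off, 1 = strict reverse-path check, 2 = loose
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.macvlan0.rp_filter

# Relax to loose mode if strict mode is dropping shim-path replies:
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
sudo sysctl -w net.ipv4.conf.macvlan0.rp_filter=2
```

Persist the setting via a drop-in under /etc/sysctl.d/ if it helps; loose mode still rejects sources that are entirely unroutable.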

7) Docker network created with gateway that isn’t actually reachable

Symptom: container can be reached on-link but can’t reach outside the subnet.

Root cause: wrong gateway (typo, wrong VLAN) or firewall on the gateway blocking that container range.

Fix: validate container default route; validate gateway ARP; coordinate with network team for ACLs.

Three corporate-world mini-stories from the macvlan trenches

Mini-story 1: The incident caused by a wrong assumption

A mid-sized company wanted to run a handful of network services in containers: internal DNS, an NTP relay, and a couple of vendor appliances
that assumed “real” IPs. Someone proposed macvlan: “It’ll be clean. Containers get IPs, no port mapping, everything looks like a normal host.”
It passed the pilot with one container. Green checkmarks all around.

The incident started after a maintenance reboot. Monitoring screamed that DNS was down—but only from the Docker host itself.
Other servers could still resolve. The on-call engineer did the normal thing: restarted the container, checked the logs, then checked the firewall.
Nobody suspected the real issue because they assumed the host is always able to reach workloads it runs. That assumption is usually correct, right up until you choose macvlan.

They escalated to the network team (because of course), who saw ARP from the host and no replies. The container did see ARP requests, but the host never learned the neighbor entry.
Everyone stared at captures for an hour, convinced there was a switch bug. The “bug” was local: the host’s parent interface wasn’t allowed to talk to its macvlan child.

The fix was a host-side macvlan shim plus a /32 route for the DNS container IP. DNS checks from the host turned green immediately.
The lesson stuck: macvlan is not broken, it’s just opinionated. If you don’t know its opinion, it will be expressed at 3 a.m.

Mini-story 2: The optimization that backfired

Another org ran a fleet of Docker hosts on a virtualized platform. They hit a limit: too many MAC addresses learned on a top-of-rack switch port.
Someone proposed “an optimization”: keep macvlan but aggressively recycle containers and IPs to reduce steady-state MAC count. On paper, this looks clever.
In reality, it’s how you teach the network to hate you.

As containers churned, the switch learned and aged MAC entries constantly. Some hypervisor layers also cached MAC filters.
The result wasn’t a clean failure. It was the worst kind: intermittent reachability.
A container would be reachable from some subnets but not others. ARP tables differed between hosts. TCP handshakes would hang halfway.

The incident review revealed that the “optimization” increased churn in the exact place you want stability: L2 identity and neighbor discovery.
The network team wasn’t amused, and they shouldn’t have been. L2 is happy when it’s boring.

The eventual correction was to migrate the workload to ipvlan L3 on a routed segment and keep container lifetimes stable.
They also carved out a dedicated IP range and stopped recycling addresses aggressively. The performance “optimization” wasn’t faster; it was just noisier.

Mini-story 3: The boring but correct practice that saved the day

A financial services shop had a hard rule: every non-bridge network used a dedicated address range, documented in an internal “IPAM lite” spreadsheet.
Not glamorous. Not cutting-edge. Very effective.

When they introduced macvlan for a legacy service that needed L2 adjacency, they allocated a contiguous /27 for container IPs and reserved one address for the host shim.
They created the shim interface via systemd-networkd, not a post-it-note shell script. They also wrote a one-page runbook: “If host can’t reach container, check route to /27 via macvlan0.”

Months later, a kernel update and a Docker upgrade happened during routine patching. A junior engineer noticed monitoring checks from the host started failing.
They didn’t “try things.” They followed the runbook: verified the shim interface existed, verified the route, verified ARP. One reboot had dropped the interface because a config file wasn’t enabled.
They fixed the enablement, reloaded network config, and the service recovered quickly.

Nothing heroic happened. That’s the point. The practice that saved the day was boring: dedicated ranges, persistent config, and a runbook that assumes humans are tired.

Checklists / step-by-step plan

Checklist: before you choose macvlan in production

  • Confirm you actually need L2 identities. If you only need inbound TCP/UDP, bridge + published ports is usually better.
  • Ask the network question early: can this port accept multiple MACs? Any port-security or MAC limits? Any NAC features?
  • Decide where IPs come from: static IPAM range or DHCP (Docker macvlan often uses static allocation via Docker IPAM).
  • Reserve a host shim IP and document it.
  • Plan a dedicated container range so you can route a prefix to the shim.
  • Confirm VLAN design: if you trunk, create VLAN sub-interfaces and use them as macvlan parents.
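If you trunk, the VLAN sub-interface creation that the last bullet refers to looks roughly like this (VLAN 30 and the subnet are examples):

```shell
# Create and raise a VLAN 30 sub-interface on the trunk port.
sudo ip link add link enp3s0 name enp3s0.30 type vlan id 30
sudo ip link set enp3s0.30 up

# Use the sub-interface — not the raw trunk port — as the macvlan parent.
docker network create -d macvlan \
  --subnet=192.168.30.0/24 --gateway=192.168.30.1 \
  -o parent=enp3s0.30 vlan30-macvlan
```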

Step-by-step: implement macvlan with host access (the sane way)

  1. Create the Docker macvlan network with a defined subnet and (ideally) a constrained IP range.

    cr0x@server:~$ docker network create -d macvlan \
      --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
      -o parent=enp3s0 lan-macvlan
    f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5

    Decision: if this fails, you likely don’t have permissions, the parent doesn’t exist, or NetworkManager is fighting you.

  2. Run a container and assign an IP (or let Docker pick from the pool).

    cr0x@server:~$ docker run -d --name dns01 --network lan-macvlan --ip 192.168.10.50 alpine sleep 1d
    7c1d2a9f0b11b7f9a3b6c2d1e0f9a8b7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2

    Decision: if Docker refuses the IP, you have conflicts or the IP is outside the subnet.

  3. Create the host shim interface and assign an IP.

    cr0x@server:~$ sudo ip link add macvlan0 link enp3s0 type macvlan mode bridge
    cr0x@server:~$ sudo ip addr add 192.168.10.254/24 dev macvlan0
    cr0x@server:~$ sudo ip link set macvlan0 up
    cr0x@server:~$ ip -br addr show macvlan0
    macvlan0@enp3s0     UP             192.168.10.254/24

    Decision: if the interface won’t come up, check for parent state and driver restrictions.

  4. Add routes for container IPs (preferably a prefix).

    cr0x@server:~$ sudo ip route add 192.168.10.128/25 dev macvlan0
    cr0x@server:~$ ip route get 192.168.10.50
    192.168.10.50 dev enp3s0 src 192.168.10.20 uid 1000
        cache

    The output above shows the route still goes via enp3s0 because 192.168.10.50 is not inside 192.168.10.128/25.
    Decision: align your container IP range with your route. Don’t route the wrong half of the subnet and call it “networking.”

  5. Route the correct range or add /32 routes for specific containers.

    cr0x@server:~$ sudo ip route add 192.168.10.50/32 dev macvlan0
    cr0x@server:~$ ip route get 192.168.10.50
    192.168.10.50 dev macvlan0 src 192.168.10.254 uid 1000
        cache

    Decision: if routing is correct, test application connectivity. If it still fails, you are no longer in the “macvlan trap” bucket.

  6. Persist the shim and routes using your host’s network management system.

    Don’t leave it ephemeral. Reboots are inevitable, like meetings about why reboots happened.

One quote you should keep on your mental dashboard

“Hope is not a strategy.” — commonly attributed to Vince Lombardi, and a staple in ops/reliability circles

FAQ

1) Why can other LAN hosts reach my macvlan container, but the Docker host can’t?

Because macvlan typically prevents the parent interface from talking directly to its macvlan children. Other hosts’ traffic arrives from the wire and is delivered into the child interface.
Host-originated traffic doesn’t hairpin the same way.

2) Is this a Docker bug?

No. Docker is using the kernel macvlan driver. The behavior is a known property of macvlan networking on Linux.
Treat it like kernel networking design, not an application regression.

3) What’s the cleanest fix if I need host↔container communication?

Create a host-side macvlan interface attached to the same parent, assign it an IP, and route container IPs via that interface.
This makes the host a first-class peer in that L2 segment.

4) Should I route a whole subnet to the shim or use /32 routes per container?

Route a dedicated prefix if you can. It’s operationally sane: fewer routes, fewer surprises, easier to document.
/32 routes are fine for a small number of containers or for tactical fixes.

5) Would ipvlan avoid this problem?

Often, yes—especially ipvlan L3. Ipvlan changes the model: less L2 identity, more explicit L3 routing.
It can also reduce MAC address sprawl and avoid port-security issues.

6) Do I need promiscuous mode?

On bare metal, not usually. In virtualized environments, often yes—because the hypervisor/vSwitch may drop frames for “unknown” MAC addresses.
If macvlan works on a physical host but fails in a VM, suspect hypervisor MAC filtering immediately.

7) My switch has port-security. Can I still use macvlan?

Maybe, but you need to know the MAC limit on that port and how macvlan changes MAC counts.
If you can’t raise the limit or get an exception, consider ipvlan or bridge networking.

8) How do I make containers reach the host?

Have containers talk to the host via the host’s macvlan shim IP, not the host’s parent IP.
Alternatively, separate host management and container service networks with VLANs and route between them.

9) Is macvlan safe for stateful storage services?

It can be, but don’t conflate “has its own IP” with “is isolated.” You still share the same physical NIC, queues, and upstream path.
For storage services, make your failure domains explicit: dedicated NICs/VLANs, predictable MTU, and tested failover.

10) Does macvlan break multicast or broadcast discovery?

Macvlan itself doesn’t automatically “break” broadcast on the LAN, but your environment might: VLAN boundaries, IGMP snooping, or security controls can change behavior.
If your app relies on L2 discovery, test it with real switches, not just a laptop and optimism.

Conclusion: practical next steps

If you’re stuck in “can’t reach the container,” stop staring at Docker logs. This is almost always routing and L2 semantics.
Confirm the asymmetric symptom (LAN works, host fails). Then implement the fix that matches your constraints:
a host-side macvlan shim with explicit routes, or a move to ipvlan if MAC sprawl is going to get you paged by the network team.

Next steps you can do today:

  • Decide on a dedicated container IP range and reserve a shim IP.
  • Implement the shim interface and a route that actually covers your container IPs.
  • Persist the configuration so reboots don’t resurrect the problem.
  • Write a two-minute runbook: “If host can’t reach container, check route to range via macvlan0.”