Ubuntu 24.04 “Network is unreachable”: routing table truth serum (and fixes)

“Network is unreachable” is Linux being brutally honest. It’s not saying DNS is broken. It’s not saying the remote host is down. It’s saying: “I looked at my routing tables and I have no idea where to send this packet.”

On Ubuntu 24.04, that honesty can be inconvenient—especially when the interface is “UP,” the IP is present, and your coworker swears it worked “five minutes ago.” This is where you stop guessing and start interrogating the routing decision the kernel is actually making.

What “Network is unreachable” really means (kernel perspective)

When you see:

cr0x@server:~$ ping -c 1 1.1.1.1
ping: connect: Network is unreachable

That error is typically ENETUNREACH bubbling up from the kernel. Translation: the kernel tried to find a route to the destination and failed at the routing step—before it even tried ARP/NDP, before it touched the wire.

That distinction matters. If ARP fails, you often get “Destination Host Unreachable” (from the local host) or timeouts. If DNS fails, you get “Name or service not known.” If the route lookup fails, you get “Network is unreachable.” Linux isn’t being poetic; it’s giving you a specific failure mode.

There are a few variants worth knowing:

  • No matching route exists in any consulted routing table for that destination.
  • A route exists but it’s marked unreachable, prohibit, or blackhole (yes, those are real route types; see the sketch after this list).
  • Policy routing (ip rules) sent the lookup to a table that doesn’t have a route.
  • IPv6 got selected but IPv6 routing is missing (or RA is off), so v6 looks unreachable even though v4 works.
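
If you have never met those route types, a throwaway network namespace is a safe place to look at them; a minimal sketch, assuming nothing about your host (the namespace name is made up and the prefixes are documentation ranges):

cr0x@server:~$ sudo ip netns add route-lab
cr0x@server:~$ sudo ip netns exec route-lab ip link set lo up                         # fresh namespaces start with lo down
cr0x@server:~$ sudo ip netns exec route-lab ip route add unreachable 203.0.113.0/24   # refused, with an ICMP unreachable back
cr0x@server:~$ sudo ip netns exec route-lab ip route add prohibit 198.51.100.0/24     # refused as administratively prohibited
cr0x@server:~$ sudo ip netns exec route-lab ip route add blackhole 192.0.2.0/24       # dropped silently
cr0x@server:~$ sudo ip netns exec route-lab ip route
cr0x@server:~$ sudo ip netns del route-lab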

One operational mindset shift: don’t “check the network.” Check the routing decision. Linux will tell you exactly what it thinks. You just have to ask the right way.

Joke #1: The routing table is like a corporate org chart: everyone claims it’s accurate, and everyone is wrong until something breaks.

Routing table truth serum: how Linux decides

If you remember one command from this whole piece, make it this:

cr0x@server:~$ ip route get 1.1.1.1
1.1.1.1 via 192.0.2.1 dev ens18 src 192.0.2.10 uid 1000
    cache

ip route get is your truth serum. It forces the kernel to do the same route lookup it will do for real traffic, and it prints the chosen next hop, outgoing interface, and source address selection.

Three key points that routinely surprise smart people:

  1. The kernel chooses a source IP. If that source IP is wrong (wrong subnet, deprecated, or from a different VRF/table), replies won’t come back. Sometimes the symptom is “unreachable,” sometimes it’s a timeout. Either way, read the src.
  2. Rules may override routes. The route you see in main isn’t necessarily the route used. ip rule can send traffic into a different table based on source, fwmark, UID, or other selectors.
  3. Metrics decide fights. Multiple default routes are common on laptops (Wi-Fi + VPN + Docker + “helpful” tooling). The kernel will pick the lowest metric that matches, not the one you meant (see the sketch after this list).
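
A related trick: ip route get prints the resolved result, while adding the fibmatch flag prints the actual table entry that won the lookup, which makes metric and more-specific-route surprises visible. A minimal sketch; the output line is illustrative and mirrors whatever ip route show displays for that entry:

cr0x@server:~$ ip route get 1.1.1.1 fibmatch
default via 192.0.2.1 dev ens18 proto dhcp src 192.0.2.10 metric 100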

The routing decision stack (roughly)

For most Ubuntu 24.04 hosts, the path looks like this:

  • Application asks to connect; kernel builds a flow tuple (dst, src if bound, mark, etc.).
  • ip rule list is consulted top-down. Each rule chooses a routing table lookup (or says “prohibit”).
  • A matching route is found by longest-prefix match: an exact host route beats a shorter covering prefix, which beats the default.
  • Next hop is resolved: a directly connected route resolves the destination itself via ARP/NDP; a via-gateway route resolves the gateway’s L2 address instead.
  • Then we finally get to “is the interface up,” “is there carrier,” “does the neighbor resolve.”

So: if you’re seeing “Network is unreachable,” you’re failing before neighbor resolution. Your mission is to find which lookup produced no usable route—and why.

There’s a paraphrased idea from Werner Vogels that fits operations: everything fails, all the time, so design for it. Routing is one of those “everything fails” layers—quietly, until it doesn’t.

Fast diagnosis playbook (first/second/third)

When you’re on a pager, you don’t want a networking seminar. You want a fast path to the bottleneck. Here’s the order that saves the most time in practice.

First: ask the kernel what it would do

  1. ip route get <dst> for the exact destination IP. If it errors, routing is missing. If it picks an unexpected interface or source IP, routing exists but is wrong.
  2. ip rule to see whether policy routing is diverting your traffic into a table you forgot about.

Second: verify the basics that create routes

  1. ip addr on the chosen interface: correct address? correct prefix length? not tentative?
  2. ip route and ip -6 route: do you have a default route? is it in the table you’re actually using?
  3. Netplan/NetworkManager/systemd-networkd status: is the config source actually applying what you think it is?

Third: only then chase L2 and external issues

  1. ip neigh / ip -6 neigh: can you resolve the gateway MAC?
  2. ping the gateway and an on-link host.
  3. tcpdump only when you need proof, not therapy.

The playbook is intentionally routing-first. “Network is unreachable” is usually the kernel telling you it never even tried to send.

Practical tasks: commands, outputs, decisions (12+)

These are the tasks I run in production. Each one has three parts: the command, what typical output means, and what decision you make next.

Task 1: Reproduce the error with an IP, not a name

cr0x@server:~$ ping -c 1 1.1.1.1
ping: connect: Network is unreachable

What it means: Not DNS. Not firewall. The kernel route lookup failed (or an unreachable/prohibit route exists).

Decision: Go straight to ip route get and ip rule. Don’t waste time in resolv.conf.

Task 2: Ask the kernel for the route decision

cr0x@server:~$ ip route get 1.1.1.1
RTNETLINK answers: Network is unreachable

What it means: The routing tables consulted (under current policy rules) do not produce a usable route.

Decision: Inspect policy routing first (ip rule), then inspect route tables (ip route show table ...).

Task 3: Check policy routing rules (the usual culprit in “it worked yesterday”)

cr0x@server:~$ ip rule
0:      from all lookup local
100:    from 10.10.0.0/16 lookup 100
32766:  from all lookup main
32767:  from all lookup default

What it means: Traffic sourced from 10.10.0.0/16 uses table 100. If your app binds that source (or the kernel picks it), you might be routing in a different universe.

Decision: Check table 100 routes. If it’s empty or missing default, you found your “unreachable.”

Task 4: Show routes in main table (IPv4)

cr0x@server:~$ ip route
192.0.2.0/24 dev ens18 proto kernel scope link src 192.0.2.10

What it means: You have only a connected route. No default route. The host can talk to its own subnet, and that’s it.

Decision: Add/fix default gateway via Netplan/NetworkManager, not by hand (unless you’re triaging).

Task 5: Show routes in a specific table (policy routing table 100)

cr0x@server:~$ ip route show table 100
10.10.0.0/16 dev wg0 proto kernel scope link src 10.10.0.5

What it means: Table 100 can reach the VPN subnet only. If table 100 is used for internet-bound traffic, you’ll get “unreachable.”

Decision: Either add a default route to table 100 (if intentional) or fix the rule that sends traffic there.

Task 6: Confirm the default route exists (and isn’t competing)

cr0x@server:~$ ip route show default
default via 192.0.2.1 dev ens18 proto dhcp src 192.0.2.10 metric 100
default via 198.51.100.1 dev ens19 proto dhcp src 198.51.100.10 metric 50

What it means: Two defaults. The kernel will prefer metric 50 (ens19). That may be a disconnected network, a management VLAN, or a dead uplink.

Decision: Decide which interface should own default. Fix metrics or remove the unwanted default at the config layer.

Task 7: See addresses and prefixes (wrong netmask = silent chaos)

cr0x@server:~$ ip -br addr
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens18            UP             192.0.2.10/32 fe80::a00:27ff:fe4e:66a1/64

What it means: /32 on a LAN interface is almost never correct unless you’re doing point-to-point routing with explicit on-link routes. With /32, the kernel won’t consider the gateway on-link unless told explicitly.

Decision: Fix the prefix length in Netplan/DHCP config. As a temporary band-aid, you can add an on-link route for the gateway (but you should feel slightly ashamed).
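
For completeness, the band-aid looks roughly like this, assuming the gateway and interface from the example above; it belongs in an incident note, not in configuration:

cr0x@server:~$ sudo ip route add 192.0.2.1 dev ens18                     # host route: treat the gateway as on-link
cr0x@server:~$ sudo ip route add default via 192.0.2.1 dev ens18 onlink  # default via the now-reachable gateway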

Task 8: Check whether the gateway is considered on-link

cr0x@server:~$ ip route get 192.0.2.1
RTNETLINK answers: Network is unreachable

What it means: Even the gateway address can’t be reached. This is classic wrong prefix length or missing connected route.

Decision: Fix addressing first; routing depends on it.

Task 9: If routes look right, check L2 neighbor state for the gateway

cr0x@server:~$ ip neigh show dev ens18
192.0.2.1 lladdr 00:11:22:33:44:55 REACHABLE

What it means: ARP resolution works. If you still can’t reach off-subnet destinations, the problem is upstream (gateway routing/NAT/firewall) or policy routing.

Decision: Ping the gateway, then trace route decisions beyond it. If neighbor is FAILED, focus on VLANs, bridge config, cabling, or security controls.

Task 10: Quick health check on link state and driver counters

cr0x@server:~$ ip -s link show dev ens18
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:4e:66:a1 brd ff:ff:ff:ff:ff:ff
    RX:  bytes packets errors dropped  missed   mcast
    1249389   15234      0       0       0     123
    TX:  bytes packets errors dropped carrier collsns
     893221   10112      0       0       0       0

What it means: LOWER_UP indicates carrier. Errors/collisions/drops give you a sniff test for physical or virtual switch issues.

Decision: If carrier is down, stop routing analysis and fix link. If carrier is up and counters are clean, return to routing/policy.

Task 11: Inspect Netplan’s generated config (catch mismatches between intent and reality)

cr0x@server:~$ sudo netplan get
network:
  version: 2
  ethernets:
    ens18:
      dhcp4: true

What it means: Your system expects DHCP for IPv4. If there is no default route, the DHCP server may not be sending router option, or the client is not applying it.

Decision: Confirm the renderer and logs (NetworkManager vs systemd-networkd), then check DHCP lease details.

Task 12: Identify the renderer and active network stack

cr0x@server:~$ networkctl status -n0
WARNING: systemd-networkd is not running, output will be incomplete.

What it means: systemd-networkd likely isn’t the active renderer. NetworkManager probably is (common on desktops; sometimes on servers too).

Decision: Don’t edit the wrong tool. Check NetworkManager state next.

Task 13: Check NetworkManager device and connection state

cr0x@server:~$ nmcli -f GENERAL.STATE,GENERAL.CONNECTION,IP4.GATEWAY,IP4.ROUTE dev show ens18
GENERAL.STATE:                         100 (connected)
GENERAL.CONNECTION:                    Wired connection 1
IP4.GATEWAY:                           --
IP4.ROUTE[1]:                          dst = 192.0.2.0/24, nh = 0.0.0.0, mt = 100

What it means: Connected but no gateway. That’s exactly how you get “Network is unreachable” for off-subnet traffic.

Decision: Fix the connection profile (gateway, DHCP options, or “never-default” flags). Restarting the interface won’t invent a gateway.

Task 14: Inspect DHCP lease info for router option

cr0x@server:~$ journalctl -u NetworkManager -b | tail -n 8
Dec 30 10:14:22 server NetworkManager[1023]: <info>  [1735553662.1234] dhcp4 (ens18): option routers: (none)
Dec 30 10:14:22 server NetworkManager[1023]: <info>  [1735553662.1236] dhcp4 (ens18): option subnet_mask: 255.255.255.0
Dec 30 10:14:22 server NetworkManager[1023]: <info>  [1735553662.1238] dhcp4 (ens18): state changed bound -> bound

What it means: DHCP server didn’t provide a router (default gateway). Your host is behaving correctly; the network isn’t.

Decision: Escalate to network/DHCP owners, or set a static gateway if this segment requires it.
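
If a static gateway really is the accepted answer for this segment, a minimal Netplan sketch looks like this (the file name, gateway, and metric are assumptions; keep DHCP for the address if that part works):

# /etc/netplan/99-static-gateway.yaml (illustrative)
network:
  version: 2
  ethernets:
    ens18:
      dhcp4: true
      routes:
        - to: default
          via: 192.0.2.1
          metric: 100

Validate with sudo netplan try before trusting it, as covered later in this piece.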

Task 15: Temporary triage—add a default route (don’t confuse triage with a fix)

cr0x@server:~$ sudo ip route add default via 192.0.2.1 dev ens18 metric 100
cr0x@server:~$ ip route get 1.1.1.1
1.1.1.1 via 192.0.2.1 dev ens18 src 192.0.2.10 uid 1000
    cache

What it means: You’ve restored routing for now.

Decision: Immediately implement the permanent change in Netplan/NM. Manual routes disappear on reboot and during network restarts, which is a fun surprise at 02:00.

Task 16: Prove whether policy routing is selecting a different table by source

cr0x@server:~$ ip route get 1.1.1.1 from 10.10.0.5
RTNETLINK answers: Network is unreachable

What it means: With source 10.10.0.5 (likely a VPN interface), the lookup fails. That’s a policy routing problem, not “internet is down.”

Decision: Fix ip rule and table routes. Many VPN clients install rules that make sense for laptops and nonsense for servers.

Task 17: Confirm reverse path filtering isn’t masquerading as routing failure

cr0x@server:~$ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.ens18.rp_filter
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.ens18.rp_filter = 1

What it means: Strict rp_filter (value 1) can drop replies in asymmetric routing scenarios. It typically causes timeouts, but in some cases you’ll see confusing ICMP errors depending on the path.

Decision: If you have multi-homing, policy routing, or VRFs, consider 2 (loose) for relevant interfaces, but do it intentionally and document why.
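
A minimal sketch of doing that deliberately and persistently (the drop-in file name is an assumption):

cr0x@server:~$ sudo sysctl -w net.ipv4.conf.ens18.rp_filter=2                          # runtime: loose mode on ens18
cr0x@server:~$ echo 'net.ipv4.conf.ens18.rp_filter = 2' | sudo tee /etc/sysctl.d/90-rp-filter.conf
cr0x@server:~$ sudo sysctl --system                                                    # reload persistent settings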

Common mistakes: symptom → root cause → fix

1) “ping: connect: Network is unreachable” to any off-subnet IP

Symptom: Can ping own IP and maybe the gateway, but cannot ping public IPs.

Root cause: Missing default route (no router option from DHCP, static config incomplete, or renderer mismatch).

Fix: Add correct default gateway in the real config system (Netplan/NM/systemd-networkd). Verify with ip route show default and ip route get.

2) “Network is unreachable” only for some sources / only in containers

Symptom: Host can reach the destination, but traffic from a specific source address (VPN, container, secondary NIC) gets unreachable.

Root cause: Policy routing rule sends that traffic into an empty table, or table lacks default route.

Fix: Audit ip rule, then ip route show table X. Either add the missing route(s) to that table or remove/adjust the rule.

3) Default route exists, but “unreachable” persists for a specific destination prefix

Symptom: Internet works, but a particular RFC1918 range or corporate subnet is “unreachable.”

Root cause: An unreachable route exists for that prefix (sometimes installed by VPN clients, kube tooling, or security agents) or a more specific wrong route overrides the default.

Fix: Search around the prefix: ip route show root 10.0.0.0/8 lists the more-specific routes inside it, and ip route show match 10.0.0.0/8 lists the covering routes (including an unreachable entry for the prefix itself). Delete or correct the route at its source (connection profile, VPN script, netplan).

4) “Network is unreachable” after changing netmask/prefix

Symptom: You changed /24 to /32 (or vice versa) and now the gateway is “unreachable.”

Root cause: Kernel doesn’t consider the gateway on-link; connected route doesn’t cover the gateway IP.

Fix: Correct the prefix length. If you truly need /32, add explicit on-link route to the gateway and a default route via it (and expect to explain yourself later).

5) Two NICs, two defaults, intermittent reachability

Symptom: Works, then stops, especially after reboot or DHCP renew.

Root cause: Competing default routes with changing metrics; kernel picks a different egress path than expected.

Fix: Set metrics explicitly and pin “default route ownership.” On servers, be boring: one default, explicit routes for everything else.
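
In Netplan terms, “boring” looks roughly like this (the interface roles, management prefix, and its gateway are assumptions): one NIC owns the default, the other gets no default and one explicit route.

network:
  version: 2
  ethernets:
    ens18:                         # production uplink: owns the default route
      dhcp4: true
      dhcp4-overrides:
        route-metric: 100
    ens19:                         # management: suppress DHCP routes, add one explicit route
      dhcp4: true
      dhcp4-overrides:
        use-routes: false
      routes:
        - to: 10.20.0.0/16
          via: 198.51.100.1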

6) IPv6 destinations unreachable while IPv4 works (or vice versa)

Symptom: Curl to a hostname fails quickly with unreachable; forcing IPv4 works.

Root cause: IPv6 is preferred but has no default route (missing RA, disabled accept_ra, or firewall blocking NDP/RA).

Fix: Either properly configure IPv6 routing (RA/DHCPv6/static) or disable IPv6 for the interface/workload intentionally. Don’t leave it half-on.
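
“Intentionally off” for one interface can be as small as this sysctl sketch (interface name from the earlier examples; a declarative Netplan equivalent works too):

cr0x@server:~$ sudo sysctl -w net.ipv6.conf.ens18.disable_ipv6=1                        # runtime
cr0x@server:~$ echo 'net.ipv6.conf.ens18.disable_ipv6 = 1' | sudo tee /etc/sysctl.d/90-ipv6-ens18.conf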

7) “Unreachable” inside a namespace / VRF but not on host

Symptom: Host can reach, but netns or VRF cannot.

Root cause: Separate routing tables per namespace/VRF; you fixed the host main table but not the namespace table.

Fix: Run ip route inside the namespace or VRF context and configure routes there.
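
Asking the same question from inside the right context looks roughly like this (the namespace and VRF names are illustrative):

cr0x@server:~$ sudo ip netns exec app-ns ip route get 1.1.1.1      # routing decision inside the namespace
cr0x@server:~$ sudo ip vrf exec mgmt-vrf ip route get 1.1.1.1      # routing decision inside the VRF
cr0x@server:~$ ip route show vrf mgmt-vrf                          # the VRF's own route table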

Ubuntu 24.04 specifics: Netplan, NetworkManager, systemd-networkd

Ubuntu has an elegant idea and a messy reality: Netplan is a declarative frontend, and the backend renderer is either NetworkManager or systemd-networkd. On Ubuntu 24.04, both are common depending on whether you’re on desktop, cloud images, or a “server” someone installed with a GUI because they like clicking things.

Operational rule: don’t fix routing in the wrong layer. If NetworkManager owns the interface, editing a networkd unit won’t do anything. If networkd owns it, nmcli changes won’t stick.

How to tell who owns what

Start with services:

cr0x@server:~$ systemctl is-active NetworkManager
active
cr0x@server:~$ systemctl is-active systemd-networkd
inactive

Decision: If NetworkManager is active and networkd is inactive, treat NM as source of truth for runtime config. Netplan may still generate NM config.

Apply Netplan safely

When you change Netplan, use try on remote boxes. It auto-rolls back if you lose connectivity. This is the closest thing Linux networking has to a seatbelt.

cr0x@server:~$ sudo netplan try
Do you want to keep these settings?

Press ENTER before the timeout to accept the new configuration

Changes will revert in 120 seconds

Decision: If you’re SSH’d in over the interface you’re changing, use try. If you like living dangerously, use apply and enjoy your walk to the data center.

Common Netplan route pitfalls

  • Gateway missing: DHCP isn’t providing it, or you used static addressing without a default entry under routes: (gateway4 still works but is deprecated).
  • Wrong renderer: configuration appears correct but is not applied.
  • Metrics: Netplan supports route metrics; if you don’t set them on multi-NIC systems, you’re leaving defaults to chance.

Joke #2: Changing network config over SSH is a great way to practice mindfulness. You become very present when the prompt stops responding.

IPv6-specific “unreachable” traps

Ubuntu 24.04 generally has IPv6 enabled by default. That’s good. The trap is partial IPv6: addresses exist, but routing doesn’t, so apps that prefer IPv6 fail fast.

Task: See IPv6 routes and whether a default exists

cr0x@server:~$ ip -6 route show default
default via fe80::1 dev ens18 proto ra metric 100 pref medium

What it means: You have an IPv6 default route learned via Router Advertisement (RA). If it’s missing, IPv6 traffic may be unreachable.

Decision: If you need IPv6, ensure RA is accepted (accept_ra), and that firewalling isn’t blocking RA/NDP. If you don’t need IPv6, disable it explicitly per interface or at sysctl—don’t let it half-work.
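
Checking whether the host is even willing to accept RAs is quick (interface from the earlier examples; the values shown are common defaults, yours may differ):

cr0x@server:~$ sysctl net.ipv6.conf.ens18.accept_ra net.ipv6.conf.all.forwarding
net.ipv6.conf.ens18.accept_ra = 1
net.ipv6.conf.all.forwarding = 0

One known trap: with IPv6 forwarding enabled, accept_ra = 1 ignores RAs; it must be set to 2 to keep accepting them on a forwarding host.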

Task: Force IPv4 to validate the hypothesis

cr0x@server:~$ curl -4 -sS --connect-timeout 3 https://example.com
<html>...</html>

What it means: IPv4 path is fine; IPv6 is the broken leg.

Decision: Fix IPv6 routing or adjust address selection / disable IPv6 for the service path. Don’t blame “the internet.”

Task: Check NDP neighbor state for the default gateway

cr0x@server:~$ ip -6 neigh show dev ens18
fe80::1 lladdr 00:11:22:33:44:55 router REACHABLE

What it means: L2 adjacency for IPv6 is healthy. If unreachable persists, focus on routes/rules or upstream.

Decision: If neighbor is FAILED or INCOMPLETE, troubleshoot VLANs, RA filtering, or switch security features.

Policy routing and multiple tables (the grown-up problems)

Policy routing is where simple networks go to become “interesting.” It’s also where “Network is unreachable” gets misdiagnosed as a cabling issue, because someone looks only at ip route (main table) and declares it fine.

How policy routing creates unreachable

Imagine:

  • You have a VPN interface wg0 with 10.10.0.5/16.
  • A rule says: “From 10.10.0.0/16, lookup table 100.”
  • Table 100 has only the connected VPN subnet, no default route.

Any flow that ends up with source 10.10.0.5 now can’t reach the internet. The kernel tries table 100, finds nothing, and says “Network is unreachable.” That’s correct behavior. Your design is incomplete.

Task: Dump all rule priorities (order matters)

cr0x@server:~$ ip -details rule
0:      from all lookup local
100:    from 10.10.0.0/16 lookup 100
32766:  from all lookup main
32767:  from all lookup default

What it means: The kernel will hit priority 100 before main. If it matches, main is irrelevant.

Decision: Fix the rule or populate the table. Don’t “just add a default to main” and hope.
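
In triage, the two options look roughly like this for the wg0 example above (pick one, then make it persistent in the VPN or network config, not here):

cr0x@server:~$ sudo ip route add default dev wg0 table 100      # option 1: give table 100 a default (full tunnel)
cr0x@server:~$ sudo ip rule del pref 100                        # option 2: stop diverting that source (the VPN tool may re-add it)
cr0x@server:~$ ip route get 1.1.1.1 from 10.10.0.5              # re-check the decision either way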

Task: Show route types that explicitly block traffic

cr0x@server:~$ ip route show type unreachable
unreachable 10.0.0.0/8 metric 1024

What it means: The system is intentionally refusing to route that prefix. This may be a security hardening artifact, a VPN client “protecting you,” or an old incident band-aid that never got removed.

Decision: Identify who installed it (NM connection, netplan, scripts, security agent). Remove it at the source, not manually, unless you’re in emergency triage.

Task: Verify selection with marks (if you use nftables/iptables marking)

cr0x@server:~$ ip rule | grep -n fwmark
12:200:  from all fwmark 0x1 lookup 200

What it means: Marked traffic uses table 200. If table 200 lacks a route, marked flows get unreachable while unmarked flows work.

Decision: Confirm your firewall mark rules, and ensure table 200 has a default route or appropriate prefixes.

Three corporate-world mini-stories (anonymized)

Incident caused by a wrong assumption: “DHCP always gives a gateway”

In a mid-size enterprise, a team rolled out a new Ubuntu 24.04 image for build agents. These boxes lived on a “tools” VLAN that historically used static addressing. Someone migrated the VLAN to DHCP to reduce manual work, and everyone cheered because spreadsheets are not infrastructure.

The image used DHCP, picked up an address, and happily reported “connected.” But builds started failing the moment they needed to fetch dependencies from the internet. The error was immediate: “Network is unreachable.” Engineers assumed the proxy was down, then blamed DNS, then blamed the firewall team (always a crowd favorite).

The actual issue was mundane: the DHCP scope delivered an IP and subnet mask, but it didn’t deliver the router option. Half the estate never noticed because they were still static or had cached routes; the new build agents were the only ones clean enough to fail consistently.

What fixed it wasn’t heroics. They confirmed it in logs (“option routers: (none)”), added the router option to the DHCP scope, and the problem vanished. Post-incident, the lesson became a checklist item: new subnets must be validated for IP, mask, router, DNS, and MTU—not “DHCP works.”

Optimization that backfired: “Two defaults for redundancy”

A different shop had dual-homed application servers: one interface on the production network, one on a management network. Someone decided to be clever and add a default route on the management interface “just in case production goes down.” Redundancy, right?

It worked during quiet hours. Then a maintenance window introduced minor packet loss on the production uplink. Linux, being rational and unromantic, kept selecting the lowest-metric default—which happened to shift after a DHCP renew. Some servers started sending production traffic out the management interface. Return traffic didn’t match. rp_filter dropped packets. Customers saw intermittent failures that were nearly impossible to reproduce in a lab.

The optimization didn’t create redundancy. It created nondeterminism. And nondeterminism is just unreliability wearing a suit.

The fix was boring: one default route, always; explicit routes for management; and documented metrics pinned in configuration. “Redundancy” became a proper routing design (failover with routing protocols or monitored changes), not a pair of competing defaults hoping to behave.

Boring but correct practice that saved the day: “ip route get as standard procedure”

At a company with a heavy Kubernetes footprint, nodes occasionally reported “Network is unreachable” when pulling images. The first instinct was to blame the registry or DNS. But the SRE team had a habit: every network complaint starts with ip route get.

That habit paid off. On an affected node, ip route get showed egress via a CNI-created interface, with a source address that belonged to a pod subnet. That was never supposed to happen for host traffic. The team immediately pivoted to policy routing rules and found a stale rule from an old VPN agent that matched more broadly than intended.

They removed the agent, cleaned up the rule, and then added a unit test-style check to node provisioning: ensure host-default egress uses the primary NIC and the expected source. Nothing fancy—just a gate that catches drift before it becomes an outage.

It wasn’t glamorous, but it avoided a long rabbit hole of packet captures and blame. Boring is a feature in operations.

Interesting facts and quick history points

  • Linux “iproute2” replaced “net-tools” (like route, ifconfig) because modern routing needs policy rules, multiple tables, and better visibility.
  • ip route get mirrors the kernel FIB lookup and includes source selection—something old tools barely exposed.
  • Route metrics are the tie-breaker when several routes cover the same prefix. Lowest metric wins, even if it makes humans sad.
  • Linux supports route types like blackhole, unreachable, and prohibit to intentionally block traffic at the routing layer, not the firewall layer.
  • Policy routing via ip rule has been around for decades, widely used by ISPs and multi-homed systems long before “cloud networking” made everyone rediscover it.
  • IPv6 default routes often come from Router Advertisements, not DHCP. Blocking ICMPv6 can quietly break routing, not just “ping.”
  • Reverse path filtering (rp_filter) was designed to mitigate spoofing but interacts badly with asymmetric routing unless tuned deliberately.
  • Network namespaces mean every namespace has its own routing tables. Fixing the host doesn’t fix a container namespace, and vice versa.

Checklists / step-by-step plan

Checklist A: When you see “Network is unreachable” on Ubuntu 24.04

  1. Reproduce with an IP: ping -c 1 1.1.1.1.
  2. Run ip route get 1.1.1.1.
  3. If it errors: inspect ip rule, then ip route (and specific tables).
  4. If it returns a route: confirm dev and src are what you expect.
  5. Check default routes: ip route show default. Remove ambiguity.
  6. Verify addresses/prefix lengths: ip -br addr.
  7. Check neighbor to gateway: ip neigh show dev <dev>.
  8. Confirm which renderer applies config: systemctl is-active NetworkManager and systemctl is-active systemd-networkd.
  9. Implement fix in the owning config layer (Netplan/NM/networkd), not with one-off commands.
  10. Re-test with ip route get and a real connection (curl -4/curl -6 as needed).

Checklist B: Permanent fixes (choose your tool, commit to it)

  • If using Netplan + networkd: define static routes/gateway in Netplan YAML, then sudo netplan apply.
  • If using Netplan + NetworkManager: edit Netplan YAML or NM connection profiles, but don’t mix ad-hoc manual routes with persistent profiles.
  • If you have multiple uplinks: explicitly set route metrics and add explicit routes for non-default networks.
  • If you use VPNs: verify policy routing tables contain a default route if you expect full tunnel, or narrow routes if split tunnel.
  • If you run containers: validate that host routing is not accidentally affected by CNI, Docker, or security agents adding rules/routes.

FAQ

1) Why does “Network is unreachable” happen even when the interface is UP?

Because link state is not routing state. The interface can be UP with a valid IP and still have no default route, wrong netmask, or policy rules sending traffic into an empty table.

2) What’s the fastest single command to diagnose this?

ip route get <destination-ip>. If it errors, routing lookup failed. If it returns a route, read dev and src and validate they make sense.

3) I have a default route, so why is it still unreachable?

Either a more specific route overrides it (including unreachable/blackhole), or policy routing sends the lookup to a different table. Check ip rule and search routes for the destination prefix.

4) How do I tell if IPv6 is the reason my hostname fails?

Force IPv4: curl -4 or ping -4. If IPv4 works and the normal call fails quickly, your system may be selecting IPv6 without a valid IPv6 default route.

5) Is adding a default route with ip route add default a valid fix?

It’s valid triage. It is not a durable fix. It will disappear on reboot and can be overwritten by DHCP renewals or network restarts. Make the change persistent in Netplan/NetworkManager/networkd.

6) What’s the difference between “Network is unreachable” and “Destination Host Unreachable”?

“Network is unreachable” typically means route lookup failed. “Destination Host Unreachable” often means a route exists, but ARP/NDP or downstream routing failed, producing an ICMP unreachable message.

7) Can Docker or Kubernetes cause “Network is unreachable” on the host?

Yes. They add interfaces, routes, and sometimes policy rules. Most of the time they behave. When they don’t, you’ll see unexpected ip rule entries, extra routes, or source selection that points to CNI/Docker networks.

8) Why does it work for root but not for a service user (or vice versa)?

Policy routing can match UID (via ip rule uidrange) or fwmarks applied by cgroups/iptables/nftables. Check ip rule and whether marked/UID-specific rules exist.
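
A quick way to check for that (output illustrative; the UID range and table are whatever tooling on your host installed):

cr0x@server:~$ ip rule | grep -E 'uidrange|fwmark'
150:    from all uidrange 1000-1000 lookup 200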

9) What if the gateway itself is “unreachable”?

That usually means your connected route doesn’t include the gateway IP (wrong prefix length), or the interface/address isn’t actually configured as you think. Fix IP/prefix first; routing comes after.

10) Should I disable rp_filter to fix this?

Only if you understand your routing asymmetry story. rp_filter issues typically manifest as dropped replies/timeouts more than route lookup failures. Tune it deliberately, and document the reason.

Conclusion: practical next steps

“Network is unreachable” is not a vibe. It’s a routing verdict. Treat it like one.

Next time you see it on Ubuntu 24.04, do this:

  1. Run ip route get <dst> and believe the output.
  2. Check ip rule before you stare at ip route and declare victory.
  3. Fix the root cause in the owning configuration layer (Netplan/NetworkManager/systemd-networkd), not with a heroic one-liner.
  4. On multi-homed systems, make route metrics and default route ownership explicit. Ambiguity is not redundancy.
  5. If IPv6 is enabled, make it fully correct—or intentionally off. Half-configured IPv6 is a recurring source of fast failures.

Do that, and “Network is unreachable” becomes less of a mystery and more of a helpful coworker: blunt, accurate, and occasionally annoying.
