Ubuntu 24.04: Netplan changes don’t apply — why (and how to make them stick)

You edit a Netplan YAML, run netplan apply, and nothing changes. Or worse: it changes for a minute, then flips back after reboot like the server is haunted.
Meanwhile you’ve got a maintenance window, an app team breathing down your neck, and your SSH session is one typo away from becoming a short story titled “I locked myself out.”

Ubuntu 24.04 didn’t “break networking.” It just made the existing truth harder to ignore: Netplan is a compiler, not a daemon, and multiple actors can write, override, or ignore the config you think you own.
If you want changes to stick, you need to identify who actually controls the interface, then make exactly one authority responsible.

Fast diagnosis playbook

When Netplan changes don’t apply, the fastest path is to stop guessing and answer three questions:
(1) Who renders networking? (2) Who overwrites the YAML? (3) What did Netplan generate?
You can do that in under five minutes if you stay disciplined.

1) Confirm the active renderer (NetworkManager or systemd-networkd)

  • Check which service is actually managing the interface (not which one you prefer).
  • If NetworkManager is managing it, editing renderer: networkd won’t magically win.

2) Check whether cloud-init is rewriting networking on boot

  • If you’re on a cloud image (or something derived from one), assume cloud-init has opinions.
  • Look for /etc/netplan/50-cloud-init.yaml and related logs.

3) Validate the YAML and inspect generated output

  • Netplan compiles YAML into backend configs.
  • If generated files don’t match what you expect, Netplan didn’t understand your YAML or didn’t read it.

4) Apply safely (don’t self-DDoS your own SSH)

  • Use netplan try for remote changes.
  • Then verify with ip addr, ip route, and DNS resolution.

If you do only one thing: stop editing random YAML files until you’ve identified which YAML file is authoritative and which renderer is active.
Otherwise you’re just re-enacting the same failure with more confidence.
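
If it helps, here is the whole triage as one copy-paste block; eno1 is a placeholder interface name, and nmcli only exists where NetworkManager is installed:

cr0x@server:~$ sudo netplan get              # merged intent Netplan actually sees
cr0x@server:~$ networkctl status eno1        # is systemd-networkd managing it?
cr0x@server:~$ nmcli dev status              # is NetworkManager managing it?
cr0x@server:~$ sudo cloud-init status --long # is cloud-init involved at all?
cr0x@server:~$ ls -l /etc/netplan/           # which YAML files exist, and in what order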

How Netplan actually works (and why your edits vanish)

Netplan is not your network manager. Netplan is a configuration abstraction layer and generator:
you declare intent in YAML under /etc/netplan/, and Netplan generates configuration for a backend renderer—most commonly
systemd-networkd on servers, and NetworkManager on desktops (and increasingly on “server-ish” installs where people want GUI tooling or Wi‑Fi).
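
To make that concrete, here is a minimal declaration of intent; the file name, interface name, and addresses are illustrative, not a drop-in config:

# /etc/netplan/01-static-eno1.yaml  (illustrative file name)
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      addresses:
        - 10.20.30.40/24
      routes:
        - to: default
          via: 10.20.30.1
      nameservers:
        addresses: [10.20.0.53, 10.20.0.54]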

That distinction matters because it creates three common failure modes:

  1. You edited the wrong file (a file exists, but it loses to another one once lexical ordering and merging are applied).
  2. The renderer isn’t what you think (NetworkManager is controlling the interface while you’re writing networkd-style intent).
  3. Another system rewrites it (cloud-init is the usual suspect, but not the only one).

And yes: you can run netplan apply and still see “nothing change” if the backend service rejects the generated config, the interface is unmanaged, or the YAML was ignored.
Netplan’s job is mostly done once it writes generated files and asks the backend to reload.

Reliability people love clear ownership. Netplan setups fail when ownership is fuzzy:
NetworkManager controls some devices, networkd controls others, cloud-init writes YAML, and admins “fix” it by editing whichever file looks most obvious.
That’s how you get a system where everything is configured, and nothing works.

Facts and historical context you can use in arguments

A few concrete points you can drop into a design review or a postmortem, because “it feels like” isn’t a root cause.

  1. Netplan first shipped in Ubuntu 17.10 as Canonical’s attempt to unify networking config across installers and environments.
  2. Netplan is declarative YAML, but the machine ultimately runs a renderer’s native config (networkd units or NetworkManager profiles).
  3. File order matters: Netplan reads YAML files in lexical order; later files can override earlier ones.
  4. Cloud images commonly use cloud-init to generate 50-cloud-init.yaml, and it can rewrite it during boot depending on datasource and settings.
  5. NetworkManager can manage “server” NICs now, especially where people want consistent tooling across laptops and servers.
  6. systemd-networkd is not NetworkManager: it’s lighter, more deterministic, and often preferred for headless servers—but it won’t touch interfaces it doesn’t manage.
  7. Ubuntu has moved DNS handling around over the years (resolvconf, then systemd-resolved, plus NM’s own behaviors). DNS “not applying” is often a resolver layering issue, not Netplan.
  8. “Apply” isn’t always “persist”: a runtime change may occur but be overwritten on reboot by cloud-init, automation, or a different renderer.

Typical symptoms and what they usually mean

Symptom: netplan apply runs clean, but the IP never changes

Usually: the interface is not managed by the renderer you’re generating for, or the YAML you edited is not the one being used.
Occasionally: the backend rejected the configuration and fell back without making noise in your terminal.

Symptom: it works until reboot, then reverts

Usually: cloud-init rewrote the file at boot, or a configuration management system (or image “first boot” script) re-applied an old config.
Less often: you edited a file that loses in lexical ordering to another YAML file.

Symptom: routes are wrong (can ping gateway but not the world)

Usually: default route wasn’t applied, a second interface is winning the default route, or policy routing is present (VPNs and overlay networks love this).

Symptom: DNS doesn’t match what you put in YAML

Usually: systemd-resolved is using a different upstream, NetworkManager is pushing DNS, or you’re looking at /etc/resolv.conf without checking if it’s a stub or symlink.

One quote to keep your hands steady during incident response:
“Hope is not a strategy.” — a traditional SRE saying

Joke #1: If your network config “applied” but nothing changed, congratulations—you’ve achieved enterprise-grade ambiguity.

Practical tasks: commands, output meaning, and decisions

These are real, runnable checks. Each one includes: what you run, what the output tells you, and what you decide next.
Do them in order when you’re on-call and your future self deserves mercy.

Task 1: List Netplan files and see what might override what

cr0x@server:~$ ls -l /etc/netplan/
total 8
-rw-r--r-- 1 root root  348 Nov  2 10:14 00-installer-config.yaml
-rw-r--r-- 1 root root  512 Nov  2 10:15 50-cloud-init.yaml

Meaning: Netplan reads YAML in lexical order. If both define the same interface, the later file (here 50-cloud-init.yaml) can win.
Decision: If you edited 00-installer-config.yaml expecting it to stick, you probably lost the override battle. Either modify the authoritative file or remove the source of overrides.

Task 2: See what Netplan thinks the merged config is

cr0x@server:~$ sudo netplan get
network:
  ethernets:
    eno1:
      dhcp4: false
      addresses:
      - 10.20.30.40/24
      routes:
      - to: default
        via: 10.20.30.1
      nameservers:
        addresses:
        - 10.20.0.53
        - 10.20.0.54
  version: 2
  renderer: networkd

Meaning: This is the effective configuration after parsing and merging files.
Decision: If netplan get doesn’t show your changes, Netplan isn’t reading your edit (wrong file, YAML error, or overwritten). Fix that before touching backends.

Task 3: Validate syntax (and catch invisible YAML footguns)

cr0x@server:~$ sudo netplan --debug generate
DEBUG:command generate: running ['/usr/libexec/netplan/generate']
DEBUG:Parsing /etc/netplan/00-installer-config.yaml
DEBUG:Parsing /etc/netplan/50-cloud-init.yaml
DEBUG:eno1: setting default backend to 1
DEBUG:Configuration is valid

Meaning: “Configuration is valid” is necessary, not sufficient. But if this fails, you stop and fix YAML.
Decision: If debug output parses a different file than you edited, you’re working on the wrong input.

Task 4: Inspect generated backend config (what actually gets applied)

cr0x@server:~$ sudo ls -l /run/systemd/network/
total 4
-rw-r--r-- 1 root root 682 Nov  2 10:16 10-netplan-eno1.network

Meaning: Netplan generated a networkd unit at runtime.
Decision: Open it. If it doesn’t contain your addresses/routes, Netplan isn’t generating what you think. If it does, the problem is backend application or interface control.

Task 5: Read the generated file to confirm the exact applied intent

cr0x@server:~$ sudo sed -n '1,120p' /run/systemd/network/10-netplan-eno1.network
[Match]
Name=eno1

[Network]
Address=10.20.30.40/24
DNS=10.20.0.53 10.20.0.54
Gateway=10.20.30.1

Meaning: If this matches, Netplan did its job. Now you’re debugging networkd (or conflicts).
Decision: If this does not match, go back to Netplan YAML and file ordering; don’t “fix” networkd directly unless you want to fight Netplan every reboot.

Task 6: Identify which service is managing the link

cr0x@server:~$ networkctl status eno1
● 2: eno1
                 Link File: /usr/lib/systemd/network/99-default.link
              Network File: /run/systemd/network/10-netplan-eno1.network
                      Type: ether
                     State: routable (configured)
              Online state: online
                   Address: 52:54:00:12:34:56

Meaning: networkd is managing eno1, and it considers it configured.
Decision: If state is “unmanaged” or “configuring,” you look for why networkd isn’t taking it (wrong match name, renderer mismatch, or NM grabbing it).

Task 7: Check if NetworkManager is also involved (and therefore capable of undoing you)

cr0x@server:~$ nmcli dev status
DEVICE  TYPE      STATE      CONNECTION
eno1    ethernet  unmanaged  --
lo      loopback  unmanaged  --

Meaning: NM isn’t managing the device. That’s good if you’re using networkd.
Decision: If NM shows connected with a profile on the interface, stop and decide whether NM or networkd is your intended owner—then make Netplan renderer match that.

Task 8: Verify actual runtime IP and route (don’t trust any single tool)

cr0x@server:~$ ip -br addr show eno1
eno1             UP             10.20.30.40/24
cr0x@server:~$ ip route show
default via 10.20.30.1 dev eno1 proto static
10.20.30.0/24 dev eno1 proto kernel scope link src 10.20.30.40

Meaning: This is ground truth. If it’s wrong, the change didn’t apply, or got overridden after applying.
Decision: If runtime is correct but connectivity is broken, move to DNS/firewall/routing upstream. If runtime is wrong, go back to renderer control and generated configs.

Task 9: Check systemd-networkd logs for rejection or flaps

cr0x@server:~$ sudo journalctl -u systemd-networkd -b --no-pager | tail -n 20
Nov 02 10:16:21 server systemd-networkd[812]: eno1: Configuring with /run/systemd/network/10-netplan-eno1.network.
Nov 02 10:16:21 server systemd-networkd[812]: eno1: Gained carrier
Nov 02 10:16:22 server systemd-networkd[812]: eno1: DHCPv4 client: disabled
Nov 02 10:16:22 server systemd-networkd[812]: eno1: Setting addresses
Nov 02 10:16:22 server systemd-networkd[812]: eno1: Setting routes
Nov 02 10:16:22 server systemd-networkd[812]: eno1: Configured

Meaning: You can see if networkd applied static config, tried DHCP, failed to set routes, etc.
Decision: Any errors here trump your assumptions. Fix the logged problem rather than rewriting YAML randomly.

Task 10: Check if cloud-init is controlling networking on this host

cr0x@server:~$ sudo cloud-init status --long
status: done
boot_status_code: enabled-by-generator
detail:
  DataSource: DataSourceNoCloud [seed=/var/lib/cloud/seed/nocloud-net]
  errors: []

Meaning: Cloud-init ran and has a datasource; depending on its configuration, it may also be managing network config.
Decision: If you see cloud-init and a 50-cloud-init.yaml, assume it can overwrite your work at next boot. Decide whether to disable cloud-init networking or feed it the right network config.

Task 11: See whether cloud-init has been writing Netplan YAML recently

cr0x@server:~$ sudo journalctl -u cloud-init -b --no-pager | grep -i netplan | tail -n 20
Nov 02 10:10:03 server cloud-init[601]: Generating network configuration from datasource
Nov 02 10:10:03 server cloud-init[601]: Writing to /etc/netplan/50-cloud-init.yaml
Nov 02 10:10:04 server cloud-init[601]: Running command ['netplan', 'generate']

Meaning: This is the smoking gun. If cloud-init writes that file, your manual edits can be overwritten on boot.
Decision: Either stop cloud-init from managing networking or make your changes via cloud-init’s network config so it writes the desired state.

Task 12: Confirm file permissions/ownership (Netplan will ignore unsafe files)

cr0x@server:~$ sudo stat -c '%A %U:%G %n' /etc/netplan/*.yaml
-rw------- root:root /etc/netplan/00-installer-config.yaml
-rw-r--r-- root:root /etc/netplan/50-cloud-init.yaml

Meaning: Netplan expects configs to be owned by root and not writable (ideally not even readable) by group/others; Ubuntu 24.04’s Netplan warns when permissions are “too open,” and unsafe files can be ignored.
Decision: If you see group-writable or world-writable, fix perms and re-run generate/apply.

Task 13: Apply safely with rollback (remote-friendly)

cr0x@server:~$ sudo netplan try
Do you want to keep these settings?

Press ENTER before the timeout to accept the new configuration

Changes will revert in 120 seconds

Meaning: Netplan stages a change and will revert if you lose connectivity or don’t confirm.
Decision: Use this when you’re SSH’d in and value your time. If it fails to bring the link up, revert happens automatically—then you debug without a long drive to the datacenter.
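
The default window is 120 seconds. If your validation takes longer than that, netplan try accepts a longer timeout; the 300 here is an arbitrary choice, not a recommendation:

cr0x@server:~$ sudo netplan try --timeout 300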

Task 14: If DNS is “not applying,” inspect resolved state (not just resolv.conf)

cr0x@server:~$ resolvectl status
Global
         Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub
Current DNS Server: 10.20.0.53
       DNS Servers: 10.20.0.53 10.20.0.54

Meaning: systemd-resolved is in charge and knows your intended DNS servers.
Decision: If Netplan shows one DNS set but resolvectl shows another, the renderer (or NM) may be pushing DNS differently. Decide who owns DNS, and configure that layer correctly.

Task 15: Confirm the interface name actually matches what you configured

cr0x@server:~$ ip -br link
lo               UNKNOWN        00:00:00:00:00:00
eno1             UP             52:54:00:12:34:56
enp6s0           DOWN           52:54:00:aa:bb:cc

Meaning: Predictable interface names exist to reduce ambiguity, but renaming still bites during VM template changes, hardware refreshes, and NIC reordering.
Decision: If you configured ens3 but the system has eno1, your YAML is a beautiful poem that nobody reads. Fix the match name.
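
One way to stop renames from biting again is to match the NIC by MAC address and pin a name you control. A sketch, with the MAC, the mgmt0 name, and the address all being placeholders:

# /etc/netplan/10-mgmt.yaml  (illustrative)
network:
  version: 2
  ethernets:
    mgmt0:
      match:
        macaddress: "52:54:00:12:34:56"
      set-name: mgmt0
      dhcp4: false
      addresses:
        - 10.20.30.40/24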

Cloud-init: the quiet config rewriter

Cloud-init exists to make images portable: boot anywhere, discover metadata, configure networking, inject SSH keys, and generally do the things you don’t want to bake into a golden image.
The problem is that once a machine stops being “ephemeral,” cloud-init’s helpfulness becomes indistinguishable from sabotage.

On Ubuntu 24.04 cloud images, it’s common to find /etc/netplan/50-cloud-init.yaml. This file is not sacred.
It’s generated. And depending on your cloud-init configuration and datasource, it may be regenerated at boot.
If you manually edit it, you are negotiating with an automated system that never sleeps and doesn’t read your ticket comments.

How to tell if cloud-init will overwrite networking

The quickest sign is log entries showing it writing 50-cloud-init.yaml.
The second sign is that your changes “work until reboot.”
The third sign is that your config management “keeps losing,” even though the YAML file on disk looks right immediately after your automation runs.

Making it stick: pick one of these strategies

  1. Disable cloud-init networking on long-lived hosts.
    This is common in private datacenters where instance metadata is not the source of truth.
  2. Feed cloud-init the desired network configuration so it generates the right Netplan YAML.
    This is best when you actually want cloud portability and consistent first-boot behavior.
  3. Move your netplan file ordering so your file wins (e.g., 99-local.yaml), but only if you’re sure cloud-init isn’t regenerating other keys you depend on.
    This is a tactical fix. It’s not a clean architecture.

Joke #2: Cloud-init is like a well-meaning coworker who “fixed” your spreadsheet by sorting it—technically correct, operationally catastrophic.

Disabling cloud-init networking (carefully)

If the machine is no longer meant to be dynamically network-configured by metadata, disable that feature explicitly.
The exact mechanism can vary by environment, but the goal is consistent: cloud-init should stop generating and applying network configuration.
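
On Ubuntu cloud images the usual switch is a small cloud-init drop-in; the file name below is a common convention, and the key is what matters:

# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
network: {config: disabled}

This stops cloud-init from generating network configuration on future boots. The existing 50-cloud-init.yaml stays on disk until you replace it, which is now safe to do because nothing will regenerate it.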

After changing cloud-init settings, reboot or rerun the relevant cloud-init stages only if you understand the impact. In production, prefer a controlled reboot window and console access.

Renderer wars: NetworkManager vs systemd-networkd

Netplan can target multiple renderers. The two you’ll meet in real life are systemd-networkd and NetworkManager.
Your job is not to pick the “best” one in abstract. Your job is to ensure exactly one of them owns each interface, predictably.

systemd-networkd: boring, deterministic, server-friendly

networkd tends to be preferred on headless servers because it’s simple, integrates with systemd, and does less “helpful” stuff.
When you declare static IPs, routes, and DNS in Netplan with renderer: networkd, you’re usually aiming for this world.

NetworkManager: flexible, user-friendly, and sometimes too clever

NM shines on desktops, Wi‑Fi, VPNs, and dynamic multi-network environments.
It also shows up on servers when teams standardize on nmcli and want consistent behavior across fleets.
That’s fine—until someone edits Netplan assuming networkd, while NM is actively managing the NIC.

The rule that prevents 80% of “Netplan didn’t apply” incidents

Pick one renderer for the host (or at least per interface), and enforce it.
Mixing is possible, but you need a reason, documentation, and tests. Otherwise it’s just entropy with YAML.
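
Enforcement is usually two small pieces of config. A sketch with illustrative file names: declare the renderer explicitly in Netplan, and if NetworkManager has to stay installed, tell it which interfaces to leave alone:

# /etc/netplan/01-network.yaml
network:
  version: 2
  renderer: networkd

# /etc/NetworkManager/conf.d/90-unmanaged.conf  (only needed if NM stays installed)
[keyfile]
unmanaged-devices=interface-name:eno1

Reload NetworkManager (nmcli general reload) after adding the unmanaged rule, or it keeps managing the device until the next restart.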

YAML traps, permissions, and why “valid” isn’t enough

YAML is friendly in the same way a butter knife is friendly: it won’t stab you, but it can still ruin your day if you treat it like a spoon.
With Netplan, the classic traps are subtle: wrong indentation, wrong key placement, deprecated keys, and files that Netplan silently ignores for safety.

Lexical ordering: the quiet override mechanism

Netplan reads /etc/netplan/*.yaml in lexical order. If two files define the same interface, later ones can override earlier declarations.
This is both a feature and a source of “I swear I changed it.”

The pragmatic approach for local overrides (when you must do it) is a clearly named tail file like 99-local.yaml.
But if you’re using automation, prefer a single authoritative file rather than a stack of overrides no one remembers.
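
If you do go the tail-file route, keep it tiny and obvious. A sketch that overrides only DNS for eno1 and leaves addressing and routes to the earlier files:

# /etc/netplan/99-local.yaml  (wins lexical ordering; only re-declared keys change)
network:
  version: 2
  ethernets:
    eno1:
      nameservers:
        addresses: [10.20.0.53, 10.20.0.54]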

Permissions: Netplan won’t trust a writable-by-others config

If someone made Netplan YAML group-writable to “help the team,” they accidentally created a security footgun.
Netplan mitigates that by ignoring unsafe configs. The result looks like: “apply did nothing,” with a side of confusion.

Deprecated keys and changed semantics

Netplan has evolved, and some keys that worked in older examples are now deprecated or discouraged: the old gateway4/gateway6 shortcuts, for instance, have given way to explicit routes entries (to: default) in modern Netplan.
The fix is not to memorize every change—it’s to rely on netplan --debug generate and inspect generated configs.

Routes, DNS, and “it applied but still doesn’t work”

The most frustrating category is when Netplan genuinely applied your config—IP address is correct, link is up—but the system still can’t reach what it needs.
That’s not Netplan failing; it’s your network intent being incomplete.

Default route conflicts

Multi-NIC servers are common now: one interface for management, one for storage, one for front-end traffic, sometimes overlays or VPNs.
If two interfaces both offer a default route (via DHCP or static routes), Linux will pick one based on metrics and timing.
That can look like “random connectivity,” especially after reboots.

The fix is to be explicit: define which interface owns the default route, and consider using route metrics or policy routing if your design needs multiple uplinks.
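
In Netplan terms that usually means metrics. A sketch with two uplinks and one deliberate winner; the addresses and metric values are illustrative:

network:
  version: 2
  ethernets:
    eno1:
      addresses: [10.20.30.40/24]
      routes:
        - to: default
          via: 10.20.30.1
          metric: 100      # lower metric wins
    enp6s0:
      addresses: [192.168.50.10/24]
      routes:
        - to: default
          via: 192.168.50.1
          metric: 200      # backup path, deliberately loses

ip route get 1.1.1.1 then tells you which path the kernel actually picked, which beats arguing about it.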

DNS layering issues

People still treat DNS as “a file at /etc/resolv.conf.” On Ubuntu 24.04, that mindset is how you lose an hour.
Often /etc/resolv.conf is a stub pointing at systemd-resolved. NetworkManager might inject DNS too.
Netplan might set DNS servers, but the final behavior depends on the renderer and resolver stack.

Diagnose with resolvectl. Then decide who owns DNS: networkd+resolved, or NetworkManager, or a dedicated resolver setup. Make one path authoritative.
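
If the decision is networkd plus systemd-resolved, the Netplan side is small; a sketch with illustrative servers and search domain:

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      nameservers:
        search: [corp.example]
        addresses: [10.20.0.53, 10.20.0.54]

After applying, resolvectl dns eno1 and resolvectl domain eno1 show what resolved believes per link; if that disagrees with the YAML, some other layer is pushing DNS and you’re back to the ownership question.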

Three corporate mini-stories from the trenches

1) Incident caused by a wrong assumption: “Netplan is the network manager”

A team inherited a fleet of Ubuntu servers that “used Netplan.” They wanted to move a service VLAN and decided to update a static IP block.
The change request was simple: edit YAML, run netplan apply, verify. They did exactly that.

On half the hosts, nothing changed. On the other half, it changed and then reverted after reboot during a coordinated maintenance.
The team assumed “netplan apply is broken in 24.04,” because that’s the only visible tool they touched.

The post-incident analysis was embarrassingly straightforward: the hosts were a mixture of installs.
Some used renderer: networkd. Others had NetworkManager installed and managing the NICs due to earlier “standardization” work.
And the cloud-derived images had cloud-init generating 50-cloud-init.yaml.

The remediation was not heroic engineering. It was governance: pick one renderer per host role, remove the other manager where appropriate, disable cloud-init networking on long-lived nodes, and add a CI check to fail if multiple netplan files define the same interface.
The next maintenance went quietly. Which is the best kind.

2) Optimization that backfired: “Let’s speed up provisioning by letting cloud-init do networking forever”

Another org leaned hard into fast provisioning. They used cloud-init networking not only for first boot, but as a permanent “source of truth” for network config.
It worked great while every instance was ephemeral. Then they started keeping some nodes around for months because data gravity is real.

A network change happened: DNS servers rotated, routes changed, and a new MTU was required for an overlay. The automation updated Netplan YAML directly in /etc/netplan.
It looked right on disk, but the next reboot reverted it. Sometimes it reverted only parts—like DNS—creating a fun split-brain where connectivity half-worked.

The “optimization” was that cloud-init would always keep networking aligned with metadata. In practice, metadata and reality diverged: some instances had stale metadata, some had none, and some had a custom datasource.
Cloud-init was faithfully applying nonsense.

They fixed it by changing the optimization target. Instead of “always let cloud-init own networking,” they made first-boot config the boundary:
cloud-init configured networking once, then networking ownership moved to their config management system. They also added a guardrail: if cloud-init attempts to rewrite Netplan after first boot, it logs loudly and fails the deployment pipeline.
Provisioning stayed fast, but operations stopped being a guessing game.

3) Boring but correct practice that saved the day: netplan try + console access + staged rollout

A company needed to re-IP a management network across a large environment. The changes were routine but risky: you can’t fix a locked-out server over SSH.
The lead SRE insisted on three “boring” practices: always use netplan try for remote changes, always have out-of-band console access tested beforehand, and roll changes in small batches.

The first batch revealed a subtle mismatch: some NICs were renamed due to a hardware refresh, so the Netplan config targeted the wrong interface name.
On those nodes, netplan try failed to establish connectivity and rolled back automatically.
That rollback saved time and prevented a pile of emergency console sessions.

Because the rollout was staged, they adjusted the inventory mapping, regenerated YAML with correct interface names, and continued.
No big incident. No heroics. Just the kind of process that never wins awards but keeps systems available.

The lesson was not “netplan try is neat.” The lesson was that deterministic rollback mechanisms and staged rollouts are reliability features, not optional ceremony.

Common mistakes: symptoms → root cause → fix

1) “I edited the YAML and nothing happened”

Symptom: netplan apply returns without errors; IP remains unchanged.
Root cause: You edited a file that’s overridden by another YAML (lexical order), or Netplan ignored the file due to unsafe permissions.
Fix: Run sudo netplan get to confirm the merged config; ensure root ownership and safe perms; consolidate to one authoritative file or move your override to 99-local.yaml.

2) “Works until reboot, then reverts”

Symptom: Correct after apply; wrong after restart.
Root cause: cloud-init rewrites 50-cloud-init.yaml or re-runs network generation on boot.
Fix: Stop cloud-init from managing networking on that host role, or move network intent into cloud-init configuration so it generates the correct YAML every boot (if that’s truly desired).

3) “Netplan shows networkd but NM is actually connected”

Symptom: YAML declares renderer: networkd, but nmcli dev status shows connected on the NIC.
Root cause: Renderer mismatch; NetworkManager is managing the interface and can override addressing/DNS/routes.
Fix: Decide ownership. Either switch Netplan to renderer: NetworkManager (and manage via NM), or configure NM to not manage that device and use networkd consistently.

4) “IP applied but no internet / no route”

Symptom: Interface has the correct address; cannot reach beyond subnet.
Root cause: Missing default route, wrong gateway, or competing default routes on another interface.
Fix: Use ip route show. Define the default route explicitly under routes:, adjust metrics, and ensure only one intended default route exists (unless you’re doing policy routing on purpose).

5) “DNS doesn’t change”

Symptom: YAML lists DNS servers; /etc/resolv.conf shows something else.
Root cause: systemd-resolved stub, NM DNS injection, or resolver config layered above Netplan.
Fix: Check resolvectl status and decide the DNS owner. Align Netplan renderer with that owner; don’t hand-edit /etc/resolv.conf and expect it to persist.

6) “Netplan apply breaks SSH”

Symptom: Apply drops the connection; system might or might not recover.
Root cause: Remote changes applied without rollback; wrong interface name; accidental removal of address/routes.
Fix: Use netplan try. Ensure console access exists. Make changes incrementally: address first, route second, DNS last.

Checklists / step-by-step plan

Checklist A: Make Netplan changes stick on a long-lived server

  1. List Netplan YAML files: identify overrides and generated files (ls -l /etc/netplan).
  2. Confirm the merged intent (sudo netplan get). If it’s not what you expect, stop.
  3. Verify renderer ownership: networkctl status IFACE and nmcli dev status.
  4. Check for cloud-init ownership: cloud-init status --long, logs for writing Netplan YAML.
  5. Decide your authority model:
    • Cloud-init owns first boot only, then CM owns it; or
    • Cloud-init owns networking always (rarely wise for long-lived nodes); or
    • No cloud-init networking, ever (common on bare metal).
  6. Consolidate YAML into one file per host role whenever possible.
  7. Set safe permissions: root-owned, not group/other-writable.
  8. Run sudo netplan --debug generate and inspect generated backend config.
  9. Apply with sudo netplan try if remote.
  10. Validate with ip -br addr, ip route, and resolvectl.

Checklist B: Safe change sequence for static IP migration

  1. Confirm interface names and MACs match your inventory (ip -br link).
  2. Add the new IP as secondary (if your environment supports it) before removing the old one, to avoid a connectivity gap; see the sketch after this checklist.
  3. Update routes, verify connectivity to gateway and next-hop networks.
  4. Update DNS last, because DNS failures are confusing and look like routing.
  5. Schedule a reboot test for persistence once changes are stable.
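
In YAML terms, step 2 looks something like this, assuming the environment tolerates two addresses on one NIC for the duration of the migration:

network:
  version: 2
  ethernets:
    eno1:
      addresses:
        - 10.20.30.40/24     # current address, kept during the migration
        - 10.30.40.50/24     # new address, added alongside it
      routes:
        - to: default
          via: 10.20.30.1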

Checklist C: When you suspect conflicts (NM vs networkd vs cloud-init)

  1. Prove who owns the interface (networkctl and nmcli).
  2. Prove who writes Netplan YAML (cloud-init logs, config management logs, file mtimes).
  3. Prove what Netplan generated (/run/systemd/network/ or NM connection profiles depending on renderer).
  4. Remove or disable the losing side. Partial coexistence is how you get intermittent failures.

FAQ

1) Why does netplan apply succeed but nothing changes?

Because Netplan can parse and generate configs, yet the backend might not manage that interface, or another YAML overrides yours.
Confirm with netplan get, then check networkctl/nmcli, then inspect generated files.

2) Which file should I edit: 00-installer-config.yaml or 50-cloud-init.yaml?

Neither by default. Edit the file that is authoritative for your host’s ownership model.
If cloud-init writes 50-cloud-init.yaml on boot, manual edits there are a temporary illusion.
Prefer a single stable file managed by your configuration system, or feed cloud-init the correct network config.

3) How do I know which renderer I’m using?

Don’t trust memory. Use netplan get for declared renderer, and verify actual control with networkctl status IFACE and nmcli dev status.
The interface owner is what matters.

4) Can I mix NetworkManager and systemd-networkd on the same host?

You can, but you probably shouldn’t unless you have a clear reason and tests.
Mixed ownership increases the odds of route/DNS confusion and “applied but reverted” behavior.
If you do mix, make sure each interface is clearly managed by exactly one service.

5) Why does DNS in Netplan not match /etc/resolv.conf?

Because /etc/resolv.conf is often a stub or symlink under systemd-resolved, and NetworkManager can inject DNS too.
Use resolvectl status to see the effective resolvers, then align your renderer and resolver stack.

6) Is netplan try safe to use over SSH?

Yes—that’s the point. It applies changes with a timeout rollback unless you confirm.
It’s not magic, but it’s the difference between a controlled change and an unscheduled trip to console access.

7) Why do my changes revert only sometimes?

Because multiple systems are racing: DHCP renewals, NetworkManager autoconnect, cloud-init boot stages, or even automation reapplying.
Intermittent behavior is a hallmark of conflicting ownership. Identify the writer and remove the conflict.

8) Where does Netplan write generated configuration?

For networkd, commonly under /run/systemd/network/ as 10-netplan-*.network.
For NetworkManager, it translates to NM connection profiles (managed by NM). The exact artifacts depend on renderer, so inspect what’s present and active on the system.

9) What’s the safest way to ensure persistence across reboot?

Ensure a single source of truth: one Netplan YAML file managed by root and your automation, one renderer owning the interface, and cloud-init networking either disabled or correctly configured.
Then reboot during a maintenance window and verify runtime state.

10) Should I hand-edit the generated files in /run/systemd/network/?

No, not as a fix. Those files are ephemeral and will be regenerated.
Edit the Netplan YAML (or cloud-init input) so the generated output is correct.
The only time to edit generated files is as a temporary diagnostic experiment when you understand you’re stepping outside the system.

Conclusion: next steps that won’t page you later

When Netplan changes don’t apply on Ubuntu 24.04, the system is usually doing exactly what it was told—just not by you, and not in the layer you’re editing.
The fix is rarely “try again.” The fix is to establish ownership and make the configuration pipeline boring.

Do this next, in order:

  1. Run sudo netplan get and confirm your intent is actually in the merged config.
  2. Confirm interface ownership with networkctl and nmcli; pick one renderer and enforce it.
  3. Check for cloud-init rewriting and decide whether cloud-init should own networking on this host.
  4. Inspect generated backend config under /run to prove what will be applied.
  5. Use netplan try for remote changes, then validate with ip/resolvectl.

Make it deterministic. Make it testable. Make it so the next person on-call doesn’t have to learn your infrastructure by reading log files at 03:00.
