You wanted a Linux environment on Windows that behaves like the servers you actually run. Then you typed
systemctl status in WSL and got the classic deadpan: “System has not been booted with systemd.”
Suddenly your “quick dev box” turns into a weird petri dish of half-working daemons, missing logs, and services
that only start if you remember to kick them.
The good news: systemd in WSL is now a real supported mode, not a pile of hacks. The bad news: it’s still not a
full VM, it still isn’t your production kernel, and some things fail in ways that feel personal.
This is the practical map—what works, what breaks, and how to debug it like you’re on call.
The state of play: systemd in WSL today
If you’re on WSL2 and a reasonably current WSL package, you can enable systemd per distribution using
/etc/wsl.conf. WSL will then launch systemd as PID 1 inside the distro, and the rest of your services
behave like they do on a normal Linux machine—mostly.
“Mostly” is doing work here. WSL is not a generic hypervisor. It’s a Linux kernel hosted by Windows with a tight
integration layer. Some plumbing is different by design: networking is virtualized; mounts are translated; the
lifecycle is session-driven; Windows is still holding the keys to power management and host networking.
My opinionated guidance:
- Use systemd in WSL when you need realistic service management (Docker Engine, podman, sshd, timers, journald, dbus-mediated services).
- Don't use WSL as production (yes, people try). Treat it as a dev/CI environment with production-like behavior, not a production substrate.
- Keep a boundary between "Linux services" and "Windows host responsibilities". When you blur that boundary, troubleshooting becomes performance art.
How systemd actually boots inside WSL (and why that matters)
Traditionally, WSL launched your shell (or configured command) directly; there was no init system. That meant:
no PID 1 doing service supervision, no consistent boot target, and no journald-managed logs. If a daemon died,
it died quietly or left behind a stale PID file like a tiny tombstone.
With systemd enabled, WSL starts systemd as PID 1. D-Bus comes along for the ride, and systemd units can run
normally. You now have a coherent way to start services, define dependencies, set restart policies, and use
timers instead of “I put it in my shell rc file and hoped.”
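For concreteness, here is a minimal sketch of what a supervised unit looks like. The service name dev-agent and its binary path are placeholders, not from any real setup; the file is written to the current directory so you can inspect it before installing it under /etc/systemd/system.

```shell
# Hypothetical example: "dev-agent" and its ExecStart path are placeholders.
# Written to the current directory for inspection; on a real system this file
# belongs in /etc/systemd/system/.
cat > dev-agent.service <<'EOF'
[Unit]
Description=Example dev agent (placeholder)
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/dev-agent
Restart=on-failure
RestartSec=2s

[Install]
WantedBy=multi-user.target
EOF
echo "wrote ./dev-agent.service"
```

After copying it into place, systemctl daemon-reload && systemctl enable --now dev-agent wires it into the WSL instance's start, and Restart=on-failure replaces "kick it when you remember."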
But WSL’s “boot” is not a hardware boot. It’s a user-space start inside a managed environment. Two consequences
show up immediately:
- Lifecycle is tied to the WSL instance. When WSL stops (or is terminated), your services stop. When it starts, unit ordering happens, but it's not the same as a full server boot with device discovery.
- Integration layers can override Linux defaults. DNS resolution, network interfaces, and mounts can behave differently than on a normal distribution install.
The practical lesson: treat systemd in WSL as “real Linux service management running in a constrained host
container,” not as “I installed Ubuntu.” This framing prevents a lot of wrong assumptions.
Interesting facts and historical context
These are short, concrete points that explain why this topic is still spicy:
- WSL1 didn't use a Linux kernel at all; it translated Linux syscalls to Windows NT kernel calls. That model was clever, but it limited compatibility for kernel-dependent workloads.
- WSL2 switched to a real Linux kernel running in a lightweight VM. That's why containers and cgroups became viable.
- For years, systemd wasn't supported in WSL, partly because WSL did not present itself as a traditional booted system with PID 1 ownership and device management.
- Early "systemd on WSL" attempts relied on wrapper scripts and manual PID 1 tricks. They worked until they didn't, and they broke in wonderfully confusing ways.
- Linux distros increasingly expect systemd. Even when a service can run without it, packaging and service integration often assume systemd units exist.
- systemd is not just "an init system"; it's a suite (journald, resolved, networkd, logind, timers, unit dependencies). In WSL, you might want exactly some of these and not others.
- cgroup v2 is the modern default in many environments. That matters for Docker, podman, and any resource control; WSL2 has steadily improved cgroup support.
- WSL networking historically used NAT with a changing VM IP. That's fine for outbound calls and painful for inbound services unless you plan for it.
- WSL regenerates parts of your Linux config (notably /etc/resolv.conf) unless you tell it not to, which collides with systemd-resolved expectations.
What works well (enough) with systemd in WSL
Service management that behaves like Linux
The headline benefit is obvious: systemctl works. Units can start at “boot” (WSL instance start),
restart on failure, and express dependencies. That means:
- sshd can be a service, not a manual command you forget.
- cron-like work can be a systemd timer with logging and failure handling.
- dbus-using services behave like they do on servers.
journald makes debugging less superstitious
Without journald, you end up scraping random files under /var/log and wondering why nothing is there.
With systemd, you can query structured logs per unit, per boot, and per time range. It’s not magic, but it’s
predictable—my favorite flavor of magic.
Docker Engine and friends can be more “native-Linux”
Many teams use Docker Desktop with WSL integration, which can work fine without systemd. But if you want to run
Docker Engine inside the distro like you would on a Linux host, systemd support removes a lot of papercuts:
unit files, socket activation patterns, and predictable startup.
Systemd timers are a sane alternative to “backgrounding”
WSL sessions come and go. Background processes that you started manually can disappear, or keep running and
surprise you later. Timers + services give you idempotent, inspectable scheduling.
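A sketch of that pattern, with hypothetical names (cache-prune) and paths; the files are written locally so you can review them before installing under /etc/systemd/system. Note Persistent=true: timers don't fire while the WSL instance is stopped, and this setting tells systemd to run a missed job at the next start.

```shell
# Hypothetical timer/service pair; "cache-prune" and the ExecStart command are
# placeholders. Written locally for review; install under /etc/systemd/system/
# and activate with: systemctl enable --now cache-prune.timer
cat > cache-prune.service <<'EOF'
[Unit]
Description=Prune stale build caches (placeholder)

[Service]
Type=oneshot
ExecStart=/usr/bin/find /var/cache/example -type f -mtime +7 -delete
EOF

cat > cache-prune.timer <<'EOF'
[Unit]
Description=Run cache prune daily

[Timer]
OnCalendar=daily
# WSL instances stop and start; Persistent=true runs a missed job at next start.
Persistent=true

[Install]
WantedBy=timers.target
EOF
```

You enable the timer, not the service; the service is just the payload, and journalctl -u cache-prune.service shows every run.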
What still breaks or stays weird
DNS and resolv.conf: the perennial footgun
WSL likes to manage DNS for you. systemd-resolved likes to manage DNS for you. Put them together and you can get:
missing name resolution, different behavior between host and WSL, or “it works until I reconnect to Wi‑Fi.”
The fix is usually not complicated, but it requires choosing who owns the configuration. Either let WSL generate
/etc/resolv.conf and don’t run systemd-resolved, or disable WSL’s generation and wire it correctly
for resolved. Indecision is the worst configuration management tool.
Networking for inbound services is still not “server-like”
Inbound networking depends on how Windows exposes ports to the WSL VM. Your service might be listening on
127.0.0.1 inside WSL and unreachable from Windows, or vice versa. Sometimes the “right” bind address
is counterintuitive. Sometimes it changes with updates.
Mount semantics: /mnt/c is convenient and slow
The Windows filesystem mounted under /mnt/c is great for interoperability and terrible for workloads
that want Linux filesystem semantics and speed (metadata-heavy builds, databases, large Git repos). systemd doesn’t
fix that; it just makes the pain more organized.
Not all kernel knobs behave like your data center
WSL’s kernel is real, but it’s managed. Some modules, sysctls, and device behaviors may differ from your fleet.
If you’re testing anything kernel-sensitive, validate on a real Linux VM too.
Joke #1: WSL with systemd is like giving your dev laptop a pager—suddenly it has opinions about uptime.
Practical tasks: commands, outputs, and decisions
The fastest way to understand whether your setup is “healthy enough” is to interrogate it like you would a
production host. Below are real commands, what realistic output looks like, and the decision you make.
Task 1: Confirm you’re on WSL2 (not WSL1)
cr0x@server:~$ uname -r
5.15.133.1-microsoft-standard-WSL2
What it means: If the kernel string includes microsoft-standard-WSL2, you’re on WSL2.
Decision: If you’re on WSL1, stop. systemd support is for WSL2. Convert the distro to WSL2 before continuing.
Task 2: Verify PID 1 is systemd
cr0x@server:~$ ps -p 1 -o pid,comm,args
PID COMMAND COMMAND
1 systemd /sbin/init
What it means: PID 1 being systemd is the whole ballgame.
Decision: If PID 1 is init or bash, systemd is not enabled (or WSL didn’t restart cleanly).
Task 3: Check WSL systemd enablement configuration
cr0x@server:~$ cat /etc/wsl.conf
[boot]
systemd=true
What it means: This is the per-distro switch.
Decision: If missing/false, set it, then restart the WSL instance from Windows (or terminate that distro) so it takes effect.
Task 4: Prove systemd is functional (not just present)
cr0x@server:~$ systemctl is-system-running
running
What it means: running is healthy. You might also see degraded if a unit failed.
Decision: If degraded, immediately list failed units and fix the first failure that matters.
Task 5: List failed units and pick the right fight
cr0x@server:~$ systemctl --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● systemd-resolved.service loaded failed failed Network Name Resolution
● snapd.service loaded failed failed Snap Daemon
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state.
SUB = The low-level unit activation state, often a more detailed state.
2 loaded units listed.
What it means: Some defaults fail in WSL, depending on distro and config.
Decision: If the failed unit is irrelevant (like snapd in many WSL dev setups), mask it. If it's systemd-resolved, fix DNS ownership first.
Task 6: Inspect why a unit failed (systemd’s perspective)
cr0x@server:~$ systemctl status systemd-resolved.service --no-pager -l
● systemd-resolved.service - Network Name Resolution
Loaded: loaded (/lib/systemd/system/systemd-resolved.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Mon 2026-02-05 09:12:17 UTC; 25s ago
Docs: man:systemd-resolved.service(8)
Process: 412 ExecStart=/lib/systemd/systemd-resolved (code=exited, status=1/FAILURE)
Status: "Failed to read /etc/resolv.conf: Too many levels of symbolic links"
What it means: A classic loop: resolved wants a symlink to stub resolv.conf, WSL may rewrite it, and you get a symlink circus.
Decision: Decide who owns /etc/resolv.conf. Then implement cleanly and restart the unit.
Task 7: Check who owns DNS right now
cr0x@server:~$ ls -l /etc/resolv.conf
-rw-r--r-- 1 root root 116 Feb 5 09:10 /etc/resolv.conf
What it means: A regular file typically suggests WSL generated it (or you pinned it manually).
Decision: If you want systemd-resolved, you generally want /etc/resolv.conf to be a symlink to resolved’s stub or generated file—without WSL rewriting it.
Task 8: Confirm journald is capturing logs (and not silently useless)
cr0x@server:~$ journalctl -b -n 10 --no-pager
Feb 05 09:12:01 server systemd[1]: Starting Network Name Resolution...
Feb 05 09:12:17 server systemd[1]: systemd-resolved.service: Main process exited, code=exited, status=1/FAILURE
Feb 05 09:12:17 server systemd[1]: systemd-resolved.service: Failed with result 'exit-code'.
Feb 05 09:12:17 server systemd[1]: Failed to start Network Name Resolution.
Feb 05 09:12:20 server systemd[1]: Starting OpenSSH Daemon...
Feb 05 09:12:20 server sshd[522]: Server listening on 0.0.0.0 port 22.
Feb 05 09:12:20 server systemd[1]: Started OpenSSH Daemon.
What it means: You have boot-scoped logs. That's your source of truth now.
Decision: If journald is empty or errors, fix logging before chasing service bugs. No logs means you’re debugging by vibe.
Task 9: Check dbus health (lots of things quietly depend on it)
cr0x@server:~$ systemctl status dbus --no-pager
● dbus.service - D-Bus System Message Bus
Loaded: loaded (/lib/systemd/system/dbus.service; static)
Active: active (running) since Mon 2026-02-05 09:12:00 UTC; 1min 2s ago
TriggeredBy: ● dbus.socket
Docs: man:dbus-daemon(1)
Main PID: 210 (dbus-daemon)
Tasks: 1 (limit: 18947)
Memory: 2.3M
CGroup: /system.slice/dbus.service
└─210 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
What it means: dbus is running, socket activation is working.
Decision: If dbus is dead, expect “weird” everywhere: hostnamectl, timedatectl, resolved, logind-adjacent tools.
Task 10: Verify cgroup version (containers care)
cr0x@server:~$ stat -fc %T /sys/fs/cgroup
cgroup2fs
What it means: cgroup2fs means cgroup v2. Older setups might show tmpfs and require different container config.
Decision: If you run Docker Engine or podman, align your cgroup driver expectations with what’s available.
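If you want that check in script form, here is a small sketch that classifies the cgroup mode the same way Task 10 does:

```shell
# Classify the cgroup mode from the filesystem type mounted at /sys/fs/cgroup.
fs_type=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo missing)
case "$fs_type" in
  cgroup2fs) cgroup_mode="v2" ;;            # unified hierarchy: modern default
  tmpfs)     cgroup_mode="v1-or-hybrid" ;;  # legacy/hybrid layout
  *)         cgroup_mode="unknown" ;;
esac
echo "cgroup mode: $cgroup_mode (fs: $fs_type)"
```

A bootstrap script can branch on this instead of assuming v2 and failing later inside the container runtime.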
Task 11: If Docker is installed, check the daemon unit
cr0x@server:~$ systemctl status docker --no-pager
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; preset: enabled)
Active: active (running) since Mon 2026-02-05 09:12:45 UTC; 35s ago
TriggeredBy: ● docker.socket
Docs: man:dockerd(8)
Main PID: 900 (dockerd)
Tasks: 18
Memory: 78.4M
CGroup: /system.slice/docker.service
└─900 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
What it means: Running daemon, socket activation, normal-looking cgroup path.
Decision: If Docker fails, check cgroups, iptables/nft assumptions, and whether you’re colliding with Docker Desktop integration.
Task 12: Check what’s listening and where (127.0.0.1 vs 0.0.0.0)
cr0x@server:~$ ss -lntp
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=522,fd=3))
LISTEN 0 4096 127.0.0.1:5432 0.0.0.0:* users:(("postgres",pid=1203,fd=6))
What it means: sshd is reachable on all interfaces inside WSL; postgres is bound to loopback only.
Decision: If Windows can’t reach a service, first check bind addresses. “It’s running” is not “it’s reachable.”
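A sketch that automates this triage: it reads ss -lntp-style output on stdin and flags listeners bound only to loopback, which are often invisible from the Windows side. The canned sample lines below are illustrative; real usage is ss -lntp | flag_loopback.

```shell
# Flag listeners bound only to loopback; pipe `ss -lntp` output into this.
# awk reads the Local Address:Port column (field 4) and reports 127.0.0.1
# and [::1] binds along with the owning process column.
flag_loopback() {
  awk 'NR > 1 && $4 ~ /^(127\.0\.0\.1|\[::1\]):/ {
    print "loopback-only:", $4, "-", $NF
  }'
}

# Canned example (illustrative lines in ss output format):
printf '%s\n' \
  'State  Recv-Q Send-Q Local Address:Port Peer Address:Port Process' \
  'LISTEN 0      4096   0.0.0.0:22         0.0.0.0:*         users:(("sshd",pid=522,fd=3))' \
  'LISTEN 0      4096   127.0.0.1:5432     0.0.0.0:*         users:(("postgres",pid=1203,fd=6))' \
  | flag_loopback
```

On the sample above, only the postgres line is flagged; sshd on 0.0.0.0 passes silently.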
Task 13: Confirm mount type for your project directory
cr0x@server:~$ df -T .
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sdd ext4 263174212 18234568 231513928 8% /
What it means: If you’re on ext4 under the Linux filesystem, performance is usually decent.
Decision: If your project lives under /mnt/c (typically 9p or similar), move it into the distro filesystem for heavy builds/databases.
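A small helper sketch for the same decision: classify the filesystem under a path and warn when it's a Windows-backed mount. The type names covered are the common ones; adjust for your distro.

```shell
# Warn when a path lives on a Windows-backed mount (9p/drvfs under /mnt/*).
# Usage: check_mount /path/to/project
check_mount() {
  fs=$(df -T "$1" 2>/dev/null | awk 'NR==2 {print $2}')
  case "$fs" in
    9p|v9fs|drvfs)  echo "$1: $fs (Windows-backed: expect slow metadata-heavy I/O)" ;;
    ext4|btrfs|xfs) echo "$1: $fs (native Linux filesystem)" ;;
    *)              echo "$1: $fs (check manually)" ;;
  esac
}
check_mount .
```

Run it against your repo root before blaming the build system for slowness.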
Task 14: Identify if WSL is rewriting resolv.conf behind your back
cr0x@server:~$ grep -n "generateResolvConf" /etc/wsl.conf || true
What it means: No output usually means default behavior (WSL may generate resolv.conf).
Decision: If you want deterministic DNS with systemd-resolved, explicitly set generateResolvConf=false and manage the file/symlink yourself.
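One way to implement the "resolved owns DNS" choice, sketched with a dry-run prefix (ROOT) so it can be rehearsed outside WSL. On a real distro you would run it as root with ROOT set to empty, then restart the instance (wsl.exe --terminate <distro>) so wsl.conf takes effect.

```shell
# Wire DNS so systemd-resolved owns /etc/resolv.conf and WSL stops regenerating it.
# ROOT is a dry-run prefix for rehearsal; set ROOT="" on a real WSL distro (as root).
ROOT="${ROOT:-./dryrun}"
mkdir -p "$ROOT/etc"

cat > "$ROOT/etc/wsl.conf" <<'EOF'
[boot]
systemd=true

[network]
generateResolvConf=false
EOF

# Point resolv.conf at resolved's stub file (the standard target on systemd distros).
ln -sf /run/systemd/resolve/stub-resolv.conf "$ROOT/etc/resolv.conf"
```

The important part is that both halves change together: disabling generation without the symlink, or vice versa, recreates the half-owned state this article warns about.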
Task 15: Check boot performance and blame the slow units
cr0x@server:~$ systemd-analyze blame | head
3.214s snapd.service
1.122s docker.service
812ms systemd-resolved.service
402ms ssh.service
211ms systemd-journald.service
What it means: You have unit-level timing. This is gold when “WSL feels slow today.”
Decision: Mask or disable units that are irrelevant in WSL and slow down every start.
Task 16: Prove a timer works and produces logs
cr0x@server:~$ systemctl list-timers --all --no-pager | head
NEXT LEFT LAST PASSED UNIT ACTIVATES
Mon 2026-02-05 09:20:00 UTC 2min 10s Mon 2026-02-05 09:15:00 UTC 2min ago apt-daily.timer apt-daily.service
Mon 2026-02-05 10:00:00 UTC 42min Mon 2026-02-05 09:00:00 UTC 17min ago man-db.timer man-db.service
What it means: Timers are scheduled and tracked.
Decision: If you rely on scheduled tasks, prefer timers over shell hacks. If timers spam logs or burn CPU, adjust or disable them.
Fast diagnosis playbook
When systemd-in-WSL is “broken,” it rarely fails in a novel way. Most outages are the same small set of failure
modes wearing different hats. Here’s the order that gets you to the bottleneck fast.
First: Is systemd actually PID 1 and is the system running?
- ps -p 1 -o comm,args → if not systemd, stop and fix enablement/restart WSL.
- systemctl is-system-running → if degraded, list failed units and address the first relevant one.
Second: Are the logs usable?
- journalctl -b -n 50 → if empty, you don't have observability; fix journald/storage permissions or your expectations.
- journalctl -u <service> -b → if the service is failing, prefer this over random log files.
Third: Is it DNS/networking or is it the service itself?
- resolvectl status (if using resolved), or inspect /etc/resolv.conf.
- ss -lntp to confirm bind addresses and listeners.
- ip addr + ip route to confirm interfaces and a default route exist.
Fourth: Is performance being killed by mounts or a slow unit?
- df -T and path sanity: don't run databases on /mnt/c if you want joy.
- systemd-analyze blame to identify slow startup units; disable what you don't need.
Fifth: Container workloads—confirm cgroups and daemon assumptions
- stat -fc %T /sys/fs/cgroup to confirm cgroup v2.
- systemctl status docker and journalctl -u docker -b for daemon errors.
The goal is not to “try random fixes.” The goal is to classify the problem in under five minutes.
Three corporate mini-stories (all true-to-life, none traceable)
Mini-story 1: The incident caused by a wrong assumption
A finance-adjacent team standardized on WSL for development because it reduced laptop variance. They added systemd
so services could start “like in prod.” It worked for months. Then a new developer joined, pulled the repo, ran
the bootstrap, and their local service mesh never came up correctly.
The wrong assumption was subtle: they assumed WSL “boot” was equivalent to a VM boot and that systemd units would
be running before the developer’s first shell prompt reliably. In practice, WSL instance startup is triggered by
launching the distro, and the timing depends on what else the host is doing. The bootstrap script immediately ran
a client command against a service that was still starting, failed once, and then cached the failure state in a
local config.
The symptom looked like a network problem: timeouts, refused connections, and one person insisting “it works on my
machine,” which is both a statement and a lifestyle. The root cause was just ordering: services were fine but not
ready when queried, and the script had no retries/backoff.
The fix was boring: add readiness checks (systemd After= and Wants= where appropriate, plus
a client-side retry loop with timeouts), and stop treating first-boot behavior as deterministic. They also added
a single “health” command that checked critical units and printed actionable output.
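That client-side retry loop can be as small as this sketch; the command you retry is up to you.

```shell
# Retry a command with capped exponential backoff instead of failing once
# and caching the failure. Usage: retry <attempts> <command...>
retry() {
  attempts=$1; shift
  delay=1
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$attempts" ]; then
      sleep "$delay"
      delay=$(( delay * 2 ))
      if [ "$delay" -gt 16 ]; then delay=16; fi
    fi
    i=$(( i + 1 ))
  done
  echo "retry: giving up after $attempts attempts: $*" >&2
  return 1
}
```

Example use in a bootstrap script: retry 5 systemctl is-active --quiet docker. The backoff cap keeps total wait bounded instead of stalling the whole setup.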
Mini-story 2: The optimization that backfired
Another team was chasing faster builds. They noticed that keeping source under /mnt/c made it easier to
edit from Windows tools, but builds were slow. Someone proposed an “optimization”: symlink build outputs and caches
back to Windows storage so they’d survive WSL resets and be shared across distros.
It sped up a few small builds and then detonated in the way only I/O optimizations can: the largest builds started
failing intermittently. They saw corrupted artifact caches, spurious permission errors, and file watcher storms.
systemd wasn’t the direct cause, but systemd made it more visible because background services were now reliably
running—indexers, language servers, and file watchers were all active and amplifying the filesystem’s worst traits.
The backfire was classic: the Windows-mounted filesystem had different semantics and performance characteristics.
Metadata operations were slower, file locks behaved differently, and the entire pipeline was now sensitive to
timing. The team spent a week “tuning” before admitting the plan was flawed.
The fix was to keep heavy build caches and repos inside the Linux filesystem (ext4 in the WSL virtual disk), then
expose only the final artifacts back to Windows if needed. They kept an explicit “sync” step rather than trying
to make two worlds share a hot directory structure.
Mini-story 3: The boring but correct practice that saved the day
A platform group maintained a WSL-based dev environment for dozens of engineers. The setup included systemd,
Docker, a local registry mirror, and a few internal agents. This was the kind of environment that works until one
Tuesday morning when everyone updates something and chaos blooms.
Their boring practice: they kept a minimal, versioned “golden” diagnosis script that ran the same checks every
time: PID 1, failed units, last boot logs, DNS ownership, listening sockets, disk type for working directory, and
container daemon health. It printed simple pass/fail lines and suggested the next command.
When a WSL update changed networking behavior enough to break inbound connections for a subset of laptops, the
group didn’t spend hours arguing about whose machine was cursed. They ran the script, saw that services were
listening on 127.0.0.1, and Windows could no longer reach them. The pattern was consistent.
Because they had a baseline and logs, they could craft a targeted fix: adjust bind addresses for dev services,
update firewall rules on the host, and document a single workaround for affected users while waiting for a more
permanent Windows-side change. It wasn’t glamorous, but it prevented a slow-motion productivity outage.
Common mistakes: symptom → root cause → fix
1) “System has not been booted with systemd”
Symptom: systemctl errors, units can’t be managed.
Root cause: systemd not enabled, or WSL instance not restarted after config change.
Fix: Set [boot] systemd=true in /etc/wsl.conf and restart the WSL instance. Confirm PID 1 is systemd.
2) DNS works once, then breaks after network changes
Symptom: apt update fails, internal names stop resolving after VPN/Wi‑Fi changes.
Root cause: Ownership conflict: WSL auto-generating /etc/resolv.conf while systemd-resolved expects control (or vice versa).
Fix: Choose one owner. If using resolved, set generateResolvConf=false and wire /etc/resolv.conf to resolved’s expected file without loops. If not using resolved, disable/mask it.
3) Services run but are unreachable from Windows
Symptom: curl localhost:PORT works inside WSL but not from Windows, or the reverse.
Root cause: Bind address mismatch (127.0.0.1 vs 0.0.0.0), NAT/port forwarding assumptions, or host firewall.
Fix: Use ss -lntp to see where it’s bound. Bind dev services intentionally. Don’t guess. If needed, adjust Windows firewall rules.
4) WSL feels slow “after enabling systemd”
Symptom: Distro startup feels heavier; commands lag; CPU spikes on launch.
Root cause: Unneeded services auto-starting (snapd, apt timers, indexers), or filesystem choice (working under /mnt/c).
Fix: Use systemd-analyze blame and systemctl --failed. Disable/mask irrelevant units. Move heavy projects into the Linux filesystem.
5) Docker inside WSL fails oddly (iptables, cgroups, permissions)
Symptom: Docker daemon won’t start, containers can’t create networks, errors about cgroups or nft.
Root cause: Mismatched expectations: cgroup v2 vs v1, firewall backend differences, or conflict with Docker Desktop integration.
Fix: Confirm cgroup mode, inspect journalctl -u docker -b, and pick one model: Docker Desktop-managed or distro-managed. Mixing increases entropy.
6) systemd-resolved fails with symlink loops
Symptom: resolved won’t start; errors mention /etc/resolv.conf symlink depth.
Root cause: Incorrect symlink chain, often due to WSL regeneration or manual edits layered on top of distro defaults.
Fix: Remove the loop and rebuild the intended state. Ensure exactly one owner of the file and that WSL isn’t regenerating it against your wishes.
7) Timers run when you don’t want them to
Symptom: Random CPU/network usage; apt-daily wakes up during demos; fans spin like a jet.
Root cause: Default distro timers aren’t tuned for “laptop dev environment.”
Fix: Disable or reschedule timers you don’t need. Validate with systemctl list-timers and check logs for actual impact.
Checklists / step-by-step plan
Step-by-step: enabling systemd safely
- Confirm WSL2: run uname -r and look for the WSL2 kernel string.
- Enable systemd in the distro by editing /etc/wsl.conf:
cr0x@server:~$ sudo sh -c 'cat > /etc/wsl.conf <<EOF
[boot]
systemd=true
EOF'
- Restart the WSL instance: from Windows, terminate the distro or restart WSL so the change applies.
- Verify PID 1: ps -p 1 -o comm,args.
- Check for degraded state: systemctl is-system-running, then systemctl --failed.
- Make DNS a deliberate choice: decide whether to use systemd-resolved. If you don't need it, mask it and move on:
cr0x@server:~$ sudo systemctl mask systemd-resolved.service
Created symlink /etc/systemd/system/systemd-resolved.service → /dev/null.
- Disable slow/unneeded units: use systemd-analyze blame and disable the offenders you don't use.
- Move performance-sensitive work into ext4: keep repos/databases under the Linux filesystem, not /mnt/c.
Checklist: production-like dev setup (the sane version)
- Critical services have systemd units with restart policies.
- Logs are queried via journalctl, not "whatever file exists."
- DNS ownership is explicit and documented.
- Inbound networking assumptions are tested (from Windows and inside WSL).
- Timers are pruned to what you actually want.
- Repos and databases live on the Linux filesystem.
- Container toolchain has one owner (Docker Desktop integration or in-distro Docker), not both.
Checklist: when you should not bother with systemd in WSL
- You just need a shell, compilers, and a few CLI tools.
- Your workflow is “run one foreground process” and kill it when done.
- You rely on Windows-native services and only use WSL for scripting.
FAQ
1) Do I need systemd in WSL to run Docker?
Not strictly. Docker Desktop can integrate with WSL without you managing systemd.
But if you want a Linux-like Docker Engine service inside the distro with standard unit management, systemd helps.
Pick one model and commit to it.
2) Why does systemctl work but my service still doesn’t start on WSL “boot”?
WSL “boot” is “instance start,” and timing varies. Ensure your unit has the right dependencies, and use readiness checks.
Then verify systemctl is-enabled and inspect journalctl -u your.service -b.
3) Should I run systemd-resolved in WSL?
Only if you need it and you’re willing to own DNS configuration explicitly. Many dev environments are fine letting WSL manage
/etc/resolv.conf and masking resolved. The worst option is running both “half on.”
4) Why are my logs missing under /var/log?
With systemd, many services log to journald, not to flat files. Use journalctl. If you want file logs, configure the service
or journald forwarding deliberately.
5) Is systemd in WSL stable enough for daily work?
Yes for dev workflows, especially when you keep configs simple and avoid filesystem and DNS footguns.
No if your definition of “stable” includes “behaves exactly like our production kernel and network.”
6) Why is my service listening but unreachable from Windows?
Usually it’s binding to 127.0.0.1 inside WSL, or Windows firewall/port forwarding behavior changed. Start with
ss -lntp and bind deliberately. Confirm from both sides.
7) Will enabling systemd slow down WSL?
It can, if your distro enables a bunch of services/timers you don’t need. The fix is straightforward: measure with
systemd-analyze blame, then disable/mask the noise.
8) Can I use systemd timers instead of cron in WSL?
Yes, and you probably should. Timers have better logging and dependency control. Just remember that if the WSL instance isn’t running,
timers won’t fire until it is.
9) What’s the single most common systemd-in-WSL failure mode?
DNS ownership conflicts, followed by slow or failing “extra” units (snapd, auto-updaters) that weren’t designed for a WSL lifecycle.
10) If this is for dev, why be so strict about diagnostics?
Because dev downtime is real downtime—just paid in engineers instead of customers. Also because a clean diagnostic path prevents the team
from cargo-culting fixes that break later.
Next steps you can actually take
systemd in WSL is now a legitimate tool: you can manage services, get coherent logs, and run a more production-shaped dev environment
without kludges. But WSL still has its own physics. Treat it like a constrained Linux host with Windows as the platform—not as a tiny
server that just happens to live in a laptop.
Practical next steps:
- Enable systemd, restart WSL, and verify PID 1 + systemctl is-system-running.
- Kill the irrelevant services: disable/mask what doesn't belong in your dev environment.
- Pick a DNS owner and enforce it. Half measures create the hardest incidents.
- Move performance-sensitive repos/databases off /mnt/c and into ext4.
- Write a tiny team "health check" script that runs the same 8–10 commands every time.
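A starting-point sketch for that health-check script. The specific checks and hint strings are examples, but the shape (one PASS/FAIL line per check, with a suggested next command) is the part worth copying; it degrades to SKIP where a tool is unavailable.

```shell
# Minimal health-check sketch. Checks and hints below are examples; tailor
# them to your environment. Each check prints one actionable line.
check() {  # check <label> <hint> <command...>
  label=$1; hint=$2; shift 2
  if "$@" >/dev/null 2>&1; then
    echo "PASS  $label"
  else
    echo "FAIL  $label  -> try: $hint"
  fi
}

check "resolv.conf present" "decide who owns DNS, then recreate it" \
      test -r /etc/resolv.conf
if command -v systemctl >/dev/null 2>&1; then
  check "PID 1 is systemd" "set [boot] systemd=true and restart WSL" \
        sh -c '[ "$(ps -p 1 -o comm=)" = systemd ]'
  check "no failed units" "systemctl --failed, then fix or mask" \
        sh -c '[ "$(systemctl --failed --no-legend 2>/dev/null | wc -l)" -eq 0 ]'
else
  echo "SKIP  systemd checks (systemctl not found)"
fi
```

Version the script with your repo so everyone runs the same checks, and extend it with the bind-address and mount-type checks from the playbook above.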
Quote (paraphrased idea), attributed: Werner Vogels is credited with the reliability principle "you build it, you run it" — the team that builds a service also owns operating it.
Joke #2: If you mask snapd in WSL and nothing breaks, you have discovered the rarest creature in ops: a dependency you didn’t need.