ZFS doesn’t really “lose” your datasets. It just gets extremely good at making you feel like it did. One minute your data is under /srv/app, the next minute it’s “gone,” your service is empty, and your brain is trying to remember whether you angered the storage gods.
This is the mountpoint trap: a surprisingly ordinary set of behaviors—mount overlays, legacy mountpoints, canmount settings, boot ordering, and a few well-meaning automation tools—that can hide a perfectly healthy dataset behind another filesystem. In production, that can look exactly like data loss until you remember one rule: in Unix, the last mount wins.
What the mountpoint trap is (and why it fools smart people)
The mountpoint trap happens when a ZFS dataset is mounted somewhere you didn’t expect—or can’t mount where you do expect—because the mount location is already occupied. A different filesystem (another ZFS dataset, a tmpfs, an old ext4 partition, a container bind mount, a systemd unit, you name it) gets mounted “on top” of the directory where you thought your dataset lived.
From the shell, it looks like your data has vanished:
- ls shows an empty directory.
- Applications start creating new directories and files, happily, in the wrong place.
- The “missing” data reappears if you unmount the overlaying filesystem—or disappears again when something mounts at boot.
This is why the trap is so convincing: it’s not a ZFS bug. It’s the kernel doing exactly what it’s supposed to do. The dataset is still there, still consistent, still referenced by its pool. It’s just not currently visible at the path you’re staring at.
Obligatory joke: ZFS didn’t eat your data; it just put a blanket over it and watched you panic.
What “disappearing” usually means in practice
Most of the time, one of these is true:
- The dataset is mounted, but not where you think (mountpoint changed or inherited).
- The dataset is not mounted because ZFS was told not to (canmount=off or noauto).
- The dataset is set to mountpoint=legacy, so ZFS won’t mount it unless /etc/fstab (or your distro’s equivalent) does.
- The dataset wants to mount at a path that is already a mountpoint of something else (overlay).
- The pool imported, but mounting failed due to missing keys (encryption), missing mount directories, or boot ordering.
The operational danger isn’t just confusion. It’s split-brain filesystem state at the application layer: your service starts writing new data to an underlying directory (usually on the root filesystem), while the real dataset sits quietly elsewhere. You now have two realities: one your app sees, and one ZFS protects.
How ZFS mounting actually works
ZFS datasets are filesystems with properties. The property at the center of our drama is mountpoint, paired with canmount. Understanding the interaction is half the battle; the other half is remembering that ZFS lives in a broader OS that can mount other things whenever it feels like it.
The three pillars: mountpoint, canmount, and “who mounts it”
mountpoint is the path where ZFS will mount a dataset if it is mounted by ZFS. Common values:
- A real path, e.g. /srv/db
- none (never mount)
- legacy (ZFS defers to traditional mount mechanisms like /etc/fstab)
canmount controls whether the dataset is eligible to mount:
- on: mountable (default)
- off: never mount (even if it has a mountpoint)
- noauto: do not auto-mount at boot, but allow manual zfs mount
Who mounts it depends on your OS integration. On many systems, ZFS mount is handled by init scripts or systemd services that run after pool import. For legacy datasets, system mount units or mount -a will do it. In container platforms, mount namespaces and bind mounts can add a second layer of “it’s mounted, just not in the place you’re looking.”
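As a concrete sketch, you can set both properties at creation time and confirm what ZFS recorded. The dataset and path here are invented for illustration (tank/services/cache, /srv/cache); adapt them to your own tree:
cr0x@server:~$ sudo zfs create -o mountpoint=/srv/cache -o canmount=on tank/services/cache
cr0x@server:~$ zfs get -H -o name,property,value,source mountpoint,canmount tank/services/cache
tank/services/cache  mountpoint  /srv/cache  local
tank/services/cache  canmount    on          local
Both properties report local as the source, meaning they were set on this dataset deliberately rather than inherited; that distinction matters a lot in a moment.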
Overlays: the Unix law that causes most “vanishing” acts
When you mount filesystem B at directory /srv/app, whatever was previously visible at /srv/app is hidden until you unmount B. That includes:
- files that were created in that directory on the root filesystem,
- or an earlier mount (like your ZFS dataset),
- or a bind mount from somewhere else.
This is not ZFS-specific; it’s VFS behavior. ZFS just makes it easy to create many mountable filesystems, so it’s easier to accidentally stack them.
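If you want to see the effect without involving ZFS at all, a throwaway directory and a tmpfs are enough; the path below is just a scratch example:
cr0x@server:~$ mkdir -p /tmp/overlay-demo && touch /tmp/overlay-demo/file.txt
cr0x@server:~$ sudo mount -t tmpfs tmpfs /tmp/overlay-demo
cr0x@server:~$ ls /tmp/overlay-demo
cr0x@server:~$ sudo umount /tmp/overlay-demo
cr0x@server:~$ ls /tmp/overlay-demo
file.txt
Nothing was deleted; the file was simply hidden while the tmpfs sat on top. Replace tmpfs with "someone's stale ext4 mount" and you have most real incidents.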
Second joke: if mounts had a dating app, their bio would be “I hide things.”
Inheritance: the quiet footgun
ZFS properties can be inherited down the dataset tree. That’s a feature—until you forget it exists. If you set mountpoint on a parent dataset, children inherit it unless explicitly set otherwise. You can end up with nested mountpoints like:
- pool/services mounted at /srv
- pool/services/app mounted at /srv/app
- pool/services/app/logs mounted at /srv/app/logs
This can be perfectly sane. It can also become chaos when someone changes the parent mountpoint during a migration and forgets that children inherit—suddenly everything moves, or attempts to mount at paths that now collide with existing mounts.
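A minimal sketch of that failure, using the hypothetical tree from earlier and an invented /srv-new target: change the parent and watch the children follow.
cr0x@server:~$ zfs get -r -o name,value,source mountpoint tank/services
NAME               VALUE     SOURCE
tank/services      /srv      local
tank/services/app  /srv/app  inherited from tank/services
tank/services/db   /srv/db   inherited from tank/services
cr0x@server:~$ sudo zfs set mountpoint=/srv-new tank/services
cr0x@server:~$ zfs get -r -o name,value mountpoint tank/services
NAME               VALUE
tank/services      /srv-new
tank/services/app  /srv-new/app
tank/services/db   /srv-new/db
One command moved every child that inherits. If anything is already mounted at one of the new paths, you get collisions instead of a clean move.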
legacy mountpoints: old-school control, new-school confusion
mountpoint=legacy means: “ZFS, don’t automatically mount this. I’ll handle it via the OS mount system.” This is sometimes used for strict control, special ordering, or compatibility with management tools.
The trap: if you rely on legacy, but your /etc/fstab entry is missing, wrong, or delayed at boot, the dataset will exist but not be mounted. Meanwhile, the directory is just a directory—so your application starts writing there like nothing is wrong.
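For reference, the OS side of a legacy mountpoint is an ordinary fstab entry with fstype zfs. This is a sketch, not an endorsement of legacy; the x-systemd.requires option is one way, on systemd hosts, to make the mount wait for pool import:
cr0x@server:~$ grep tank/services/app /etc/fstab
tank/services/app  /srv/app  zfs  defaults,x-systemd.requires=zfs-import.target  0  0
cr0x@server:~$ sudo mount /srv/app
cr0x@server:~$ findmnt -T /srv/app -o SOURCE,FSTYPE
SOURCE             FSTYPE
tank/services/app  zfs
The last two fields stay 0 because ZFS has no dump or fsck pass; the mount either happens through this line or it doesn’t happen at all.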
Interesting facts and historical context
Some context helps, because ZFS’s mount behavior isn’t random—it’s the result of design decisions spanning decades.
- ZFS debuted at Sun in the mid-2000s with the idea that “filesystems are cheap” and should be managed like datasets, not partitions.
- Unlike traditional filesystems, ZFS encourages many mountpoints (one per dataset), which increases the surface area for mount ordering mistakes.
- The legacy mountpoint exists largely for compatibility with older Unix mount tooling and boot sequences where ZFS wasn’t the default filesystem manager.
- canmount=noauto became popular in boot environments so multiple datasets can coexist without all mounting at once.
- Mount overlays are older than ZFS by decades; they’re fundamental Unix VFS behavior. ZFS simply makes it easier to create overlay scenarios accidentally.
- Many “data loss” tickets in early ZFS deployments were visibility bugs: datasets existed, but were hidden by mistaken mounts or unimported pools.
- OpenZFS spread across Linux, illumos, and BSDs, and each platform integrated boot-time mounting differently—same properties, different choreography.
- Encryption added a new failure mode: datasets can be present and healthy but unmounted because keys weren’t loaded during boot.
The symptoms: “disappearing” datasets in the real world
In incident response, “dataset disappeared” usually arrives wrapped in one of these:
Symptom 1: The directory is empty (or looks freshly created)
You cd into /srv/app and it’s empty, but you know it should contain gigabytes. Or worse: it contains a different set of files created recently by an application that didn’t get the memo.
Symptom 2: The dataset exists in ZFS metadata
zfs list shows the dataset, with a non-zero used size, and maybe even the expected mountpoint. Yet the path doesn’t show the data.
Symptom 3: After reboot, the problem appears (or disappears)
If it changes with reboot, suspect boot ordering, legacy mounts, systemd units, or encryption key loading.
Symptom 4: Disk usage doesn’t match what you see
zfs list reports hundreds of gigabytes used, but du on the mountpoint shows almost nothing. Or the opposite: df reports a small root filesystem filling up because the app wrote to the underlying directory.
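A quick side-by-side makes the mismatch obvious, using the article’s hypothetical dataset and path:
cr0x@server:~$ zfs list -o name,used,mountpoint tank/services/app
NAME               USED  MOUNTPOINT
tank/services/app  180G  /srv/app
cr0x@server:~$ du -sh /srv/app
56K   /srv/app
180G that du cannot find at the path is not data loss; it’s a path that isn’t currently showing the dataset.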
Symptom 5: Containers see something different than the host
Mount namespaces mean the host can have the dataset mounted while a container sees an unmounted directory (or a different bind mount). This is a special flavor of “it’s gone,” often blamed on ZFS because that’s the brand name people recognize.
Practical tasks: commands you can run today
These are production-friendly tasks I’ve actually used under pressure. Each includes what the output means and what decision it informs. Commands assume Linux with OpenZFS; adjust paths for your OS.
Task 1: List datasets and their mountpoints (the baseline reality)
cr0x@server:~$ zfs list -o name,used,avail,mountpoint,canmount -r tank
NAME USED AVAIL MOUNTPOINT CANMOUNT
tank 320G 1.4T /tank on
tank/services 210G 1.4T /srv on
tank/services/app 180G 1.4T /srv/app on
tank/services/db 30G 1.4T /srv/db on
Interpretation: If a dataset has a real mountpoint and canmount=on, ZFS intends it to be mounted. If it’s not visible, suspect overlays or a mount failure.
Task 2: See what ZFS thinks is currently mounted
cr0x@server:~$ zfs mount
tank                /tank
tank/services       /srv
tank/services/app   /srv/app
tank/services/db    /srv/db
Interpretation: If your dataset is missing here, it is not mounted by ZFS. If it’s listed but you still can’t see files at the path, suspect mount namespaces or that you’re looking at the wrong directory on the host.
Task 3: Confirm what the kernel has mounted at the path (catch overlays)
cr0x@server:~$ findmnt -T /srv/app
TARGET SOURCE FSTYPE OPTIONS
/srv/app tank/services/app zfs rw,xattr,noacl
Interpretation: This is the single most useful command for the mount trap. If findmnt shows something else (ext4, overlayfs, tmpfs), your ZFS dataset is being hidden.
Task 4: Show all mounts that could be stacking in the same tree
cr0x@server:~$ findmnt -R /srv
TARGET SOURCE FSTYPE OPTIONS
/srv tank/services zfs rw,xattr
├─/srv/app tank/services/app zfs rw,xattr
└─/srv/db tank/services/db zfs rw,xattr
Interpretation: If you see unexpected entries (like /srv/app being overlayfs), you’ve found the hiding place.
Task 5: Check whether the dataset is set to legacy (common “why didn’t it mount?” culprit)
cr0x@server:~$ zfs get -H -o name,property,value,source mountpoint tank/services/app
tank/services/app mountpoint legacy local
Interpretation: With legacy, ZFS will not auto-mount. If you don’t have a matching OS mount entry, your service will write to the underlying directory instead.
Task 6: Check canmount (the “it exists but won’t mount” switch)
cr0x@server:~$ zfs get -H -o name,property,value,source canmount tank/services/app
tank/services/app canmount noauto local
Interpretation: noauto means it won’t mount at boot via the usual automount path; you must mount it explicitly (or via tooling). off means you’re not mounting it at all unless you change it.
Task 7: Attempt a manual mount and read the error (don’t guess)
cr0x@server:~$ sudo zfs mount tank/services/app
cannot mount 'tank/services/app': mountpoint or dataset is busy
Interpretation: “Busy” often means /srv/app is already a mountpoint for something else. That’s the overlay scenario. Run findmnt -T /srv/app to identify the offender.
Task 8: Identify what’s occupying the mountpoint
cr0x@server:~$ findmnt -T /srv/app -o TARGET,SOURCE,FSTYPE,OPTIONS
TARGET SOURCE FSTYPE OPTIONS
/srv/app /dev/nvme0n1p2[/app] ext4 rw,relatime
Interpretation: You’re not looking at ZFS at all; ext4 is mounted there. Your dataset isn’t gone—your mountpoint is taken.
Task 9: Locate the dataset by scanning mountpoints (when you suspect it’s mounted elsewhere)
cr0x@server:~$ zfs list -o name,mountpoint -r tank | grep app
tank/services/app /srv/app
Interpretation: If the mountpoint is correct here but the kernel shows something else at that path, you have a mount collision.
Task 10: Detect underlying “shadow data” written when the dataset wasn’t mounted
cr0x@server:~$ sudo zfs unmount tank/services/app
cr0x@server:~$ ls -lah /srv/app | head
total 56K
drwxr-xr-x 14 root root 4.0K Dec 24 09:10 .
drwxr-xr-x 6 root root 4.0K Dec 24 09:09 ..
-rw-r--r-- 1 root root 2.1K Dec 24 09:10 app.log
drwxr-xr-x 2 root root 4.0K Dec 24 09:10 cache
Interpretation: Those files are on the underlying filesystem (often your root disk), created while ZFS wasn’t mounted. This is how root filesystems mysteriously fill up during a ZFS mount failure.
Task 11: Check pool import and mount services (boot-time failures)
cr0x@server:~$ systemctl status zfs-import-cache zfs-mount
● zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled)
Active: active (exited)
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled)
Active: failed (Result: exit-code)
Interpretation: Pool imported, but ZFS mounting failed. Don’t chase ghosts in the dataset; chase the service logs and the reason it failed.
Task 12: Read the mount failure reason from logs
cr0x@server:~$ journalctl -u zfs-mount -b --no-pager | tail -n 20
Dec 24 09:01:12 server zfs[1123]: cannot mount 'tank/services/app': directory is not empty
Dec 24 09:01:12 server systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Interpretation: “Directory is not empty” is usually a clue that something created files before ZFS mounted, or that another mount/bind mount occupies it. It can also mean you’re mounting over an existing non-empty directory and your system’s ZFS tooling refuses to do it automatically.
Task 13: Verify encryption keys are loaded (the invisible “not mounted”)
cr0x@server:~$ zfs get -H -o name,property,value keystatus,encryptionroot tank/services/app
tank/services/app keystatus unavailable
tank/services/app encryptionroot tank
Interpretation: If keystatus is unavailable, the dataset may exist but cannot be mounted. Load the key, then mount.
Task 14: Load key and mount (carefully)
cr0x@server:~$ sudo zfs load-key -a
cr0x@server:~$ sudo zfs mount -a
cr0x@server:~$ zfs get -H -o name,property,value keystatus tank/services/app
tank/services/app keystatus available
Interpretation: If this resolves it, your “disappearing dataset” was a key management/boot ordering issue, not a mountpoint property issue.
Task 15: Find properties that are inherited vs local (to spot surprise changes)
cr0x@server:~$ zfs get -r -o name,property,value,source mountpoint,canmount tank/services | sed -n '1,8p'
NAME PROPERTY VALUE SOURCE
tank/services mountpoint /srv local
tank/services canmount on default
tank/services/app mountpoint /srv/app inherited from tank/services
tank/services/app canmount on default
Interpretation: If a child is inheriting a mountpoint, a change to the parent changes the child’s effective mountpoint too. That’s power and danger in one line.
Task 16: Safely change a mountpoint (and verify)
cr0x@server:~$ sudo zfs set mountpoint=/srv/app2 tank/services/app
cr0x@server:~$ sudo zfs mount tank/services/app
cr0x@server:~$ findmnt -T /srv/app2
TARGET SOURCE FSTYPE OPTIONS
/srv/app2 tank/services/app zfs rw,xattr
Interpretation: ZFS makes mountpoint changes simple, but you must confirm the new path isn’t already used and update applications, fstab entries (if legacy), and monitoring.
Three corporate-world mini-stories
1) Incident caused by a wrong assumption: “The dataset is empty, so the deployment wiped it”
It started like many Friday problems: a deployment finished, the application restarted, and suddenly the web tier was serving default content—like the whole site had been reset. The on-call engineer did the sensible thing first: checked the data directory. It was empty. The conclusion came quickly and confidently: the deploy pipeline must have blown away the volume.
Someone kicked off a rollback. Someone else began drafting the incident update with phrases like “unrecoverable deletion” and “restoring from backups.” The storage person got pulled in and asked a calm question that derailed the narrative: “What does findmnt say about that path?”
Turns out the path had two mounts fighting for the same directory. The server had an old ext4 partition still listed in a system mount unit, and it mounted late in the boot sequence—right after ZFS had successfully mounted the dataset. The ext4 mount didn’t delete anything; it simply covered the ZFS mount. The data never moved. It was just hidden behind a filesystem from an older era of that host.
The fix was painfully simple: remove the stale mount unit and reboot. But the real correction was cultural: stop equating “directory looks empty” with “data is deleted.” That assumption causes fast, expensive, and unnecessary decisions, including rollbacks that introduce new risk.
2) An optimization that backfired: “Let’s use mountpoint=legacy for more control”
A platform team wanted deterministic mounts. They were tired of “magic” and aimed for explicitness: put ZFS datasets on mountpoint=legacy, then manage everything in /etc/fstab so the OS mount system decides ordering. On paper, it looked like good engineering: one place to read mounts, one way to mount everything, easy auditing.
In practice, it was an optimization for humans at the cost of reliability. During a maintenance window, they renamed a dataset and updated the ZFS side but missed the matching fstab line on two servers. Those two boots came up “fine,” but the dataset didn’t mount. The directory existed, so the application started and happily wrote to the underlying root filesystem. Within hours, root disks filled, journald started dropping logs, and the incident went from “why are metrics missing?” to “why are nodes dying?”
It got worse: when the dataset was eventually mounted correctly, the “missing” new files vanished—because those files were never in the dataset in the first place; they were on the underlying filesystem. Now the app had two sets of state created in two places, and reconciling them took careful human work and a lot of caution.
The postmortem outcome wasn’t “never use legacy.” It was: only use legacy when you can prove the mount is enforced (via systemd dependencies, early boot checks, and monitoring), and when your operational model accounts for the failure mode: application writes to the wrong disk.
3) A boring but correct practice that saved the day: “We monitor mount reality, not just ZFS properties”
At another company, a database cluster ran on ZFS datasets with carefully designed mountpoints. They’d had one too many scary mornings where the pool imported but a dataset didn’t mount due to key loading delays. So they implemented a dull check that ran on every node after boot and after configuration changes: verify that specific paths are backed by the expected filesystem source.
The check didn’t ask ZFS what it intended to do. It asked the kernel what it actually did: findmnt -T /var/lib/postgresql, and compare the source to the expected dataset. If it didn’t match, the node was marked unhealthy and removed from service before taking traffic. Nothing fancy. No AI. Just refusing to run stateful services on an ambiguous mount.
One day, a routine OS update introduced a new tmpfs mount for a legacy path used by an agent. It accidentally landed on a parent directory and shadowed a child mount. The node came back from reboot, and ZFS mounts looked “okay” at a glance, but the path check failed instantly. The node never joined the cluster. No data divergence, no emergency restore, no “why is root full?” firefight.
It was the kind of win nobody celebrates because nothing happened—which is the highest compliment you can pay to operational engineering.
Fast diagnosis playbook
This is the order I use when a dataset is “missing.” The goal is speed: reduce the search space in under five minutes, then decide whether you’re dealing with an overlay, a mount failure, or a visibility issue across namespaces/boot.
Step 1: Ask the kernel what’s mounted at the path
cr0x@server:~$ findmnt -T /srv/app -o TARGET,SOURCE,FSTYPE,OPTIONS
TARGET SOURCE FSTYPE OPTIONS
/srv/app tank/services/app zfs rw,xattr
If it’s not ZFS: you have an overlay or wrong mount. Identify and remove/adjust the mount that’s occupying the path.
Step 2: Ask ZFS whether it believes the dataset is mounted
cr0x@server:~$ zfs mount | grep -F tank/services/app || echo "not mounted"
not mounted
If it’s not mounted: check canmount, mountpoint, encryption keystatus, and logs. Don’t assume it’s a mountpoint typo until you’ve checked the boring stuff.
Step 3: Check the dataset’s mount properties and inheritance
cr0x@server:~$ zfs get -H -o property,value,source mountpoint,canmount tank/services/app
mountpoint /srv/app local
canmount on default
If mountpoint=legacy: go to OS mounts. If canmount=off/noauto: go to mount logic and boot scripts.
Step 4: Look for mount failures in logs (don’t hand-wave)
cr0x@server:~$ journalctl -u zfs-mount -b --no-pager | tail -n 50
Common hits: “directory is not empty,” “mountpoint busy,” key not available, permission issues in the mount directory, or missing parent mount.
Step 5: Check for shadow data and prevent divergence
If the dataset failed to mount, your app may have written to the underlying directory. Before remounting, stop the service, inspect underlying directories, and plan a controlled merge or cleanup. The worst move is “just mount it and hope.” That’s how you create split state.
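One conservative way to handle it, sketched with a hypothetical service unit (app.service) and the dataset from earlier; the point is to quarantine the shadow files rather than delete them:
cr0x@server:~$ sudo systemctl stop app.service
cr0x@server:~$ findmnt -T /srv/app -o SOURCE,FSTYPE   # confirm the path is NOT backed by the dataset right now
cr0x@server:~$ sudo mv /srv/app /srv/app.shadow
cr0x@server:~$ sudo mkdir /srv/app
cr0x@server:~$ sudo zfs mount tank/services/app
cr0x@server:~$ findmnt -T /srv/app -o SOURCE,FSTYPE
SOURCE             FSTYPE
tank/services/app  zfs
The shadow directory preserves whatever the application wrote while the dataset was unmounted, so you can merge or discard it deliberately instead of losing it under the mount.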
Step 6: Only then worry about performance
People jump to “ZFS is slow” when the real issue is “you’re writing to the root disk.” Confirm the mount source first. Performance debugging on the wrong filesystem is a great way to become extremely busy and extremely wrong.
Common mistakes, symptoms, and fixes
Mistake 1: Mountpoint collision (two filesystems want the same directory)
Symptoms:
- zfs mount fails with “dataset is busy”
- findmnt -T /path shows ext4/tmpfs/overlayfs instead of ZFS
- Data “reappears” after unmounting something unrelated
Fix: Identify the current mount occupying the path (findmnt). Remove or relocate the conflicting mount. If it’s a stale /etc/fstab entry or systemd mount unit, delete/disable it and reboot or remount cleanly.
Mistake 2: mountpoint=legacy with no working OS mount entry
Symptoms:
- zfs list shows the dataset, but zfs mount doesn’t mount it
- Applications write to underlying directories; the root filesystem fills
Fix: Either (a) add the correct OS mount configuration, or (b) change to a normal mountpoint and let ZFS manage it: set mountpoint=/desired/path and ensure canmount=on. Then validate with findmnt.
Mistake 3: canmount=off or noauto without explicit mount automation
Symptoms:
- Dataset never mounts after reboot
- Manual zfs mount dataset works (for noauto)
Fix: If you need it mounted at boot, set canmount=on. If you need noauto (boot environments, staging datasets), ensure a systemd unit mounts it before services start.
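A sketch of what that can look like on a systemd host: a small oneshot unit that mounts the noauto dataset and orders itself before the consuming service. The unit names, paths, and zfs binary location are assumptions; check them against your distro’s packaging.
cr0x@server:~$ cat /etc/systemd/system/srv-app-zfs.service
[Unit]
Description=Mount tank/services/app (canmount=noauto)
Requires=zfs-import.target
After=zfs-import.target
Before=app.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs mount tank/services/app

[Install]
WantedBy=multi-user.target
The consuming service should also declare Requires= and After= on this unit (or use RequiresMountsFor=/srv/app) so a failed mount blocks startup instead of silently running on an empty directory.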
Mistake 4: Parent mountpoint changed and children inherited a new reality
Symptoms:
- Multiple datasets “moved” at once
- Mount attempts fail because new inherited paths collide
Fix: Review inheritance with zfs get -r mountpoint. Set explicit mountpoints on children that must remain stable, or redesign the dataset tree to match your directory layout.
Mistake 5: Encryption keys not loaded during boot
Symptoms:
- Dataset exists, keystatus=unavailable
- Mount fails until someone runs zfs load-key
Fix: Implement key loading in boot sequence (with appropriate security controls), and add a post-boot mount verification to keep services from starting on empty directories.
Mistake 6: Containers and mount namespaces hide the host’s truth
Symptoms:
- Host sees dataset mounted; container sees empty directory
- Different content visible depending on where you run ls
Fix: Validate mounts inside the container namespace, not just the host. Ensure bind mounts reference the correct host path and that the host path is backed by the intended ZFS dataset at the time the container starts.
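One way to compare both views from the host is findmnt’s -N/--task option, which reads another process’s mount table; the PID below is made up, and the path inside the container may differ from the host path, so check the path the application actually uses in its namespace:
cr0x@server:~$ findmnt -T /srv/app -o SOURCE,FSTYPE
SOURCE             FSTYPE
tank/services/app  zfs
cr0x@server:~$ sudo findmnt -N 41218 -T /srv/app -o SOURCE,FSTYPE
SOURCE                    FSTYPE
/dev/nvme0n1p2[/srv/app]  ext4
If your container runtime offers an exec command, running findmnt through it is the equivalent check from the inside.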
Checklists / step-by-step plan
Checklist A: When data “disappears” from a mountpoint
- Run findmnt -T /path to see the real filesystem at that path.
- Run zfs mount and zfs list -o name,mountpoint,canmount for the dataset.
- If zfs mount fails, read the error; do not improvise.
- Check zfs get mountpoint,canmount and whether values are inherited.
- Check logs for mount service failures.
- Stop applications that write to the path until the mount is correct.
- If the dataset wasn’t mounted, look for shadow data in the underlying directory after unmounting/ensuring it’s unmounted.
- Only after visibility is correct, resume services and validate application state.
Checklist B: Safe mountpoint migration (production change)
- Pick a new path that is not currently a mountpoint: verify with findmnt -T (a condensed command sketch follows this checklist).
- Stop services that write to the dataset.
- Confirm the dataset is healthy and mounted where you think.
- Change the mountpoint: zfs set mountpoint=/new/path dataset.
- Mount explicitly: zfs mount dataset and verify with findmnt.
- Update application config, systemd units, container bind mounts, backups, and monitoring checks.
- Start services and validate writes land on ZFS (watch zfs list used bytes, not just app logs).
- Clean up old directories on the underlying filesystem if they accumulated shadow files.
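Condensed into commands, a migration to the hypothetical /srv/app2 path looks roughly like this (the service unit name is assumed; the checklist above is the authoritative order):
cr0x@server:~$ findmnt /srv/app2 || echo "nothing mounted at /srv/app2"
nothing mounted at /srv/app2
cr0x@server:~$ sudo systemctl stop app.service
cr0x@server:~$ sudo zfs unmount tank/services/app
cr0x@server:~$ sudo zfs set mountpoint=/srv/app2 tank/services/app
cr0x@server:~$ sudo zfs mount tank/services/app
cr0x@server:~$ findmnt -T /srv/app2 -o SOURCE,FSTYPE
SOURCE             FSTYPE
tank/services/app  zfs
cr0x@server:~$ sudo systemctl start app.service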
Checklist C: Preventing the trap (boring guardrails that work)
- Monitor mount reality: for critical paths, alert if the findmnt -T source isn’t the expected dataset (a minimal check script follows this checklist).
- Make services depend on ZFS mounts (systemd ordering) so they don’t start on empty directories.
- Avoid mixing legacy and non-legacy mount management unless you have a written reason and tests.
- Document dataset tree and mountpoint inheritance rules for your team.
- During incident response, treat “empty directory” as “mount ambiguity” until proven otherwise.
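A minimal version of the mount-reality check from the first item, with a hypothetical path-to-dataset pair hard-coded; wire the non-zero exit into whatever health-check system you already run:
cr0x@server:~$ cat /usr/local/bin/check-mount-reality.sh
#!/usr/bin/env bash
# Exit non-zero if the path is not backed by the expected ZFS dataset.
set -euo pipefail

path="/srv/app"                # path the service writes to
expected="tank/services/app"   # dataset that must back it

actual="$(findmnt -n -o SOURCE --target "$path")"
if [ "$actual" != "$expected" ]; then
  echo "MOUNT MISMATCH: $path is backed by '$actual', expected '$expected'" >&2
  exit 1
fi
echo "OK: $path is backed by $expected"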
FAQ
1) Can a ZFS dataset really “disappear”?
Rarely in the literal sense. Most of the time it’s either not mounted, mounted somewhere else, or hidden by another mount. ZFS metadata usually still shows it in zfs list.
2) What’s the fastest single command to confirm the mount trap?
findmnt -T /your/path. It tells you what the kernel actually has mounted there, which is the reality your applications live in.
3) Why does ZFS sometimes refuse to mount because “directory is not empty”?
Some ZFS mount tooling is conservative about mounting over non-empty directories to avoid hiding files that were written there (often by mistake). That error is a gift: it’s telling you shadow data exists or something else owns that path.
4) Is mountpoint=legacy bad?
No. It’s useful when you need OS-level mount control. It’s risky when teams forget that ZFS won’t mount it automatically and apps will write to the underlying directory if the OS mount doesn’t happen.
5) What’s the difference between canmount=off and canmount=noauto?
off means “never mount.” noauto means “don’t mount automatically, but allow manual mounting.” noauto is common in boot environments or staged datasets.
6) How do I tell if a child dataset inherited a mountpoint?
Use zfs get -o name,property,value,source mountpoint -r pool/dataset. If the source says “inherited from …” then changing the parent will change the child’s effective mountpoint.
7) Why did a reboot “fix” the missing dataset?
Because the boot sequence changed mount ordering or timing. Maybe ZFS mounted before the conflicting mount last time, or keys loaded successfully on the second boot, or a failed mount unit didn’t retry.
8) How do I prevent applications from writing to the wrong place when ZFS isn’t mounted?
Two layers: (1) systemd dependencies so services won’t start until the correct mount exists, and (2) monitoring that verifies the mount source (again: findmnt) and removes nodes from rotation if it’s wrong.
9) Can snapshots help with the mountpoint trap?
Snapshots protect data inside the dataset. They do not protect you from an application writing to the wrong underlying directory when the dataset isn’t mounted. You can snapshot the wrong filesystem too—successfully.
10) What if the dataset is mounted, but the directory still looks wrong?
Then you may be in a different mount namespace (containers), looking at a symlink that points elsewhere, or confusing two similar paths. Confirm with findmnt in the same context where the app runs.
Conclusion
The ZFS mountpoint trap isn’t a rare edge case. It’s a predictable outcome of a system that makes filesystems easy to create and easy to mount, living inside an OS that allows mounts to stack and hide each other without ceremony.
If you take only one habit from this: stop trusting directory listings when stakes are high. Trust mount reality. Ask the kernel what’s mounted at the path, ask ZFS what it thinks should be mounted, and make your services refuse to run when those answers disagree. That’s how you turn “my dataset disappeared” from an incident into a two-minute fix.