Boot-time storage failures don’t usually announce themselves with fireworks. They arrive as something quieter: a system that “boots” but can’t find /var, a login that works but services don’t, a host that mounts yesterday’s dataset because it looked more eager than today’s. If you run ZFS long enough, you’ll eventually meet the property that decides whether your datasets politely wait their turn—or stampede into your mount table at the worst possible moment.
That property is canmount. It’s small, easy to ignore, and it has saved more production systems than any of the flashier ZFS features. If you’ve ever had a dataset “mysteriously” mount over an existing directory, or a boot environment come up with the wrong root, this is where the story usually ends. Or begins, depending on how much coffee is left.
What canmount actually controls
canmount answers one question: “Is this dataset eligible to be mounted?” It does not pick the mountpoint; that’s mountpoint. It does not decide whether the system will try to mount it at boot; that’s a mix of canmount, mountpoint, and your OS’s ZFS mount service logic. But when things go sideways, the presence or absence of canmount is often the difference between a clean boot and a scavenger hunt through /mnt.
Here’s why this property matters operationally:
- ZFS datasets are mountable filesystems by default. That’s a feature until it’s a foot-gun.
- ZFS happily mounts datasets in dependency order, but it can still do something you didn’t intend—especially if you cloned, rolled back, renamed, or imported pools in a new context.
- “Boot environment” style layouts (common on illumos, FreeBSD, and some Linux setups) rely on canmount=noauto in specific places to prevent the wrong root from self-mounting.
One operational truth: ZFS doesn’t know your change window. If you leave a dataset eligible to mount with a mountpoint that points at a critical path, ZFS will treat that as a perfectly reasonable request. It’s like giving your dog a steak and expecting it to wait for dinner time.
Interesting facts and historical context
Some short context points that make canmount feel less like a random knob and more like a scar tissue setting:
- ZFS was designed with the idea that “filesystems are cheap.” That’s why datasets proliferate—per-service, per-tenant, per-application—and why mount behavior needs guardrails.
- Early ZFS deployments learned the hard way that automatic mounting is great until you introduce cloning, snapshots, and alternate roots (altroot) during recovery.
- The notion of “boot environments” (multiple roots on the same pool) popularized the pattern of setting roots to canmount=noauto so the system can choose which one becomes /.
- Mountpoint inheritance is a core ZFS convenience feature, but it becomes dangerous when a child dataset inherits a mountpoint you didn’t intend to be active on that host.
- The ZFS mount logic has historically differed a bit across platforms (Solaris/illumos SMF, FreeBSD rc, Linux systemd units), but canmount remains the portable “don’t mount this unless I say so” switch.
- Because ZFS stores properties on-disk, a pool imported on a different machine brings its mount intentions with it. That’s a gift for consistency and a curse for surprise mounts.
- Clones and promoted datasets can retain mountpoint properties that made sense in their original environment, not the one you just moved them into.
- “Shadowed mountpoints” (a dataset mounting over a directory that already has content) have been a recurring source of quiet outages because the system appears healthy while reading the wrong files.
A practical mental model: mount eligibility vs mount location
If you take nothing else from this article, take this: ZFS mounting is a two-part decision.
1) Eligibility: “May I mount this dataset?”
That’s canmount. If canmount=off, the answer is “no, never.” If canmount=noauto, the answer is “not automatically.” If canmount=on, the answer is “yes.”
2) Location: “Where would it mount?”
That’s mountpoint. It might be a real path like /var, it might be inherited, it might be legacy (meaning “mount is handled by /etc/fstab or equivalent”), or it might be none.
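A quick way to see both halves of the decision at once, with tank/data standing in for whatever dataset you’re curious about:
cr0x@server:~$ zfs get -o name,property,value,source canmount,mountpoint tank/data
If canmount comes back on and mountpoint is a path you care about, that dataset is part of your next boot whether you remember it or not.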
In the real world, the disasters happen when eligibility and location accidentally line up with something important. For example:
- A clone of a root dataset is eligible (canmount=on) and has a mountpoint of /. That’s not a filesystem; that’s a coup attempt.
- A dataset intended for recovery work inherits mountpoint=/var and is eligible to mount. Suddenly your “recovery” dataset becomes your production /var.
Joke #1: ZFS doesn’t “steal” your mountpoints. It just takes them for a long walk and comes back wearing your clothes.
The three values: on, off, noauto
canmount typically takes these values:
canmount=on
The dataset can be mounted, and will generally be mounted automatically if it has a valid mountpoint and your OS’s ZFS mount service is running. This is what you want for normal datasets like pool/home or pool/var.
canmount=off
The dataset cannot be mounted at all. You’ll still be able to snapshot it, replicate it, and access it through other mechanisms (like mounting snapshots elsewhere), but ZFS will not mount it as a filesystem. This is ideal for “container” datasets that exist only to hold children or to carry inherited properties.
Common pattern: a top-level dataset that exists only to hold children and set common properties:
- pool/ROOT with canmount=off, children are boot environments
- pool/containers with canmount=off, children are per-container datasets
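If you’re building this layout from scratch, it’s cleaner to set the guardrails at creation time than to remember to flip them later. A minimal sketch, using made-up dataset names under tank:
cr0x@server:~$ sudo zfs create -o canmount=off -o mountpoint=none tank/containers
cr0x@server:~$ sudo zfs create -o mountpoint=/srv/containers/app1 tank/containers/app1
The parent can never mount; the child mounts exactly where you told it to, and nothing is left to a default you’ll forget about by next quarter.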
canmount=noauto
The dataset can be mounted, but it won’t be mounted automatically. This is the boot-environment “don’t mount yourself” setting. It’s not “off,” it’s “hands off unless I asked.”
It’s especially valuable for datasets that should only be mounted in a specific context:
- A boot environment that the bootloader selects as root
- A dataset you mount temporarily for forensic work
- A dataset used for chroot maintenance, where auto-mounting could collide with the live root
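A minimal sketch of that workflow, with tank/staging as an invented example: create the dataset so it never self-mounts, then mount and unmount it only by explicit action.
cr0x@server:~$ sudo zfs create -o canmount=noauto -o mountpoint=/mnt/staging tank/staging
cr0x@server:~$ sudo zfs mount tank/staging
cr0x@server:~$ sudo zfs unmount tank/staging
zfs mount -a and the boot-time mount service will both skip it; only a named, explicit mount brings it in.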
Joke #2: canmount=noauto is the “I’m not antisocial, I’m just not coming to your meeting” setting.
Where boot-time surprises come from
Most boot-time ZFS surprises are not “ZFS is broken.” They’re “ZFS did what it was told, and we forgot what we told it last quarter.” The recurring causes:
Surprise #1: A dataset mounts over a directory that already has critical files
This is mountpoint shadowing. The underlying directory still exists, but you can’t see it because a filesystem mounted on top. If the directory had config files (say under /etc in a chroot scenario, or /var/lib for a database), you may boot and run—but with a different set of files than you expect.
Surprise #2: Imported pools bring their mountpoints with them
ZFS properties live with the pool. If you import a pool from another host, it may come in with mountpoints like /data, /var, or something terrifying like /. On a system with automatic mounting, that can turn “I’m just attaching a disk” into “why did SSH stop?”
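One way to take the surprise out of an import you’re not sure about: bring the pool in without mounting anything, read its intentions, and only then decide. The -N flag does exactly that; backup is a placeholder pool name.
cr0x@server:~$ sudo zpool import -N backup
cr0x@server:~$ zfs list -o name,canmount,mountpoint -r backup
Nothing mounts until you explicitly ask, which turns “why did SSH stop?” into a calm reading exercise.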
Surprise #3: Boot environments multiply and one mounts when it shouldn’t
A clean boot environment design expects only the selected BE to mount as root. If old BEs are eligible and have mountpoints that are meaningful, they can mount too, sometimes under the same directory tree, causing conflicts and weird partial views of the filesystem.
Surprise #4: You “optimized” mounts and accidentally changed ordering
Mount ordering matters when services expect directories to exist before they start. A dataset that mounts late can let a service create directories on the underlying filesystem; ZFS then mounts on top and hides those freshly created directories. The service keeps running happily, now writing to a different location than the one it created at startup. That’s how “my logs are gone” turns out to mean “my logs are under the mountpoint, not in it.”
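On Linux there’s a handy way to check whether files are hiding under a ZFS mountpoint without unmounting anything: a plain (non-recursive) bind mount of the root filesystem doesn’t carry submounts with it, so you can peek at what lives underneath. /mnt/peek is just a scratch directory for the example.
cr0x@server:~$ sudo mkdir -p /mnt/peek
cr0x@server:~$ sudo mount --bind / /mnt/peek
cr0x@server:~$ ls /mnt/peek/var/log
cr0x@server:~$ sudo umount /mnt/peek
If the “missing” logs show up under /mnt/peek/var/log, you’ve found them on the underlying filesystem, right where the service wrote them before the mount landed on top.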
Three corporate-world mini-stories
Mini-story 1: An incident caused by a wrong assumption
At a mid-sized enterprise with a fairly standard ZFS-on-Linux setup, the storage team maintained a “utility pool” used for backups and one-off restores. The pool was normally imported only on a dedicated recovery host. One Friday, someone attached the backup shelves to a production app server for faster restores—temporarily, they said, with that particular confidence that only appears right before a ticket becomes an incident.
The assumption was simple: “Importing the pool won’t change anything unless we mount something.” But the pool’s datasets had mountpoints like /var/lib/postgresql and /srv from a previous life. Those datasets were still canmount=on, because why wouldn’t they be? The production host imported the pool, systemd’s ZFS mount units did their job, and suddenly the app server had a new /srv and a new /var/lib/postgresql—right on top of the old ones.
The app didn’t crash immediately. That was the cruel part. It kept running, reading some files from the newly mounted datasets while still holding open file descriptors to the old ones. Then the database restarted during log rotation (as databases do when you poke them indirectly), and it came up looking at the “wrong” data directory. From the outside, it looked like spontaneous data loss. From the inside, it was two datasets fighting over the same namespace.
The fix was boring: export the pool, set top-level datasets to canmount=off or mountpoint=none, and re-import with an altroot when doing restore work. The lesson stuck: importing is not a passive act when auto-mount is enabled; it’s a deployment event.
Mini-story 2: An optimization that backfired
A large company standardized on per-service datasets: pool/var, pool/var/log, pool/var/lib, and deeper. It made quotas and snapshots neat. It also made boot dependent on a long chain of mounts. During a performance push, someone tried to “simplify” by collapsing mountpoints and relying on inheritance to reduce property sprawl.
The change looked clean in review: set mountpoint=/var on a parent dataset, make a few children inherit. But one child dataset had been intentionally set to canmount=noauto because it was used for a rolling migration: it held staged data and was mounted only during cutover. In the new inheritance scheme, that child started inheriting mountpoint changes while keeping its canmount value—and someone later flipped it to canmount=on to “test something.”
Weeks later, during a reboot, the staged dataset mounted automatically and shadowed a directory under /var/lib that a service used for state. The service started clean, created new state on the underlying filesystem before the mount completed (timing matters), then ZFS mounted over it. The state “disappeared.” The service entered an initialization path and began re-seeding data from upstream systems, which caused load spikes and a cascade of throttling. It wasn’t a single failure; it was a self-inflicted distributed system test.
The postmortem conclusion was blunt: the optimization saved almost no operational effort but removed a critical safeguard. The remediation was to mark “container” datasets as canmount=off by policy, and to treat canmount changes as a high-risk change requiring a reboot test in staging. The backfire wasn’t ZFS complexity—it was forgetting that boot is the most timing-sensitive workflow you have.
Mini-story 3: A boring but correct practice that saved the day
Another organization ran ZFS pools on a fleet of build servers. These hosts were constantly reimaged, but the pools sometimes moved between machines when hardware got repurposed. The SRE team had a practice that nobody celebrated: every pool import for non-root pools used altroot, and every top-level dataset used to organize sub-datasets had canmount=off and mountpoint=none.
One day a host failed and its pool was attached to a different server to recover build artifacts. The pool contained datasets with mountpoints that, on their original machine, pointed under /srv/build. On the recovery host, /srv already existed and was used for something else. Without guardrails, this would have been the classic “why did my service reconfigure itself” outage.
Instead, the import looked like this: zpool import -o altroot=/mnt/recovery poolname. All mountpoints were relocated under /mnt/recovery. The datasets that were not meant to mount stayed ineligible. The team mounted only what they needed, copied out artifacts, and exported the pool. No services blinked.
That practice never made a slide deck. It did, however, prevent a recovery action from becoming an incident. In production operations, “boring” is not an insult; it’s a KPI.
Practical tasks (commands + interpretation)
These are tasks you can run today on a ZFS system. The commands are written for typical OpenZFS CLI behavior. Adjust pool/dataset names for your environment.
Task 1: List datasets with mount behavior in one view
cr0x@server:~$ zfs list -o name,canmount,mountpoint,mounted -r tank
NAME CANMOUNT MOUNTPOINT MOUNTED
tank on /tank yes
tank/ROOT off none no
tank/ROOT/default noauto / yes
tank/var on /var yes
tank/var/log on /var/log yes
Interpretation: You’re looking for datasets that are canmount=on (or accidentally noauto) with mountpoints that collide with critical paths. tank/ROOT being off and none is a healthy “container dataset” pattern.
Task 2: Find “eligible to mount” datasets that are currently not mounted
cr0x@server:~$ zfs list -H -o name,canmount,mountpoint,mounted -r tank | awk '$2=="on" && $4=="no" {print}'
tank/tmp on /tmp no
Interpretation: An eligible dataset not mounted can be fine (maybe it’s new or intentionally unmounted), but it’s also a boot-time surprise waiting to happen if its mountpoint is sensitive and the mount service changes behavior.
Task 3: Identify datasets with mountpoint=legacy
cr0x@server:~$ zfs list -H -o name,mountpoint -r tank | awk '$2=="legacy" {print}'
tank/oldroot legacy
Interpretation: legacy means your OS will rely on /etc/fstab (or equivalent) for mounting. Mixing legacy mounts with ZFS auto-mount is sometimes necessary, but it increases the number of boot pathways you must reason about.
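For reference, a legacy dataset is mounted like any other filesystem. On Linux with OpenZFS installed, the /etc/fstab entry looks roughly like this (dataset and path are illustrative):
tank/oldroot  /mnt/oldroot  zfs  defaults  0  0
If a dataset is legacy but has no such entry anywhere, nobody owns its mount, and that’s worth fixing before a reboot proves the point.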
Task 4: Check property inheritance (the “why did this happen” tool)
cr0x@server:~$ zfs get -r -o name,property,value,source canmount,mountpoint tank/var
NAME PROPERTY VALUE SOURCE
tank/var canmount on local
tank/var mountpoint /var local
tank/var/log canmount on default
tank/var/log mountpoint /var/log local
Interpretation: The source field tells you whether you’re dealing with a local override or inheritance. Most “it mounted somewhere weird” mysteries are solved by noticing a child inherited a property you assumed was local elsewhere.
Task 5: Make a container dataset non-mountable (safe default)
cr0x@server:~$ sudo zfs set canmount=off tank/containers
cr0x@server:~$ sudo zfs set mountpoint=none tank/containers
cr0x@server:~$ zfs get -o name,property,value,source canmount,mountpoint tank/containers
NAME PROPERTY VALUE SOURCE
tank/containers canmount off local
tank/containers mountpoint none local
Interpretation: This prevents accidental mounting of the parent while still allowing children like tank/containers/app1 to mount where they should.
Task 6: Mark a boot environment dataset as “mount only when selected”
cr0x@server:~$ sudo zfs set canmount=noauto tank/ROOT/be-2025q4
cr0x@server:~$ zfs get -o name,property,value,source canmount tank/ROOT/be-2025q4
NAME PROPERTY VALUE SOURCE
tank/ROOT/be-2025q4 canmount noauto local
Interpretation: A boot environment should not generally self-mount as a regular filesystem after import; it should mount as root only when chosen by the boot process. noauto is the guardrail.
Task 7: Temporarily mount a noauto dataset for inspection
cr0x@server:~$ sudo zfs mount tank/ROOT/be-2025q4
cr0x@server:~$ zfs list -o name,mountpoint,mounted tank/ROOT/be-2025q4
NAME MOUNTPOINT MOUNTED
tank/ROOT/be-2025q4 / yes
Interpretation: If the dataset has mountpoint=/, mounting it on a live system is dangerous unless you also use an alternate root (see the next task). In practice, for BE datasets, you typically mount them under an alternate root path, not on /.
Task 8: Import a pool safely using altroot (recovery best practice)
cr0x@server:~$ sudo zpool export backup
cr0x@server:~$ sudo zpool import -o altroot=/mnt/recovery backup
cr0x@server:~$ zfs mount | head
backup/ROOT/be-old /mnt/recovery/
Interpretation: altroot prepends a safe prefix to mountpoints, so a dataset whose mountpoint is /var becomes /mnt/recovery/var. This is how you keep recovery work from stepping on production paths.
Task 9: Unmount a dataset that mounted somewhere it shouldn’t
cr0x@server:~$ zfs list -o name,mountpoint,mounted | grep '/var/lib/postgresql'
backup/pgdata /var/lib/postgresql yes
cr0x@server:~$ sudo zfs unmount backup/pgdata
cr0x@server:~$ zfs list -o name,mountpoint,mounted backup/pgdata
NAME MOUNTPOINT MOUNTED
backup/pgdata /var/lib/postgresql no
Interpretation: Unmounting stops the immediate bleeding, but it doesn’t prevent a remount on reboot. For that you need to fix canmount and/or mountpoint.
Task 10: Prevent remounting by setting canmount=off
cr0x@server:~$ sudo zfs set canmount=off backup/pgdata
cr0x@server:~$ zfs get -o name,property,value,source canmount backup/pgdata
NAME PROPERTY VALUE SOURCE
backup/pgdata canmount off local
Interpretation: This makes the dataset ineligible to mount. If you still need access to its contents, you can temporarily change it back or use a different recovery technique, but you’ve made the default safe.
Task 11: Fix a dangerous mountpoint without changing eligibility
cr0x@server:~$ sudo zfs set mountpoint=/mnt/pgdata-staging backup/pgdata
cr0x@server:~$ sudo zfs set canmount=on backup/pgdata
cr0x@server:~$ sudo zfs mount backup/pgdata
cr0x@server:~$ zfs list -o name,mountpoint,mounted backup/pgdata
NAME MOUNTPOINT MOUNTED
backup/pgdata /mnt/pgdata-staging yes
Interpretation: Sometimes you want it mountable, just not on top of a production directory. Moving the mountpoint is usually safer than relying on “nobody will import this pool here.”
Task 12: Detect shadowing risk (datasets mounted under critical trees)
cr0x@server:~$ zfs list -H -o name,mountpoint,canmount -r tank | awk '$2 ~ "^/(etc|var|usr|srv|home)($|/)" {print}'
tank/var /var on
tank/var/log /var/log on
tank/home /home on
Interpretation: This isn’t “bad.” It’s an inventory of datasets that live under critical trees. Any unexpected entry here is worth investigating before the next reboot.
Task 13: Verify what ZFS thinks is currently mounted
cr0x@server:~$ zfs mount | sed -n '1,8p'
tank /tank
tank/var /var
tank/var/log /var/log
tank/home /home
Interpretation: This is ZFS’s view, not necessarily the whole system’s mount table. Still, it’s the authoritative list of ZFS-mounted filesystems.
Task 14: Cross-check against the kernel mount table
cr0x@server:~$ mount | grep ' type zfs ' | head
tank/var on /var type zfs (rw,xattr,noacl)
tank/var/log on /var/log type zfs (rw,xattr,noacl)
Interpretation: If zfs mount and mount disagree, you’re in edge-case territory: maybe a failed mount, maybe a legacy mount, maybe a namespace issue in containers.
Task 15: Find datasets that will try to mount but have mountpoint=none
cr0x@server:~$ zfs list -H -o name,canmount,mountpoint -r tank | awk '$2=="on" && $3=="none" {print}'
tank/ROOT on none
Interpretation: This is usually a configuration smell. If a dataset can mount but has nowhere to go, it’s probably a container dataset that should be canmount=off.
Task 16: Audit recent changes by comparing local vs inherited properties
cr0x@server:~$ zfs get -r -o name,property,value,source canmount,mountpoint tank | awk '$4=="local" {print}' | head
tank mountpoint /tank local
tank/ROOT canmount off local
tank/ROOT mountpoint none local
tank/var mountpoint /var local
Interpretation: “Local” properties are where surprises tend to be introduced. This gives you a short list of places humans have touched.
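If the audit turns up a local override that shouldn’t exist, zfs inherit is the undo button: it clears the local value so the dataset picks the property up from its parent (or the default) again. The dataset name here is illustrative.
cr0x@server:~$ sudo zfs inherit mountpoint tank/var/log
cr0x@server:~$ zfs get -o name,property,value,source mountpoint tank/var/log
This is usually better than hand-setting the property to what you think the inherited value should be, because that just creates another local override to audit later.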
Fast diagnosis playbook
This is the “it’s 03:12 and the host didn’t come back cleanly” workflow. It assumes you suspect ZFS mounting issues. The goal is to find the bottleneck fast: is it an import problem, a mount eligibility problem, a mountpoint conflict, or a service ordering issue?
First: confirm pool import and basic health
cr0x@server:~$ sudo zpool status
pool: tank
state: ONLINE
scan: scrub repaired 0B in 00:10:12 with 0 errors on Sun Dec 22 02:00:03 2025
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
sda3 ONLINE 0 0 0
Interpretation: If the pool isn’t imported or is degraded/faulted, stop chasing mountpoints. Fix import/device issues first. Mount surprises are downstream of “pool is usable.”
Second: look for obvious mount collisions and eligibility traps
cr0x@server:~$ zfs list -H -o name,canmount,mountpoint,mounted -r tank | awk '$3=="/" || $3 ~ "^/(etc|var|usr|srv|home)($|/)" {print}'
Interpretation: You’re scanning for unexpected datasets with critical mountpoints. Pay attention to anything mounting on /, /var, or /usr that you didn’t explicitly design.
Third: verify what is mounted and what is missing
cr0x@server:~$ zfs mount
cr0x@server:~$ mount | grep ' type zfs '
Interpretation: If expected datasets are missing, check canmount and mountpoint. If unexpected datasets are present, check for imported foreign pools or mistakenly eligible boot environments.
Fourth: inspect inheritance and property source for the weird dataset
cr0x@server:~$ zfs get -o name,property,value,source canmount,mountpoint tank/suspect
NAME PROPERTY VALUE SOURCE
tank/suspect canmount on default
tank/suspect mountpoint /var local
Interpretation: A local mountpoint with default eligibility is a classic pattern for surprises: someone changed where it mounts, but nobody ever decided whether it should be eligible. Because canmount defaults to on and is not inherited, no one had to touch it for the dataset to become mountable.
Fifth: if this is a recovery import, stop and use altroot
If you imported a non-native pool and it started mounting into production paths, the fix is usually to export and re-import with an alternate root.
cr0x@server:~$ sudo zpool export backup
cr0x@server:~$ sudo zpool import -o altroot=/mnt/recovery backup
Interpretation: This converts “surprise mount” into “all mounts are fenced under /mnt/recovery,” which is the correct posture during incident response.
Common mistakes: symptoms and fixes
Mistake 1: Leaving container datasets as canmount=on
Symptom: A parent dataset like tank/ROOT shows up mounted somewhere, or attempts to mount and generates confusing warnings.
Why it happens: Someone created it and never flipped the defaults. It inherits a mountpoint, or a mountpoint was set during experimentation.
Fix: Make it explicitly non-mountable and give it no mountpoint.
cr0x@server:~$ sudo zfs set canmount=off tank/ROOT
cr0x@server:~$ sudo zfs set mountpoint=none tank/ROOT
Mistake 2: Confusing canmount=noauto with “disabled”
Symptom: A dataset unexpectedly mounts after a manual zfs mount -a or automation runs, or an operator mounts it without realizing it’s a BE/root-like dataset.
Why it happens: noauto doesn’t prevent mounting; it prevents automatic mounting. Humans and scripts can still mount it.
Fix: If you truly never want it mounted, use canmount=off. If you want it mountable only in fenced contexts, combine noauto with altroot imports and careful tooling.
cr0x@server:~$ sudo zfs set canmount=off tank/archive/never-mount
Mistake 3: Importing a foreign pool without altroot
Symptom: After importing a pool, services behave strangely, data directories “change,” or paths like /srv suddenly contain different content.
Why it happens: The pool’s datasets mount where they were configured to mount, and your host honors that.
Fix: Export immediately, then re-import with altroot. Optionally set canmount=off or mountpoint=none on top-level datasets before future moves.
cr0x@server:~$ sudo zpool export foreignpool
cr0x@server:~$ sudo zpool import -o altroot=/mnt/foreign foreignpool
Mistake 4: Setting mountpoint=/ on multiple datasets (boot environment chaos)
Symptom: Boot drops to emergency shell, or root filesystem is not the one you expected. Post-boot, you see multiple BE-like datasets that could plausibly mount as root.
Why it happens: Cloning or renaming BEs without updating mount properties and eligibility.
Fix: Ensure only the selected root is mounted as / at boot. In many BE schemes, non-selected roots should be canmount=noauto. Container ROOT should be off.
cr0x@server:~$ zfs list -o name,canmount,mountpoint -r tank/ROOT
cr0x@server:~$ sudo zfs set canmount=noauto tank/ROOT/oldbe
Mistake 5: Relying on “mounted=no” as proof it won’t mount at next boot
Symptom: Everything looks fine now, but after a reboot the dataset mounts again and repeats the issue.
Why it happens: mounted is state, not policy. canmount and mountpoint are policy.
Fix: Set the policy explicitly.
cr0x@server:~$ sudo zfs set canmount=off pool/surprise
cr0x@server:~$ sudo zfs set mountpoint=none pool/surprise
Mistake 6: Mixing legacy mounts and ZFS auto-mount without a clear contract
Symptom: Some datasets mount twice, or mounts fail intermittently at boot, or you see contradictory state between zfs mount and mount.
Why it happens: You’ve split responsibility between ZFS and the OS mount system, but nobody wrote down which dataset is managed where.
Fix: Either move those datasets back to ZFS-managed mountpoints or fully commit to legacy mounts for them and ensure canmount aligns with that choice. At minimum, audit mountpoint=legacy datasets and keep them intentional.
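A minimal audit sketch, assuming a Linux host and a pool named tank: list the datasets ZFS believes are legacy, then see what fstab actually says. Anything in one list but not the other is a contract nobody signed.
cr0x@server:~$ zfs list -H -o name,mountpoint -r tank | awk '$2=="legacy" {print $1}'
cr0x@server:~$ grep -w zfs /etc/fstab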
Checklists / step-by-step plan
Checklist A: Designing a dataset hierarchy that won’t surprise you at boot
- Create container datasets for grouping and property inheritance (compression, atime, recordsize).
- Immediately set container datasets to canmount=off and mountpoint=none.
- Set explicit mountpoints only on datasets that should mount in normal operation.
- Use canmount=noauto for boot environments or datasets that should only be mounted by explicit action.
- Avoid ambiguous mountpoints (especially /, /var, /usr) except where intentionally designed.
- Document one invariant: “Only these datasets may mount under critical paths,” and enforce it with periodic audits (a sketch follows this checklist).
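A minimal sketch of that periodic audit, assuming a pool named tank; adjust the path list to whatever counts as critical in your environment. It prints every dataset that is eligible to mount either nowhere sensible or somewhere sensitive, and you diff that inventory against the documented allowlist.
cr0x@server:~$ zfs list -H -o name,canmount,mountpoint -r tank | awk '$2=="on" && ($3=="none" || $3=="/" || $3 ~ "^/(etc|var|usr|srv|home)($|/)") {print}'
Run it from cron or your config-management tool and alert only when the output changes.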
Checklist B: Before importing a pool from another system
- Decide the safe staging directory (e.g., /mnt/recovery).
- Import with altroot.
- List mountpoints and canmount values before mounting anything explicitly.
- If you must make the pool portable, set top-level datasets to canmount=off and mountpoint=none before exporting (see the sketch after this checklist).
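A minimal sketch of that portability step, with backup as the pool name. Note that children which merely inherit the parent’s mountpoint will also stop auto-mounting, which is usually exactly what you want for a pool that travels; children with their own local mountpoints are unaffected by the second command.
cr0x@server:~$ sudo zfs set canmount=off backup
cr0x@server:~$ sudo zfs set mountpoint=none backup
cr0x@server:~$ sudo zpool export backup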
Checklist C: Before rebooting after storage changes
- Run zfs list -o name,canmount,mountpoint,mounted -r pool and scan for critical paths.
- Confirm no unexpected dataset has mountpoint=/ (a quick check follows this checklist).
- Confirm container datasets are not mountable.
- If you changed inheritance, verify with zfs get ... source so you know what will propagate.
- If possible, do a controlled reboot in staging first. Boot-time ordering issues rarely show up in a running system.
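A quick version of the mountpoint=/ check, assuming tank is the root pool: anything this prints that is not the boot environment you intend to boot, or that is not canmount=noauto, deserves an explanation before you type reboot.
cr0x@server:~$ zfs list -H -o name,canmount,mountpoint -r tank | awk '$3=="/" {print}'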
FAQ
1) What’s the difference between canmount=off and mountpoint=none?
canmount=off makes the dataset ineligible to mount. mountpoint=none means “it has no mount location.” In production, container datasets often get both: they shouldn’t mount, and they shouldn’t even have a plausible target.
2) If I set canmount=noauto, can the dataset still mount at boot?
In general, noauto prevents ZFS from mounting it automatically. But boot environments are special: the boot process can mount a noauto dataset as root because it’s explicitly selected. Also, an admin or script can still mount it manually.
3) Why did a dataset mount after I imported a pool? I didn’t run zfs mount.
On many systems, pool import triggers ZFS mount services that mount eligible datasets with valid mountpoints. Import is not “just attach storage”; it’s “activate the pool’s filesystem intentions.” Use zpool import -o altroot=... for foreign pools.
4) Can I use canmount=off on a dataset that has children?
Yes. Children can still mount normally. This is a best practice for parent/container datasets like pool/ROOT or pool/containers.
5) How do I tell if a surprising mount is due to inheritance?
Use zfs get ... source. If mountpoint shows “inherited from …”, you’ve got an inheritance story, not random behavior. Note that canmount itself is never inherited; for that property you’re looking at “local” versus “default.”
6) Is canmount the same as setting readonly=on?
No. readonly=on still mounts the dataset, just read-only. canmount=off prevents mounting entirely. In recovery scenarios, you might use both: keep datasets mountable but read-only, or keep them non-mountable until you’re ready.
7) What’s the safest way to inspect a boot environment dataset that has mountpoint=/?
Import the pool with altroot (or mount the dataset under a safe directory by temporarily changing its mountpoint). The key is: don’t mount a “root” dataset onto / of a running system unless you enjoy living dangerously.
8) Should I set canmount=noauto for all datasets and mount them via systemd/fstab instead?
You can, but you’re trading one source of truth for two. ZFS-managed mounts are usually simpler if you commit to them. Reserve noauto for special cases: boot environments, staging datasets, and maintenance workflows.
9) Why do I see datasets with canmount=on but mounted=no?
Eligibility doesn’t guarantee it’s currently mounted. The mount could have failed, the mountpoint may not exist, the pool might have been imported with -N (don’t mount), or the OS mount service didn’t run yet. Treat it as a clue, not a verdict.
Conclusion
canmount is not glamorous. It doesn’t compress data, it doesn’t speed up reads, and it won’t impress anyone in a product demo. What it does is prevent the kind of boot-time and import-time surprises that burn entire mornings and leave you with the uneasy feeling that filesystems are sentient.
If you run ZFS in production, make this a habit: container datasets are canmount=off; special-purpose datasets are canmount=noauto; everything else is explicitly designed to be on with a mountpoint that you can defend in a postmortem. The best storage incidents are the ones you never have to name.