The most expensive ZFS incidents rarely start with disk failures. They start with confidence.
Someone tweaks one dataset “just to fix this one thing,” and a week later half the fleet is
slower, backup jobs explode, or a mount goes missing at boot. Nothing looks “broken” in the
obvious ways. ZFS is still healthy. The pool is ONLINE. And yet the system behaves differently.
The culprit is often ZFS property inheritance: the feature you love when you’re scaling
cleanly, and the feature that makes a child dataset silently change when you didn’t mean it to.
If you run production, treat inheritance like routing tables: powerful, global in impact,
and not something you “just try.”
Inheritance is the feature and the trap
ZFS datasets (filesystems and volumes) live in a tree. Properties live on nodes in that tree.
Some properties are inherited down the tree unless overridden. That’s intentional: it lets you
set a policy at a parent dataset and have every child follow it.
The trap is human: we think in “this dataset” and ZFS thinks in “this subtree.” You set a
property on tank/apps to solve an app issue, and you just changed the behavior of
tank/apps/postgres, tank/apps/redis, and the random forgotten
tank/apps/tmp that a CI runner has been hammering for two years.
Inheritance isn’t just about “nice defaults.” It can change mount behavior, performance, snapshot visibility,
access semantics, encryption key handling, and space guarantees. That means inheritance can:
- fix things quickly when used deliberately,
- cause “no one touched that dataset” incidents when used casually,
- create configuration drift that resists debugging (“it’s inherited from where?”).
One operational rule I like: if you can’t name the parent datasets that will be impacted by your change,
you’re not ready to run zfs set.
Joke #1: ZFS inheritance is like office Wi‑Fi—everyone benefits until someone “optimizes” it and the whole floor
starts buffering.
The only mental model that scales: local vs inherited vs default
When you query a property, ZFS can report the value and the source. The source is the part that saves you:
- local: explicitly set on that dataset
- inherited from <dataset>: inherited from an ancestor
- default: compiled-in default for your ZFS implementation/version
- temporary: set at runtime (less common; think mount options in some stacks)
Here’s why this matters in production: two datasets can show the same value but behave differently
operationally because the source tells you what will happen next change window.
A default value stays stable until you change it. An inherited value can change when someone tweaks the parent.
A local value is “sticky” but can become stale policy-wise.
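A quick way to internalize this is to filter by source. A minimal sketch, using the tank/apps tree that appears throughout this article:
cr0x@server:~$ zfs get -r -s local all tank/apps       # every explicit override in the subtree
cr0x@server:~$ zfs get -r -s inherited all tank/apps   # everything that moves the moment a parent changes
The first list is what policy changes at the parent will not reach; the second list is your blast radius.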
Inheritance rules that matter in real life
- Most properties are inheritable; not all. ZFS has properties that are read-only or inherently local (for example, used, available).
- Children inherit unless overridden locally. If a child has a local value, parent changes won’t affect it.
- zfs inherit removes the local override. It doesn’t “set to default.” It removes the local setting, and the value becomes inherited (or default, if there is nothing to inherit).
- Clones complicate the story. Clones start life as a child of an origin snapshot; they still exist in the dataset tree and thus inherit from parents like anything else.
- Mount-related properties can fail in ways that look like systemd issues. They’re not. They’re ZFS, politely refusing to mount what you told it to mount.
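One subtlety worth proving to yourself on a test dataset: explicitly setting a property, even to the exact value the parent already provides, creates a local override that stops tracking the parent. A minimal sketch with the article’s example datasets:
cr0x@server:~$ zfs set compression=lz4 tank/apps/redis        # same value the parent provides, but now it is pinned
cr0x@server:~$ zfs get -o name,property,value,source compression tank/apps/redis   # source now reads "local", not "inherited"
cr0x@server:~$ zfs inherit compression tank/apps/redis        # drop the override; the value tracks the parent again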
A single quote you can run your storage team on
W. Edwards Deming (paraphrased idea): “A system is perfectly designed to get the results it gets.”
Inheritance surprises are your system working exactly as designed.
Properties that bite hard (and why)
1) mountpoint, canmount, and readonly: the “where did my filesystem go?” trio
If you run servers, you’ve debugged “it mounted yesterday” at least once. ZFS mount behavior is property-driven.
A parent dataset with canmount=off can be a clean administrative boundary—until it’s set on the wrong
node and children stop mounting at boot.
mountpoint is inherited by default, which is convenient and dangerous. If you set a parent
mountpoint to /srv, you might expect only the parent to move. Instead, children mount
under it unless they have local mountpoints. Great when planned; chaos when not.
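If a parent exists purely as an administrative container, a pattern that avoids the “children moved with me” surprise is to keep the parent unmounted and pin the children. A sketch, assuming the layout used in this article:
cr0x@server:~$ zfs set canmount=off tank/apps                             # the parent stops mounting; canmount is not inherited, so children are unaffected
cr0x@server:~$ zfs set mountpoint=/srv/apps/postgres tank/apps/postgres   # the child pins its own path with a local mountpoint
cr0x@server:~$ zfs list -r -o name,canmount,mountpoint tank/apps          # verify who mounts where before reboot does it for you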
2) compression: small toggle, huge blast radius
Compression is inheritable. That’s a feature: you can turn on compression=lz4 at a parent and
benefit widely. The surprise: you can also turn it off for a subtree and quietly balloon capacity.
Worse, you can change it for a dataset that includes both “hot database” and “cold logs” and now your tuning is
a compromise you didn’t mean to make.
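The pattern that keeps this manageable is policy at the parent plus documented exceptions below it. A sketch; tank/apps/media-cache is a hypothetical child holding already-compressed data:
cr0x@server:~$ zfs set compression=lz4 tank/apps               # subtree-wide policy
cr0x@server:~$ zfs set compression=off tank/apps/media-cache   # hypothetical exception: already-compressed blobs gain nothing
cr0x@server:~$ zfs get -r -o name,property,value,source compression tank/apps   # confirm exactly the exceptions you intended
Remember that compression settings apply to blocks written after the change; existing data keeps whatever it was written with.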
3) recordsize: performance tuning that becomes performance regression
recordsize is inheritable on filesystems. Set it wrong at a parent, and your children inherit a
record size that may be awful for their workload. A PostgreSQL data directory typically prefers smaller
blocks for random IO; a media archive likes big blocks. Your dataset tree shouldn’t force them into the same
clothing size.
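In practice that means pinning recordsize at workload boundaries, not at the top. A sketch; tank/media/archive is hypothetical, the right numbers depend on your workload and storage, and 1M records require the large_blocks feature common on modern OpenZFS:
cr0x@server:~$ zfs set recordsize=16K tank/apps/postgres   # small records for random database IO
cr0x@server:~$ zfs set recordsize=1M tank/media/archive    # hypothetical large-file archive; big records for streaming reads
cr0x@server:~$ zfs get -r -o name,property,value,source recordsize tank
recordsize only affects data written after the change, so fixing it late doesn’t rewrite history.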
4) atime: death by a million tiny writes
Access time updates (atime=on) can generate a write on reads. On busy read-heavy datasets
(mail spools, web caches, metadata-heavy trees), inherited atime=on can turn “reads are cheap”
into “reads are also writes,” which affects latency and wear.
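The fix is usually one property at the right boundary, with exceptions pinned locally. A sketch; tank/apps/maildir is hypothetical, and OpenZFS builds that support relatime=on offer a middle ground if something still wants access times:
cr0x@server:~$ zfs set atime=off tank/apps          # reads stop generating metadata writes across the subtree
cr0x@server:~$ zfs set atime=on tank/apps/maildir   # hypothetical exception that genuinely depends on access times
cr0x@server:~$ zfs get -r -o name,property,value,source atime tank/apps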
5) snapdir and snapshot visibility
snapdir=visible makes .zfs/snapshot show up. That’s sometimes great for self-service restores,
and sometimes a compliance or operational hazard when applications traverse it. Inheritance means you can expose
snapshots in places you didn’t intend. Or hide them where you relied on them operationally.
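Pick the policy boundary deliberately and verify it recursively. A sketch; tank/home is a hypothetical self-service area:
cr0x@server:~$ zfs set snapdir=hidden tank/apps     # applications under tank/apps stop seeing .zfs/snapshot in listings
cr0x@server:~$ zfs set snapdir=visible tank/home    # hypothetical area where users restore their own files
cr0x@server:~$ zfs get -r -o name,property,value,source snapdir tank
Note that snapdir=hidden only hides the directory from listings; the .zfs path is still reachable if you type it, so it is not an access control.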
6) Quotas, reservations, refreservations: “space” is policy
Quotas and reservations aren’t inherited the way compression or atime are, but they still cascade: a parent’s quota caps the whole
subtree beneath it, and a reservation guarantees space to one dataset at the expense of its siblings. That interaction with
parent/child allocation is where the surprise lives. Space management is not intuitive under pressure.
You can end up with free space at the pool level but “no space left” for a dataset because of reservations, or with a
child dataset that outgrows your expectations because you set refquota (which limits only the dataset’s own data) when you meant quota.
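To make the quota/refquota and reservation/refreservation split concrete, here is a minimal sketch of expressing “Postgres may hold 100G of its own data and is guaranteed 50G regardless of the neighbors”; the numbers are illustrative and the dataset names are the ones used throughout this article:
cr0x@server:~$ zfs set refquota=100G tank/apps/postgres        # caps the dataset's own data; snapshots and children don't count against it
cr0x@server:~$ zfs set refreservation=50G tank/apps/postgres   # guarantees space for its own data, which shrinks what siblings can claim
cr0x@server:~$ zfs list -r -o name,quota,refquota,reservation,refreservation,used,available tank/apps
quota and reservation are the variants that also count descendants and snapshots; pick deliberately.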
7) Encryption properties: inheritance with sharp edges
Native ZFS encryption introduces inheritable properties like encryption, keylocation,
keyformat, and more. The core behavior: children can inherit encryption settings from parents when created.
But changing encryption isn’t like changing compression; you can’t just flip it on for an existing dataset without
a send/receive migration. People still try. The “surprise” here is often a process failure: expecting inheritance to
retroactively secure data.
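What inheritance does give you is a template: datasets created or received under an encrypted parent are encrypted under that parent’s key. Migrating existing data still means send/receive. A minimal sketch, with tank/secure as a hypothetical target; verify the behavior on your OpenZFS version before trusting it with real data:
cr0x@server:~$ zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/secure
cr0x@server:~$ zfs snapshot tank/apps/postgres@migrate
cr0x@server:~$ zfs send tank/apps/postgres@migrate | zfs receive tank/secure/postgres
A plain (non-raw) stream received under an encrypted parent is written encrypted with keys inherited from that parent; raw sends of already-encrypted data behave differently.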
Joke #2: Changing recordsize on the parent dataset is like buying everyone in the company the same chair—someone’s back is filing a ticket.
Interesting facts and historical context
- ZFS was designed to replace a stack, not a filesystem. It merged volume management and filesystem semantics, so “properties” became system policy knobs.
- Properties were built for delegation. Early ZFS deployments needed a way to let teams self-manage datasets safely; inheritance was the scaling mechanism.
- Compression default choices evolved. Many modern stacks recommend lz4 broadly; older guidance was more cautious because CPU was scarcer and algorithms differed.
- recordsize came from a world of big sequential IO. ZFS optimized for streaming workloads; databases forced the community to get disciplined about per-dataset tuning.
- atime defaults reflect Unix tradition, not performance reality. Access times were useful for tools and policies, but modern systems often pay too much for them.
- Mount behavior used to be more “magical.” Different OS integrations (Solaris heritage vs Linux ZFS) shaped how properties interact with boot and mount services.
- Snapshot visibility is a cultural war. Admins love .zfs/snapshot for restores; app owners hate surprise directories. The property exists because both sides are right.
- Property sources were a debugging gift added early. “local vs inherited vs default” is one of the best operator-facing UX decisions in storage tooling.
Practical tasks (commands, outputs, decisions)
These are the checks I run when someone says “ZFS changed something.” Each task includes: the command,
what the output means, and the decision you make next.
Task 1: See the property value and where it comes from
cr0x@server:~$ zfs get -o name,property,value,source compression tank/apps/postgres
NAME PROPERTY VALUE SOURCE
tank/apps/postgres compression lz4 inherited from tank/apps
Meaning: Postgres dataset is compressed because its parent is. If you change tank/apps,
you change Postgres. Decision: if Postgres needs special behavior, set a local override on
tank/apps/postgres and stop relying on the parent for that property.
Task 2: Audit a subtree for a property (catch drift fast)
cr0x@server:~$ zfs get -r -o name,property,value,source recordsize tank/apps
NAME PROPERTY VALUE SOURCE
tank/apps recordsize 128K local
tank/apps/postgres recordsize 128K inherited from tank/apps
tank/apps/redis recordsize 128K inherited from tank/apps
tank/apps/artifacts recordsize 128K inherited from tank/apps
Meaning: One local setting at tank/apps dictates the entire subtree.
Decision: either accept that as policy, or break out special datasets with local record sizes
(e.g., Postgres to 16K or 8K depending on your storage and workload).
Task 3: Find everything that inherits from a specific parent (blast radius)
cr0x@server:~$ zfs get -r -H -o name,source mountpoint tank/apps | awk -F'\t' '$2 == "inherited from tank/apps" {print $1, $2}'
tank/apps/postgres inherited from tank/apps
tank/apps/redis inherited from tank/apps
tank/apps/artifacts inherited from tank/apps
Meaning: Those datasets will move if you change tank/apps mountpoint.
Decision: do not touch mountpoint at the parent unless you have a maintenance window
and a rollback plan.
Task 4: Verify what will mount and what won’t
cr0x@server:~$ zfs list -r -o name,canmount,mountpoint,mounted tank/apps
NAME CANMOUNT MOUNTPOINT MOUNTED
tank/apps on /srv/apps yes
tank/apps/postgres on /srv/apps/postgres yes
tank/apps/redis on /srv/apps/redis yes
Meaning: All datasets are eligible to mount and are mounted.
Decision: if something is missing, look for canmount=off or mountpoint=none
and trace where it’s inherited from.
Task 5: Catch “we changed the parent mountpoint” before reboot does
cr0x@server:~$ zfs set mountpoint=/srv tank/apps
cr0x@server:~$ zfs mount -a
cannot mount 'tank/apps/postgres': mountpoint or dataset is busy
Meaning: The new mount layout conflicts with existing mounts or running processes.
Decision: revert the parent mountpoint immediately, or stop services cleanly and remount
in a controlled window. Do not “force it” mid-flight.
Task 6: Identify local overrides (the places policy won’t reach)
cr0x@server:~$ zfs get -r -H -o name,property,value,source compression tank | awk '$4=="local"{print}'
tank/apps compression lz4 local
tank/backups compression off local
Meaning: Only two datasets have explicit compression settings; everything else is inherited or default.
Decision: confirm that these local overrides are intentional. Local overrides are where “standards”
silently stop applying.
Task 7: Reset a child dataset to inherit again (controlled un-customization)
cr0x@server:~$ zfs get -o name,property,value,source atime tank/apps/postgres
NAME PROPERTY VALUE SOURCE
tank/apps/postgres atime off local
cr0x@server:~$ zfs inherit atime tank/apps/postgres
cr0x@server:~$ zfs get -o name,property,value,source atime tank/apps/postgres
NAME PROPERTY VALUE SOURCE
tank/apps/postgres atime on inherited from tank/apps
Meaning: You removed the local override, and now Postgres follows the parent.
Decision: do this only if you’re sure the parent policy is correct for the workload. “Inherit”
is not a free cleanup button.
Task 8: See what properties are set locally on a dataset (quick profile)
cr0x@server:~$ zfs get -s local all tank/apps/postgres | head
NAME PROPERTY VALUE SOURCE
tank/apps/postgres atime off local
tank/apps/postgres logbias latency local
tank/apps/postgres primarycache metadata local
tank/apps/postgres recordsize 16K local
Meaning: These are deliberate deviations from inherited/default policy.
Decision: if you didn’t expect these, you have config drift. Track down who set them and why.
If you did expect them, document them because someone will “clean them up” later.
Task 9: Use zfs diff to understand snapshot-visible changes (ops reality check)
cr0x@server:~$ zfs diff tank/apps/postgres@before-change tank/apps/postgres@after-change | head
M /srv/apps/postgres/postgresql.conf
+ /srv/apps/postgres/pg_wal/0000000100000000000000A1
- /srv/apps/postgres/tmp/old.sock
Meaning: This is file-level change tracking between snapshots, not property changes.
Decision: if performance changed but data didn’t, you’re likely looking at property inheritance or
kernel/module behavior—not “someone edited files.”
Task 10: Confirm dataset layout and spot children that shouldn’t exist
cr0x@server:~$ zfs list -r -o name,used,avail,refer,mountpoint tank/apps
NAME USED AVAIL REFER MOUNTPOINT
tank/apps 120G 2.1T 256K /srv/apps
tank/apps/postgres 80G 2.1T 80G /srv/apps/postgres
tank/apps/redis 2G 2.1T 2G /srv/apps/redis
tank/apps/tmp 35G 2.1T 35G /srv/apps/tmp
Meaning: A heavy tmp dataset inside the apps subtree is a classic inheritance hazard.
Decision: move ephemeral workloads to their own parent policy domain (or set local overrides),
especially for recordsize, sync, and atime.
Task 11: Check whether quotas or reservations are quietly constraining children
cr0x@server:~$ zfs list -r -o name,quota,refquota,reservation,refreservation,used,available tank/apps/postgres
NAME                 QUOTA  REFQUOTA  RESERV  REFRESERV  USED  AVAIL
tank/apps/postgres   none   none      100G    none       80G   20G
Meaning: You reserved 100G; the dataset has only 20G “available” even if the pool has more.
Decision: if Postgres is nearing 100G and you didn’t intend hard space pressure, adjust or remove
the reservation. If you did intend it, alert on it—reservations are policy, not trivia.
Task 12: Validate compression and logical space (is it helping or hurting?)
cr0x@server:~$ zfs get -o name,property,value,source compression,compressratio tank/apps/artifacts
NAME PROPERTY VALUE SOURCE
tank/apps/artifacts compression lz4 inherited from tank/apps
tank/apps/artifacts compressratio 1.72x -
Meaning: Compression is on and effective. If compressratio is ~1.00x, you’re spending
CPU for little benefit. Decision: keep compression for most mixed workloads; turn it off only with
evidence, not vibes.
Task 13: Spot mountpoint collisions (a common inheritance failure mode)
cr0x@server:~$ zfs get -r -H -o name,value mountpoint tank | sort -k2 | awk 'prev==$2{print "collision:", prev, "between", prevname, "and", $1} {prev=$2; prevname=$1}'
collision: /srv/apps between tank/apps and tank/apps/legacy
Meaning: Two datasets share the same mountpoint. ZFS will not happily mount both.
Decision: fix mountpoints before reboot day. Collisions often show up as “random missing mounts”
depending on mount order.
Task 14: Confirm which properties are inheritable (know what game you’re playing)
cr0x@server:~$ zfs get -H -o property,value,source all tank/apps | head -n 8
type filesystem -
creation Wed Nov 13 10:12 2024 -
used 120G -
available 2.1T -
referenced 256K -
compressratio 1.23x -
mounted yes -
quota none default
Meaning: Some properties show a meaningful source; others are dynamic and don’t.
Decision: focus inheritance audits on behavioral properties (compression, recordsize, atime,
mountpoint, canmount, snapdir, sync, logbias, primarycache/secondarycache, xattr, acltype, encryption-related).
Fast diagnosis playbook
This is the “it’s slow / it’s not mounting / it’s out of space” triage flow that gets you to the inheritance
culprit quickly. Run it like a checklist. Speed matters; correctness matters more.
First: define the scope (one dataset or a subtree?)
- Identify the dataset(s) backing the path: check mountpoint mapping and zfs list.
- Determine whether siblings are also affected (inheritance often hits siblings).
- Ask “what parent would unify these datasets?” That’s where the change likely happened.
Second: check property sources for the top suspects
- Mount problems: mountpoint, canmount, readonly, overlay (if applicable), and collisions.
- Performance regression: recordsize, compression, atime, sync, logbias, primarycache, secondarycache.
- Space surprises: quota, refquota, reservation, refreservation.
Third: confirm the change is policy, not hardware
- Pool health: zpool status should be clean; if not, you’re debugging the wrong problem.
- Dataset tree diff: compare property sources between “good” and “bad” datasets (a quick sketch follows this playbook).
- Look for recent admin activity: shell history, change management notes, automation commits.
Fourth: fix with the smallest blast radius
- If a single child needs a different behavior, set a local override on the child.
- If policy is wrong for many children, change the parent—but only after auditing the subtree.
- If you can’t safely decide, restore prior values (rollback the property change) and revisit with data.
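For the “dataset tree diff” step above, a crude but effective sketch is to diff a known-good sibling against the suspect; the names are the article’s examples, and the command assumes a bash-style shell:
cr0x@server:~$ diff <(zfs get -H -o property,value,source all tank/apps/redis) <(zfs get -H -o property,value,source all tank/apps/postgres)
Differences in dynamic properties (used, referenced, creation) are noise; lines where the source column differs are exactly the policy drift you are hunting.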
Three corporate mini-stories from the inheritance mines
Mini-story 1: The incident caused by a wrong assumption
A mid-size SaaS company had a tidy ZFS layout: tank/apps for application state, with children
for Postgres, Redis, and a blob store. A new SRE was asked to “make backups easier to find” on a host
used by multiple teams. They noticed the parent dataset mounted at /srv/apps and wanted everything
under /srv for “consistency.”
They ran a single command: set mountpoint=/srv on tank/apps. The system didn’t explode
immediately because most mounts were already active. The change sat there like a banana peel.
That night, a routine reboot for kernel patching triggered remounts. Some datasets failed to mount due to
“busy” targets, others collided with existing directories, and systemd units raced ahead assuming paths existed.
The incident ticket read like a horror anthology: “Postgres won’t start,” “Redis data missing,” “application
can’t find uploads.” The pool was healthy. Disk IO looked fine. The problem was that paths weren’t where services
expected them.
The fix was straightforward but humbling: revert the parent mountpoint, set a local mountpoint only where needed,
and add a pre-reboot check that compares current mountpoints against an approved inventory. The real lesson
wasn’t “don’t touch mountpoint.” It was “don’t assume dataset properties behave like per-directory settings.”
The postmortem’s sharpest line: no one reviewed the blast radius. The command was correct. The assumption was not.
Mini-story 2: The optimization that backfired
Another org ran build artifacts and container layers on ZFS. They were fighting capacity: the pool kept creeping
toward uncomfortable usage. Someone proposed turning on compression at the top of the tree:
zfs set compression=lz4 tank. It’s a common recommendation. It’s also not free.
The deployment was done via automation. Within hours, monitoring showed CPU increasing on a subset of nodes.
Latency on a busy API also rose. The team assumed the API change was unrelated—until they correlated the nodes
with dataset layout. The API’s dataset was under tank too, but no one thought of it as “sharing”
anything with build artifacts.
Compression wasn’t the villain; it was the interaction. That API handled lots of small, already-compressed payloads.
Compression ratios hovered around 1.00x. Meanwhile, the extra CPU cycles competed with application threads during
peak traffic. On paper, it was “minor overhead.” In reality, it was overhead in the wrong place at the wrong time.
They recovered by setting compression=off locally for the API dataset and leaving it on elsewhere.
The backfire wasn’t that compression is bad. The backfire was treating the dataset tree like a static filing cabinet
instead of a shared performance domain.
The long-term improvement was policy: top-level inheritance only for properties that are safe and broadly beneficial,
and explicit “exception datasets” documented and tested. Compression stayed on for most things. The API got its own rules.
Mini-story 3: The boring but correct practice that saved the day
A finance-adjacent company ran ZFS for multi-tenant internal services. Their storage lead insisted on a dull ritual:
every quarter, run a recursive property audit and commit the output to the infrastructure repo as an artifact.
Not as a diagram. The raw zfs get -r output. Everyone complained it was bureaucratic.
Then a new platform rollout introduced a subtle change: a dataset parent got atime=on locally due to a
misguided “restore defaults” play. Nobody noticed during development because the test workloads were tiny.
In production, read-heavy services started generating extra writes. Latency rose. NVMe write amplification
ticked upward. Nothing catastrophic, just that slow, expensive burn.
The on-call engineer did the usual checks. Pool healthy. No errors. IO patterns looked “busy” but not failing.
Then they pulled the last quarterly property snapshot and diffed it against current state. The difference popped:
atime source changed at a parent. That immediately narrowed the scope to a subtree and a single property.
Fix: restore atime=off at the correct parent, confirm children inherited it, and add a guardrail in
automation to prevent accidental re-enabling. Downtime: none. Debug time: short. The boring practice paid for itself
in one incident.
The moral is unsexy: property inventories aren’t paperwork; they’re time machines. When you’re debugging,
a known-good baseline is a weapon.
Common mistakes: symptoms → root cause → fix
1) “Child dataset changed and nobody touched it”
Symptoms: behavior changes (performance, mount paths, snapshot visibility), but the child dataset shows no recent admin actions.
Root cause: property inherited from a parent that was modified.
Fix: run zfs get -o name,property,value,source on the child and trace the source. Either revert the parent change or set a local override on the child.
2) “After reboot, some datasets are missing”
Symptoms: services fail at boot, mountpoints empty, ZFS datasets show mounted=no.
Root cause: inherited mountpoint change, mountpoint collisions, or canmount=off set on a parent.
Fix: check zfs get -r canmount,mountpoint,mounted. Remove collisions; set local mountpoints on children that need unique paths; ensure parents used as containers are explicitly canmount=off only when intended.
3) “Performance regression after ‘harmless’ tuning”
Symptoms: higher latency, more IO, CPU increase; no pool errors.
Root cause: inherited recordsize, atime, or compression applied to workloads that dislike it.
Fix: audit property sources across the subtree. Apply per-workload local overrides. Avoid tuning at high-level parents unless you can justify it for every child.
4) “No space left on device, but the pool has space”
Symptoms: application sees ENOSPC; zpool list shows free space.
Root cause: dataset quota/refquota/reservation/refreservation limits; sometimes inherited expectations rather than inherited properties.
Fix: inspect zfs get quota,refquota,reservation,refreservation,available. Adjust limits, and alert on “available” at the dataset level, not just pool free space.
5) “Snapshots suddenly visible (or suddenly gone)”
Symptoms: apps traverse .zfs directories, backup tools behave oddly, compliance teams ask questions.
Root cause: inherited snapdir changed at a parent.
Fix: set snapdir=hidden or snapdir=visible deliberately at the correct policy boundary; verify recursively with zfs get -r snapdir.
6) “We turned on encryption at the parent; why aren’t existing children encrypted?”
Symptoms: new datasets are encrypted, older ones are not; audit flags it.
Root cause: encryption properties influence dataset creation and inheritance, but you can’t retroactively encrypt existing datasets in-place.
Fix: plan a send/receive migration to a new encrypted dataset tree; treat “inheritance” as a template for new datasets, not a time machine for old data.
Checklists / step-by-step plan
Plan A: Safely change a property on a parent dataset
1) Identify the subtree. List children and mounts.
cr0x@server:~$ zfs list -r -o name,mountpoint tank/apps
NAME                 MOUNTPOINT
tank/apps            /srv/apps
tank/apps/postgres   /srv/apps/postgres
tank/apps/redis      /srv/apps/redis
Decision: if you don’t recognize every dataset, stop. Unknown children are where surprises hide.
2) Audit current property values and sources recursively.
cr0x@server:~$ zfs get -r -o name,property,value,source compression,recordsize,atime,sync tank/apps
NAME                 PROPERTY     VALUE     SOURCE
tank/apps            compression  lz4       local
tank/apps            recordsize   128K      local
tank/apps            atime        on        default
tank/apps            sync         standard  default
tank/apps/postgres   compression  lz4       inherited from tank/apps
tank/apps/postgres   recordsize   16K       local
tank/apps/postgres   atime        on        inherited from tank/apps
tank/apps/postgres   sync         standard  inherited from tank/apps
Decision: if important children have local overrides, confirm they remain correct after your parent change.
3) Model the impact. List which datasets currently inherit that property from the parent.
cr0x@server:~$ zfs get -r -H -o name,source atime tank/apps | awk -F'\t' '$2 == "inherited from tank/apps" {print $1}'
tank/apps/postgres
tank/apps/redis
Decision: if the impacted list contains “special workloads,” consider local overrides instead of changing the parent.
4) Make the change in a controlled window (or at least controlled conditions).
cr0x@server:~$ zfs set atime=off tank/apps
Decision: if this is mount-related, schedule downtime; mountpoint changes are operationally loud.
5) Verify recursively and watch for outliers.
cr0x@server:~$ zfs get -r -o name,property,value,source atime tank/apps
NAME                 PROPERTY  VALUE  SOURCE
tank/apps            atime     off    local
tank/apps/postgres   atime     off    inherited from tank/apps
tank/apps/redis      atime     off    inherited from tank/apps
Decision: if a dataset stayed atime=on, it has a local override; decide if that’s correct.
Plan B: Make a child immune to future parent changes
1) Identify which properties should be pinned locally. Typical: recordsize, logbias, primarycache, sync for databases (careful), or mountpoint.
2) Set local overrides explicitly and document why.
cr0x@server:~$ zfs set recordsize=16K tank/apps/postgres
cr0x@server:~$ zfs set primarycache=metadata tank/apps/postgres
cr0x@server:~$ zfs get -o name,property,value,source recordsize,primarycache tank/apps/postgres
NAME                 PROPERTY      VALUE     SOURCE
tank/apps/postgres   recordsize    16K       local
tank/apps/postgres   primarycache  metadata  local
Decision: local overrides are a contract. If you can’t justify them, you’re just making the future harder.
Plan C: Prevent surprises with a property baseline
1) Capture a recursive property snapshot for key datasets.
cr0x@server:~$ zfs get -r -o name,property,value,source all tank/apps > /var/tmp/zfs-props-tank-apps.txt
cr0x@server:~$ wc -l /var/tmp/zfs-props-tank-apps.txt
842 /var/tmp/zfs-props-tank-apps.txt
Decision: store it somewhere durable (repo artifact, config management store). If it lives only on the host, it will die with the host.
2) Diff before/after changes.
cr0x@server:~$ diff -u /var/tmp/zfs-props-tank-apps-before.txt /var/tmp/zfs-props-tank-apps-after.txt | head
--- /var/tmp/zfs-props-tank-apps-before.txt
+++ /var/tmp/zfs-props-tank-apps-after.txt
@@
-tank/apps atime on default
+tank/apps atime off local
Decision: if the diff is larger than expected, stop and re-evaluate. Big diffs are where outages come from. (A cron-able version of this check is sketched right after this plan.)
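If you want the baseline checked automatically, here is a minimal sketch of a drift check; the script name and baseline path are illustrative, it limits itself to behavioral properties so routine growth in used/available doesn’t page anyone, and it only reports, never changes anything:
#!/bin/sh
# zfs-prop-drift.sh (illustrative): compare behavioral ZFS properties against a baseline.
# Generate the baseline once with the same zfs get command below and commit it to your repo.
BASELINE=/var/tmp/zfs-props-tank-apps-baseline.txt
CURRENT=$(mktemp)
trap 'rm -f "$CURRENT"' EXIT
zfs get -r -H -o name,property,value,source \
  compression,recordsize,atime,sync,logbias,primarycache,mountpoint,canmount,snapdir \
  tank/apps > "$CURRENT"
if ! diff -u "$BASELINE" "$CURRENT"; then
  echo "ZFS property drift detected on tank/apps" >&2
  exit 1
fi
Run it from cron or your config management agent after every change window.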
FAQ
1) How do I know if a dataset property is inherited?
Use zfs get and look at the source column. If it says “inherited from …”, that’s your answer.
2) If I set a property on a parent, does it always affect all children?
It affects children that don’t have a local override for that property, and only for properties that are inheritable.
Children with local values are insulated.
3) What’s the difference between zfs inherit and setting a property to its default value?
zfs inherit removes the local setting, causing the dataset to inherit from its parent (or fall back to default if no parent sets it).
Setting a value explicitly makes it local, even if it matches the default.
4) Why did changing mountpoint break things even though it’s “just a path”?
Because paths are contracts with services, configs, and boot order. Inheritance can shift an entire subtree’s mount layout.
It’s not just a rename; it’s a topology change.
5) Is it safe to enable compression=lz4 at the pool’s top dataset?
Often yes, but “safe” depends on workload and CPU headroom. Audit compression ratios (compressratio) and CPU usage.
If a dataset stores mostly pre-compressed data, set a local exception.
6) Should I tune recordsize at a high-level parent?
Usually no. Use parent-level recordsize only when children are homogeneous.
For mixed workloads, pin recordsize locally per dataset that represents a workload.
7) Can I retroactively encrypt existing datasets by setting encryption properties on the parent?
No. You typically need a send/receive migration to a new encrypted dataset.
Parent encryption properties guide creation/inheritance for new datasets, not in-place conversion.
8) How do I find which dataset a directory is on when mounts are confusing?
Check the ZFS mount table via zfs mount and compare mountpoints, or use the OS mount tools.
Then query properties on that dataset specifically.
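On Linux, for example, a sketch that usually answers it (paths are the article’s examples; output formats vary):
cr0x@server:~$ df -h /srv/apps/postgres        # for ZFS mounts, the Filesystem column shows the dataset name
cr0x@server:~$ findmnt -T /srv/apps/postgres   # shows the source dataset and mount options for the path
cr0x@server:~$ zfs get -o name,property,value,source mountpoint tank/apps/postgres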
9) Why do two datasets show the same property value but behave differently later?
Same value, different source. A value from default won’t change unless you change it.
A value inherited from a parent can change whenever the parent changes.
10) How do I keep inheritance but avoid surprises?
Put policy boundaries where the org boundaries are: one parent per class of workload.
Then run recursive audits and keep a baseline so you can prove what changed.
Conclusion: next steps you can actually do
ZFS property inheritance is not an academic detail. It’s a production control plane. It can save you from config sprawl,
and it can absolutely kneecap you when a “small change” turns out to be a subtree-wide policy shift.
Do these next:
- Pick three critical dataset trees (databases, backups, and whatever fills disks) and run a recursive property audit. Save it.
- Mark workload datasets explicitly and set local overrides where the workload truly differs (especially recordsize, atime, mount properties).
- Adopt the blast-radius habit: before changing a parent, list all children inheriting that property and make a conscious decision.
- Operationalize drift detection: diff your saved baseline against current properties after every change window.
If you only remember one thing: never look at a ZFS property value without looking at its source.
That’s where the surprise is hiding.