If zfs send is the export department, zfs receive is customs: everything looks fine until someone notices the paperwork. Properties are that paperwork. Ignore them and you don’t just get a mildly messy import—you get datasets mounted where they shouldn’t be, encryption keys you can’t load, “successful” replications that don’t boot, and performance that looks like a storage system doing interpretive dance.
I’ve run ZFS replication in boring enterprises, frantic SaaS shops, and the kind of corporate basement where “DR plan” means “hope.” The pattern repeats: teams treat receive as a dumb pipe. It’s not. zfs receive is where ZFS decides what your data means on the target: where it will mount, how it will compress, whether it will even be readable, and what happens when the stream includes properties you didn’t plan for.
What zfs receive really does (and why properties are the trap)
zfs receive takes a replication stream and reconstitutes datasets, snapshots, and optionally properties. That last part is the quiet killer: properties are not just nice-to-haves like “compression=lz4.” Properties include mountpoints, canmount behavior, encryption metadata, quotas, reservations, recordsize, ACL mode, xattr storage, and a pile of platform-specific stuff that may not even mean the same thing on the target OS.
The wrong assumption is that replication streams are “data only.” In reality, a typical ZFS stream can include:
- The dataset’s content blocks.
- Snapshot metadata.
- Dataset properties (depending on send flags and ZFS implementation).
- Feature flags and compatibility expectations.
Then zfs receive applies the stream into an existing pool namespace, where your target’s local policies (and your target’s existing datasets) might disagree with the source.
In operations, “disagree” doesn’t mean a friendly warning. It means:
- A dataset mounts on top of a real directory and hides data you needed.
- A property inherited on the source becomes local on the target (or vice versa) and you spend hours chasing why behavior diverged.
- An encrypted dataset arrives without the key material you expected.
- Replication “works” but restore is broken because you replicated a mountpoint that only makes sense on the source host.
Joke #1: Treating zfs receive as “just the import side” is like treating airbags as “just the inflatable side.” You only notice you were wrong at speed.
Facts and historical context that actually matter in production
- ZFS was designed with replication as a first-class primitive. Snapshots and send/receive were part of the original Sun-era vision: consistent point-in-time transfer without filesystem freeze gymnastics.
- Properties are part of the filesystem contract. ZFS uses properties to drive behavior that other filesystems leave to mount options or external tooling.
- OpenZFS diverged across platforms for years. Behavior around ACLs, xattrs, and some property defaults can vary between illumos, FreeBSD, and Linux—especially in older deployments.
- Feature flags changed upgrade risk. Pools and datasets gained feature flags that allow rolling upgrades and safer activation, but also create “can’t import on older system” surprises if you replicate blindly to a legacy target.
- Encryption arrived later than snapshots. Native encryption is modern OpenZFS; many estates still mix “legacy encrypted at rest” (self-encrypting drives, LUKS) with ZFS-native encryption, which affects replication assumptions.
- lz4 became the default compression favorite for a reason. It's fast enough to be "usually on" without the operational shame of wasting CPU on restores.
- Resume tokens exist because long transfers fail. ZFS replication at scale hits flaky networks, maintenance windows, and human impatience; resume tokens turned "start over" into "continue."
- mountpoint is a property, not a mount(8) option. That design is powerful, but it means replication can transport your mount layout—whether you want it or not.
- Recordsize and volblocksize are performance levers with consequences. They can make databases fly or make small-file workloads cry, and replication can carry those choices across environments unintentionally.
A practical mental model: stream, dataset, properties, and mounts
When you run:
cr0x@server:~$ zfs send poolA/app@weekly | zfs receive poolB/restore/app
you’re not “copying files.” You’re applying a transaction log that rebuilds a dataset tree. That has three layers of consequence:
1) Namespace: where does the dataset land?
poolB/restore/app is the dataset name you receive into. But inside the stream, the dataset(s) may have their own names and properties, and if you use replication flags that include child datasets, the receive side may create an entire subtree under your target path.
2) Properties: what behavior comes along for the ride?
Properties can be set as local values or inherited. Replication can preserve those choices. That’s good when you want a faithful mirror and terrible when you’re landing the data in a different operational environment (DR, dev, analytics) with different mountpoints, quotas, and performance tuning.
3) Mount behavior: when does it mount and where?
By default, a received filesystem may mount automatically (depending on canmount, mountpoint, and the receive flags). That can collide with existing paths and services. “Why did my application suddenly read old config?” is a common postmortem theme.
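A quick way to see what a receive actually mounted, before services notice, is to list current ZFS mounts and the mount-related properties under the landing path (dataset names are this article's examples):
cr0x@dst:~$ zfs mount | grep poolB/restore
cr0x@dst:~$ zfs get -r -o name,value canmount,mountpoint poolB/restore
If anything in that output points at a live path, unmount it before you start debugging the application.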
Property behavior on receive: inherited, local, and “surprise, it changed”
The key to staying sane is to separate two questions:
- What properties are in the stream? (decided on the send side, and by whether you’re doing a raw send, replication send, etc.)
- What properties are applied on the target? (decided by receive options and the target’s existing dataset tree)
Properties that commonly cause real outages
- mountpoint: replicates a path that may not exist or may be in use on the target.
- canmount: a dataset that should not mount gets mounted (or vice versa), changing what services see.
- sharenfs / sharesmb: suddenly you’re exporting data on a network you didn’t mean to touch.
- atime: changes metadata churn; can turn a read-heavy workload into unexpected writes.
- recordsize: harmless for many cases, but lethal for DBs and VM images when it’s wrong.
- xattr and acltype: cross-platform differences can make permissions look “fine” until an app starts failing.
- quota/reservation/refquota/refreservation: replicated limits can brick a restore by instantly “filling” the dataset from the target’s perspective.
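A low-effort audit before and after a receive is to dump the risky properties on both sides and diff them (a sketch; host names, dataset names, and the property list are examples to adapt):
cr0x@dst:~$ ssh src zfs get -H -o property,value mountpoint,canmount,sharenfs,sharesmb,atime,recordsize,quota,refquota poolA/app > /tmp/props.src
cr0x@dst:~$ zfs get -H -o property,value mountpoint,canmount,sharenfs,sharesmb,atime,recordsize,quota,refquota poolB/restore/app > /tmp/props.dst
cr0x@dst:~$ diff /tmp/props.src /tmp/props.dst
Every line in the diff is a behavioral difference you either chose deliberately or will meet again in an incident.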
Overriding vs excluding properties on receive
Operationally, you usually want one of these patterns:
- Faithful mirror (DR mirror you might promote): preserve properties, but ensure the namespace/mount strategy is safe (often receive with -u and then explicitly set mountpoints).
- Data landing zone (analytics/dev/test): override mountpoints, disable shares, and apply local performance policy (compression, recordsize) appropriate for the target.
If your ZFS supports property exclusion/override flags on receive (in modern OpenZFS these are zfs receive -x property and -o property=value), they are your safety rails. The exact flags and semantics vary by implementation and version, so validate against your zfs-receive man page before relying on them. In this article, I'll show patterns that work broadly and call out where version differences matter.
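A sketch of that pattern (flag support and exact semantics vary by version, so check your zfs-receive man page before relying on it):
cr0x@src:~$ zfs send -R poolA/app@weekly | ssh dst 'zfs receive -u -o canmount=noauto -x mountpoint -x sharenfs -x sharesmb poolB/restore'
-x tells the receive to ignore those properties from the stream so the target's defaults or inherited values apply; -o pins a local value; -u still keeps everything unmounted until you decide otherwise.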
Encryption on receive: keys, raw sends, and the “why can’t I mount this” genre
ZFS-native encryption introduces a concept non-encrypted admins often miss: the dataset can exist and replicate perfectly while being unusable until keys are handled correctly. This is not a bug; it’s the point.
Two replication modes that behave very differently
Non-raw send of an encrypted dataset requires the key to be loaded on the source, because the data is decrypted into the stream. The receiver stores it under its own encryption policy: possibly unencrypted, or re-encrypted under a different key if you receive into an encrypted parent or set encryption at receive time. Either way, it is not a byte-for-byte encrypted clone, and the stream can carry properties you didn't intend to export.
Raw send (zfs send -w in many OpenZFS builds) transmits the encrypted dataset as encrypted blocks plus enough metadata to preserve encryption. The receiver does not need the plaintext, and the encryption root/parameters are preserved. This is the mode you want for “replicate to an untrusted DR site” or “backup appliance” setups.
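A minimal raw-replication sketch, assuming your build supports -w (--raw); dataset names are examples:
cr0x@src:~$ zfs send -w poolA/secure@weekly | ssh dst 'zfs receive -u poolB/restore/secure'
cr0x@src:~$ zfs send -w -i poolA/secure@weekly poolA/secure@daily | ssh dst 'zfs receive -u poolB/restore/secure'
The source doesn't need keys loaded, the wire carries ciphertext, and the target ends up with an encrypted dataset it cannot read until someone deliberately loads the key. Once you start raw, keep subsequent incrementals raw as well.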
Key handling on the target is an operational workflow
Receiving an encrypted dataset without a plan for keylocation and keyformat is how you create the world’s most reliable paperweight. You’ll run zfs list, see the datasets, and then everything fails at mount time.
Joke #2: An encrypted dataset without a key is the purest form of write-only storage. Auditors love it; users do not.
Mountpoints and automounting: how people DOS themselves with success
The most common “receive broke production” scenario is not corruption. It’s a mount collision. A receive happens, the dataset mounts automatically, and suddenly:
- /var on the target is shadowed by a replicated mountpoint.
- A directory with live data is hidden behind the mount, making services read stale configs or fail to find files.
- A restore environment unexpectedly shares data over NFS/SMB because share properties came along.
The fix is usually simple—receive without mounting, then explicitly set mountpoints and canmount before you load services. The hard part is remembering to do it when you’re tired, it’s 02:00, and management is asking for ETAs like it’s a sport.
Real operational tasks (commands + interpretation)
Below are practical tasks I’ve actually used (or watched someone wish they had used) in production ZFS replication workflows. Commands assume a typical OpenZFS environment; adjust pool/dataset names to your reality.
Task 1: Inspect what properties matter on the source before you send
cr0x@src:~$ zfs get -r -o name,property,value,source mountpoint,canmount,compression,recordsize,quota,refquota,sharenfs,sharesmb poolA/app
NAME PROPERTY VALUE SOURCE
poolA/app mountpoint /srv/app local
poolA/app canmount on default
poolA/app compression lz4 local
poolA/app recordsize 128K default
poolA/app quota none default
poolA/app sharenfs off default
Interpretation: This tells you what you’re about to replicate as behavior. If mountpoint is local and points to a path that doesn’t exist on the target, decide now whether to override it.
Task 2: Dry-run your receive plan by checking the target namespace and conflicts
cr0x@dst:~$ zfs list -o name,mountpoint,canmount -r poolB/restore
NAME MOUNTPOINT CANMOUNT
poolB/restore /poolB/restore on
poolB/restore/app - -
Interpretation: If poolB/restore/app already exists (or if a parent dataset has an inherited mountpoint that would cause a collision), fix the namespace before receiving.
Task 3: Receive without mounting to avoid collisions
cr0x@src:~$ zfs send poolA/app@weekly | ssh dst 'zfs receive -u poolB/restore/app'
Interpretation: -u keeps it unmounted even if the stream contains mountpoint/canmount settings that would normally mount it. This is the safest default for restores and DR seeding.
Task 4: Force a safe mountpoint after receive (landing zone pattern)
cr0x@dst:~$ zfs set mountpoint=/srv/restore/app poolB/restore/app
cr0x@dst:~$ zfs set canmount=noauto poolB/restore/app
cr0x@dst:~$ zfs mount poolB/restore/app
Interpretation: You’ve made mounting explicit and predictable. noauto prevents surprise mounts on boot/import; you control when services see the data.
Task 5: Receive a full subtree (replication) into a prefixed path
cr0x@src:~$ zfs send -R poolA/app@weekly | ssh dst 'zfs receive -u poolB/restore'
Interpretation: This can create multiple datasets under poolB/restore. Great for full application trees; dangerous if you weren’t expecting children with their own mountpoints and share properties.
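If you want to see what a replication stream would create before committing, a dry-run receive (supported in modern OpenZFS) prints the dataset and snapshot names without writing anything:
cr0x@src:~$ zfs send -R poolA/app@weekly | ssh dst 'zfs receive -n -v -u poolB/restore'
The stream still has to be generated and shipped, so this costs time and bandwidth, but it is cheap insurance before a -R receive into a pool you care about.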
Task 6: Validate the received snapshots and space use
cr0x@dst:~$ zfs list -t snapshot -o name,used,refer -r poolB/restore/app | head
NAME USED REFER
poolB/restore/app@weekly 0B 220G
poolB/restore/app@daily-2025-12 0B 218G
Interpretation: Snapshots present, refer sizes look sane. If USED is unexpectedly large, you may have received extra snapshots, or a later incremental may not line up with the chain you expected.
Task 7: Incremental send/receive with a clear base snapshot
cr0x@src:~$ zfs send -I poolA/app@weekly poolA/app@daily | ssh dst 'zfs receive -u poolB/restore/app'
Interpretation: -I sends all intermediate snapshots between weekly and daily, not just the final delta. If the target is missing the base snapshot (weekly here), this will fail. That failure is healthy—it prevents silent divergence.
Task 8: Handle a dataset that already exists on the target (force receive)
cr0x@src:~$ zfs send poolA/app@weekly | ssh dst 'zfs receive -F -u poolB/restore/app'
Interpretation: -F rolls back the target dataset to the latest snapshot matching the stream and discards divergent changes. This is powerful and dangerous: if someone wrote new data on the target, it’s gone.
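Before reaching for -F, it's worth checking whether the target actually has changes to lose; the written property shows data written since the most recent snapshot (a quick sketch):
cr0x@dst:~$ zfs get -H -o value written poolB/restore/app
cr0x@dst:~$ zfs list -t snapshot -o name -r poolB/restore/app | tail -3
A non-zero written value or unexpected snapshots mean someone used the target as a writable copy, and -F will erase that work.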
Task 9: Use resume tokens when a long receive gets interrupted
cr0x@dst:~$ zfs get -H -o value receive_resume_token poolB/restore/app
1-2f4c9e1b7a-8000000000-1a2b3c4d5e6f...
cr0x@src:~$ zfs send -t 1-2f4c9e1b7a-8000000000-1a2b3c4d5e6f... | ssh dst 'zfs receive -s -u poolB/restore/app'
Interpretation: If your environment supports resume tokens, you can continue without resending from scratch. The token is read on the destination, but the resumed send runs on the source. If the token is - or empty, resume isn't available or isn't active.
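Note that a token only exists if the interrupted receive was started with -s (where supported), so make resumability part of the normal invocation for long transfers:
cr0x@src:~$ zfs send -i poolA/app@weekly poolA/app@daily | ssh dst 'zfs receive -s -u poolB/restore/app'
Without -s, an interrupted receive discards the partial state and you resend that increment from scratch.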
Task 10: Confirm encryption state and key requirements after receive
cr0x@dst:~$ zfs get -r -o name,property,value,source encryption,keylocation,keystatus poolB/restore/app
NAME PROPERTY VALUE SOURCE
poolB/restore/app encryption aes-256-gcm -
poolB/restore/app keylocation file:///... local
poolB/restore/app keystatus unavailable -
Interpretation: Dataset exists but keys aren’t loaded. You can’t mount it until you load keys. If keylocation is wrong for the target, fix it before attempting mounts.
Task 11: Load keys and mount safely (encrypted datasets)
cr0x@dst:~$ zfs set keylocation=prompt poolB/restore/app
cr0x@dst:~$ zfs load-key poolB/restore/app
Enter passphrase for 'poolB/restore/app':
cr0x@dst:~$ zfs mount poolB/restore/app
Interpretation: Switching to prompt is a common operational choice for DR restores. For unattended boot, you might use a file-based keylocation, but that’s a separate security conversation you should have intentionally.
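For unattended mounts, a file-based keylocation is the usual compromise; the path below is an example, the key file must match the dataset's keyformat, and protecting that file becomes part of your threat model:
cr0x@dst:~$ zfs set keylocation=file:///etc/zfs/keys/app.key poolB/restore/app
cr0x@dst:~$ zfs load-key poolB/restore/app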
Task 12: Measure whether the receive is bottlenecked by CPU, disk, or network
cr0x@dst:~$ zpool iostat -v 1
capacity operations bandwidth
poolB alloc free read write read write
---------------------------- ----- ----- ----- ----- ----- -----
poolB 8.2T 10.1T 0 1200 0 450M
raidz2-0 8.2T 10.1T 0 1200 0 450M
sda - - 0 150 0 58M
sdb - - 0 148 0 57M
Interpretation: If disk writes are high and steady but receive is slow, you may be CPU-bound on decompression/checksumming, or limited by sync settings. If writes are low, look at network/SSH or the send side.
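To separate CPU from disk quickly, glance at the busiest processes while the transfer runs (a rough check; exact process names and tooling vary by distro):
cr0x@dst:~$ top -b -n 1 | head -20
An ssh or zfs process pinned near a full core suggests cipher/checksum/decompression overhead; idle CPUs plus low disk writes point back at the network or the send side.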
Task 13: Catch mountpoint collisions before they bite you
cr0x@dst:~$ zfs get -r -o name,property,value,source mountpoint,canmount poolB/restore | grep -E '(/var|/usr|/home|/srv)'
poolB/restore/app mountpoint /srv/app local
Interpretation: That mountpoint is a problem if /srv/app is where the real production app lives on the target. Fix it before mounting.
Task 14: Verify what changed during receive by comparing property sources
cr0x@dst:~$ zfs get -o name,property,value,source -s local,inherited compression,recordsize,atime,acltype,xattr poolB/restore/app
NAME PROPERTY VALUE SOURCE
poolB/restore/app compression lz4 local
poolB/restore/app atime off inherited
poolB/restore/app xattr sa local
Interpretation: This is how you spot “it’s different on DR” issues without guesswork. If a property is inherited from a parent on the target, you may need to set it locally to match expectations.
Fast diagnosis playbook (what to check first, second, third)
This is the checklist I use when replication is slow, broken, or “successful but wrong.” The goal is to identify the limiting layer within minutes, not hours.
First: Is it the wrong dataset state (properties/mount/encryption)?
cr0x@dst:~$ zfs list -o name,type,mountpoint,canmount -r poolB/restore/app
cr0x@dst:~$ zfs get -o name,property,value,source mountpoint,canmount,readonly,sharenfs,sharesmb poolB/restore/app
cr0x@dst:~$ zfs get -o name,property,value encryption,keystatus,keylocation poolB/restore/app
Interpretation: If it won’t mount, encryption and mountpoint are prime suspects. If it mounted somewhere unexpected, mountpoint/canmount and inherited properties are usually the culprit.
Second: Is the stream relationship wrong (incremental base missing, rollback needed, resume token)?
cr0x@dst:~$ zfs list -t snapshot -o name -r poolB/restore/app | tail
cr0x@dst:~$ zfs get -H -o value receive_resume_token poolB/restore/app
Interpretation: Incremental failures often come down to “target doesn’t have the base snapshot” or “target has diverged.” Resume token presence tells you whether you can continue.
Third: Where’s the bottleneck (disk, CPU, network, sync)?
cr0x@dst:~$ zpool iostat 1
cr0x@dst:~$ iostat -xz 1
cr0x@dst:~$ vmstat 1
Interpretation: You’re looking for a saturated resource: disks pegged, CPU pegged (often checksum/compress/decrypt), or a sleepy pipeline because SSH/network is the limiter. If writes stall with high latency, check pool health and recordsize/workload mismatch.
Common mistakes: specific symptoms and fixes
Mistake 1: Receiving into a path that auto-mounts over real data
Symptoms: Services “lose” files, config disappears, directories look empty, or old data suddenly appears. mount shows a new ZFS filesystem mounted at a critical path.
Fix: Immediately unmount the received dataset and set a safe mountpoint.
cr0x@dst:~$ zfs unmount poolB/restore/app
cr0x@dst:~$ zfs set mountpoint=/srv/restore/app poolB/restore/app
cr0x@dst:~$ zfs set canmount=noauto poolB/restore/app
Mistake 2: Incremental receive fails with “does not exist” or “most recent snapshot does not match”
Symptoms: Receive errors out; target snapshot list doesn’t contain the incremental base; or the target has extra snapshots not on source.
Fix: Re-seed with the correct base snapshot, or force rollback if you intentionally want to overwrite divergence.
cr0x@src:~$ zfs send poolA/app@weekly | ssh dst 'zfs receive -u poolB/restore/app'
cr0x@src:~$ zfs send -i poolA/app@weekly poolA/app@daily | ssh dst 'zfs receive -u poolB/restore/app'
If divergence exists and target changes are disposable:
cr0x@src:~$ zfs send -i poolA/app@weekly poolA/app@daily | ssh dst 'zfs receive -F -u poolB/restore/app'
Mistake 3: Quotas/reservations replicate into DR and block restores
Symptoms: You receive successfully, but writes fail immediately with “out of space” despite plenty of pool free space. zfs get quota shows a restrictive value.
Fix: Override quotas/reservations on the target after receive (or exclude them during receive where supported).
cr0x@dst:~$ zfs get -o name,property,value quota,refquota,reservation,refreservation poolB/restore/app
cr0x@dst:~$ zfs set quota=none poolB/restore/app
cr0x@dst:~$ zfs set refquota=none poolB/restore/app
cr0x@dst:~$ zfs set reservation=none poolB/restore/app
cr0x@dst:~$ zfs set refreservation=none poolB/restore/app
Mistake 4: Receiving encrypted datasets but not planning keylocation/keystatus
Symptoms: Dataset exists; mounts fail; keystatus=unavailable.
Fix: Set correct keylocation and load keys, then mount.
cr0x@dst:~$ zfs set keylocation=prompt poolB/restore/app
cr0x@dst:~$ zfs load-key poolB/restore/app
cr0x@dst:~$ zfs mount poolB/restore/app
Mistake 5: “Optimization” by changing recordsize/compression on target mid-stream
Symptoms: Receive becomes slower; fragmentation increases; application performance changes unpredictably after promotion.
Fix: Treat performance properties as part of the system design. Decide policy per dataset role (DB vs logs vs VM images) and set it consistently—preferably before first ingest or with controlled rewrite windows. Remember that recordsize and compression changes only affect blocks written after the change; existing data keeps its old layout until it is rewritten.
cr0x@dst:~$ zfs set recordsize=16K poolB/restore/db
cr0x@dst:~$ zfs set compression=lz4 poolB/restore/db
Mistake 6: Accidentally replicating share properties into the wrong network zone
Symptoms: Data appears on the network unexpectedly; compliance team calls; firewall logs light up.
Fix: Ensure sharenfs/sharesmb are off on receive targets (or inherited off from a parent dataset).
cr0x@dst:~$ zfs set sharenfs=off poolB/restore
cr0x@dst:~$ zfs set sharesmb=off poolB/restore
cr0x@dst:~$ zfs inherit -r sharenfs poolB/restore/app
cr0x@dst:~$ zfs inherit -r sharesmb poolB/restore/app
Checklists / step-by-step plan
Plan A: Safe restore into a new environment (recommended default)
- Create a landing parent dataset with safe inherited properties (no sharing, no auto-mount surprises).
- Receive with -u to prevent auto-mount.
- Override mountpoints to a restore prefix.
- Handle encryption keys explicitly (prompt or file-based policy).
- Verify snapshots and property sources before exposing data to services.
cr0x@dst:~$ zfs create -o canmount=off -o mountpoint=/srv/restore poolB/restore
cr0x@dst:~$ zfs set sharenfs=off poolB/restore
cr0x@dst:~$ zfs set sharesmb=off poolB/restore
cr0x@src:~$ zfs send -R poolA/app@weekly | ssh dst 'zfs receive -u poolB/restore'
cr0x@dst:~$ zfs set mountpoint=/srv/restore/app poolB/restore/app
cr0x@dst:~$ zfs set canmount=noauto poolB/restore/app
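If the replicated tree is encrypted, handle keys here, before anything mounts (prompt vs file-based keylocation is a policy decision, not a default):
cr0x@dst:~$ zfs load-key -r poolB/restore
cr0x@dst:~$ zfs get -r -o name,value keystatus poolB/restore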
Plan B: DR mirror you may promote (faithful, but controlled)
- Receive unmounted.
- Preserve properties unless you have a policy exception.
- Keep mountpoints “DR-safe” by receiving under a dedicated root and only switching mountpoints at promotion time.
- Test promotion in rehearsals (mount, load keys, start services) with runbooks that assume humans make mistakes under stress; a minimal promotion sketch follows below.
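A minimal promotion sketch for one dataset, assuming the DR copy lives under poolB/dr and production expects /srv/app (adapt names, and add your service start steps):
cr0x@dst:~$ zfs load-key poolB/dr/app
cr0x@dst:~$ zfs set mountpoint=/srv/app poolB/dr/app
cr0x@dst:~$ zfs set canmount=on poolB/dr/app
cr0x@dst:~$ zfs mount poolB/dr/app
cr0x@dst:~$ zfs get -o name,property,value,source quota,refquota,sharenfs,sharesmb poolB/dr/app
The last command is the property landmine check: run it before services start writing, not after.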
Plan C: Long-haul replication over flaky links
- Prefer resumable receive where supported.
- Track resume tokens and avoid killing receives unless you mean it.
- Measure bottlenecks with zpool iostat and system CPU metrics before "tuning."
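The resume pattern end to end, run from the source (a sketch; host names and quoting are examples):
cr0x@src:~$ TOKEN=$(ssh dst zfs get -H -o value receive_resume_token poolB/restore/app)
cr0x@src:~$ test "$TOKEN" != "-" && zfs send -t "$TOKEN" | ssh dst 'zfs receive -s -u poolB/restore/app'
If the token is -, there is nothing to resume and the next scheduled incremental should run normally.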
Three corporate-world mini-stories
Mini-story 1: The incident caused by a wrong assumption (mountpoints are “local only”)
A corporate infra team built a new DR environment. The plan looked solid: replicate the production datasets nightly, keep them offline, and in a disaster just “mount and go.” They tested the replication mechanics—streams flowed, snapshots appeared, and storage graphs looked calm. Everyone went home early, which is always suspicious.
Months later, a routine DR rehearsal turned into a production incident, which is the most efficient kind of rehearsal. A junior engineer ran a receive into a pool that also hosted a staging environment. The dataset arrived with a mountpoint=/var/lib/app carried over from production. On the DR host, that directory existed and had staging data. The receive mounted automatically and hid the staging directory. Services didn't crash immediately; they quietly started reading older replicated files that happened to look valid.
The team’s first instinct was to debug the application. They tailed logs, rolled containers, blamed DNS, and stared at dashboards until someone finally ran mount and saw a ZFS filesystem sitting on top of a path that should never have been touched.
The fix took minutes: unmount, receive with -u, set a safe mountpoint, then mount explicitly. The postmortem took longer, because the wrong assumption (“mountpoint is local config”) had been institutionalized into scripts and tribal knowledge. The real corrective action was cultural: treat properties as part of replicated state, and always land restores into a namespace that cannot collide with live paths.
Mini-story 2: The optimization that backfired (turning knobs mid-flight)
A different org had a replication pipeline that was “too slow.” Overnight incrementals occasionally ran into business hours, which offended a VP who liked clean lines on charts. The storage team decided to speed things up by changing compression settings and recordsize on the target, reasoning that “receive just writes blocks; target properties will make it faster.”
They set aggressive compression and adjusted recordsize for datasets that held VM images. On paper, smaller blocks and stronger compression sounded like less IO and faster transfers. In practice, the receive side became CPU-bound. The target nodes weren’t sized for heavy compression/decompression work, and the replication window got worse. Meanwhile, VM image datasets started fragmenting in ways that made later promotions sluggish and unpredictable.
The best part: because receives were still “successful,” nobody noticed immediately. The pain showed up weeks later during a planned failover test, when boot storms took longer and storage latency spiked. The team had optimized for the replication pipeline, but accidentally degraded the operational workload that actually mattered.
The recovery was boring: revert to lz4, align recordsize with workload reality, and separate the goals. If you want faster replication, address the pipeline (network, SSH cipher choice, send flags, concurrency) and the pool design (vdev layout, sync policy), not by randomly altering dataset semantics midstream.
Mini-story 3: The boring but correct practice that saved the day (receive unmounted, then promote deliberately)
The most resilient ZFS shop I’ve seen wasn’t the one with the fanciest hardware. It was the one with the dullest replication discipline: every receive landed unmounted under a dedicated root dataset with “safe defaults.” No sharing. No automount. Clear naming. And every promotion had a runbook with explicit property checks.
During a real incident—storage controller trouble on the primary—the team decided to promote DR. The pressure was real, but the steps were mechanical: load encryption keys, verify snapshot presence, set mountpoints to the production paths, flip canmount, mount, start services. They did not “just mount everything and see what happens,” which is the storage equivalent of free solo climbing.
They still hit problems, because everyone does. One dataset had a stale quota replicated from a long-ago capacity experiment. But because they always ran a property diff checklist before promotion, they spotted it before the application started writing. They cleared the quota, mounted, and moved on.
The takeaway wasn’t heroics. It was that boring process scales under stress. A DR runbook that assumes property landmines is worth more than a thousand lines of clever replication scripts.
FAQ
1) Why does zfs receive care about properties at all?
Because in ZFS, properties define behavior that other filesystems externalize. Replicating data without behavior is often a broken restore: wrong mountpoints, wrong performance profile, wrong permissions model.
2) Should I always preserve properties when replicating?
No. Preserve properties for a true mirror you might promote. Override/exclude properties for landing zones (dev/analytics/forensics) where the source’s mount layout, shares, and quotas don’t apply.
3) What’s the safest default receive flag in an unfamiliar environment?
-u (don’t mount). It prevents the most common class of self-inflicted outages: mountpoint collisions and surprise service interactions.
4) When should I use zfs receive -F?
When you intend to overwrite the target’s divergent state and you accept losing any changes made on the target since the common snapshot. It’s appropriate for DR mirrors where the target is not a writable primary.
5) Why did my incremental receive fail even though the dataset exists?
Incrementals are snapshot-graph dependent. The target must have the exact base snapshot (or snapshot chain) referenced by the stream. If the target has diverged or the base snapshot was pruned, receive fails to prevent corruption-by-assumption.
6) How do encryption keys affect replication?
Encrypted datasets can replicate as encrypted (raw) or in a mode that involves plaintext on the sender. On the receiver, the dataset can exist but remain unmountable until keys are loaded. Always validate keystatus and decide how keylocation should behave in DR.
7) Why is my receive slow?
Most slow receives are bottlenecked by one of: disk write throughput/latency, CPU (checksum/compress/decrypt), network/SSH overhead, or synchronous write settings. Measure with zpool iostat, iostat, and CPU metrics before changing dataset properties.
8) Can I replicate from one OS to another safely?
Usually, yes, but watch ACL and xattr-related properties, feature flags, and default behaviors. Treat cross-platform replication as a compatibility project: validate permissions semantics with real application tests, not just ls -l.
9) Is it okay to change mountpoints after receive?
Yes, and it’s often the right move. Receive into a safe namespace, then set mountpoints as part of promotion. This keeps replication mechanically consistent while preventing collisions.
10) How do I avoid replicating “dangerous” properties like shares?
Set safe inherited defaults on the parent dataset on the target (shares off), receive unmounted, and then explicitly configure any sharing you actually want. If your ZFS supports excluding properties on receive, use it—but still keep the parent-safety pattern as a backstop.
Conclusion
zfs receive isn’t a passive endpoint. It’s where your replicated data becomes a real filesystem with real behaviors, and those behaviors are governed by properties—some obvious, some subtle, and some that only show up when you’re restoring under pressure.
If you take one operational rule from this: receive unmounted into a safe namespace, then apply deliberate property policy before you mount or share anything. That single habit prevents the classic mountpoint collision outage, makes encryption restores predictable, and turns replication from a hopeful ritual into an engineering system you can trust.