You’ve got data on a ZFS box that’s newer than the destination. Or the source is old and creaky, and the new target is strict about features.
Either way, your first “zfs send | zfs receive” looks confident right up until it fails at 2 a.m. with an error message that reads like a shrug.
This is the operational reality: ZFS replication is powerful, but it’s not magic. Send streams carry assumptions—about dataset features, pool
feature flags, encryption modes, and even whether your destination understands the dialect you’re speaking. This guide is how you stop guessing.
A mental model that actually predicts failures
ZFS replication is two separate contracts layered together:
- The pool contract: can the destination pool even represent the on-disk structures required by the incoming dataset state? This is where pool feature flags and ZFS “versions” matter.
- The stream contract: is the send stream format understandable by the destination’s ZFS implementation, and does it contain record types it can apply? This is where “newer send stream features” and flags like -L, -c, -w, -e, or --raw can trip you.
Those contracts are related but not identical. You can have a destination pool that supports all needed feature flags, yet still fail because
you used a stream flavor the receiver doesn’t know how to parse. Or the inverse: the receiver understands the stream, but the destination pool
can’t activate a required feature safely.
Stop thinking in “ZFS versions”
People ask: “Can ZFS 2.1 send to ZFS 0.8?” That question is understandable and often useless. In OpenZFS land, what matters is:
- Pool feature flags (zpool get all, zpool status, zpool get feature@*).
- Dataset features and properties (encryption, large blocks, embedded data, redaction, etc.).
- Stream features negotiated by the receiver (what zfs receive can accept).
- What you asked zfs send to do (raw vs decrypted, compressed vs not, replication recursion, properties).
Operationally, compatibility is a three-way intersection: source pool capabilities, destination pool capabilities, and the chosen send stream format.
If any one is “too new” for the others, replication fails. Or worse: it succeeds but forces decisions you didn’t realize you made (like losing encryption
or inheriting mountpoints you didn’t want in production).
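A practical way to make that intersection visible is to diff the feature lists before you plan anything. A minimal sketch, assuming the pool and host names used in the tasks below (tank as the source pool, backup as the destination pool, reachable as backup01) and scratch files under /tmp:

# Dump feature flags from both pools into sorted lists, then compare.
zpool get -H -o property,value all tank | awk '$1 ~ /^feature@/' | sort > /tmp/src-features.txt
ssh backup01 "zpool get -H -o property,value all backup" | awk '$1 ~ /^feature@/' | sort > /tmp/dst-features.txt

# Anything that differs deserves a look; anything "active" on the source that the
# destination cannot support is a blocker, not a footnote.
diff --side-by-side --suppress-common-lines /tmp/src-features.txt /tmp/dst-features.txt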
One paraphrased idea from Jeff Bezos (reliability-adjacent, and painfully true for ops): “Be stubborn on vision, flexible on details.” For ZFS replication,
your vision is data integrity. Your “details” are stream flags. Be flexible there.
Interesting facts and history that matter in production
- ZFS originally used “pool versions” (a single integer). Modern OpenZFS moved to feature flags so pools can evolve without a monolithic version bump.
- Feature flags are per-pool, not per-host. Upgrading the OS doesn’t upgrade the pool until you explicitly enable features (often via zpool upgrade).
- Once a pool feature is active, you usually can’t go back. “Downgrade” is typically “restore from replication into an older pool,” not an in-place reversal.
- Send streams can include more than blocks: properties, snapshots, clones, and sometimes assumptions about mountpoints and holds.
- Encrypted datasets changed replication semantics. Raw sends preserve encryption and keys stay on the source; non-raw sends can silently deliver plaintext.
- Bookmarks exist to make incremental replication robust without keeping old snapshots forever, but you need receiver support and disciplined naming.
- Compressed send is not “compression on the wire” only. Depending on flags and versions, it can carry already-compressed blocks; great for bandwidth, confusing for compatibility.
- Large block support isn’t just a property. It interacts with pool feature flags and receiver behavior; mismatches can produce receive errors that look like corruption.
- OpenZFS is cross-platform, but not perfectly uniform. FreeBSD, illumos derivatives, and Linux generally align, but edge features arrive at different times.
Joke #1: ZFS compatibility discussions are like family group chats—everyone swears they’re speaking the same language, and nobody is.
Compatibility rules: what must match vs what can differ
Rule 1: the destination must support all on-disk features required by the incoming dataset state
If the send stream represents a dataset state that requires a feature the destination pool cannot enable, the receive will fail. This isn’t negotiable.
It doesn’t matter how polite your send flags are.
Key nuance: a pool can have features “enabled” but not “active.” An “enabled” feature is available for use; “active” means it’s already used in the pool’s metadata.
Replication can trigger activation when it materializes structures that require it.
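To see which features are already in use on the source (the ones an older destination must actually support), filter on the value. A minimal check, assuming the source pool is named tank as in the tasks below:

# "active" features are already used on disk; these are the hard requirements for a receiver.
# "enabled" features are merely available and don't constrain the destination yet.
zpool get -H -o property,value all tank | awk '$1 ~ /^feature@/ && $2 == "active"'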
Rule 2: the receiver must understand the stream format you generate
Some send stream enhancements are additive, but an older receiver can still choke on record types it doesn’t understand. If your destination is older, assume
you need to keep the stream conservative unless you’ve verified receiver support.
Rule 3: encryption is a compatibility multiplier
Encrypted datasets are where “it worked in the lab” goes to die in production. The critical decision is whether you are sending:
- Raw encrypted (preserves encryption, requires receiver support for raw receive and encryption features).
- Decrypted (data arrives as plaintext; may be acceptable for migration into a new encryption domain, but it’s a security decision, not a convenience).
If the destination is older and can’t do encrypted receive properly, you either upgrade it, or you accept plaintext replication and re-encrypt at rest on the destination.
Pretending there’s a third option is how you end up with “temporary” unencrypted backups in the wrong place.
Rule 4: incremental streams require shared history
Incremental send relies on a common snapshot (or bookmark) existing on both sides. If the destination lost it, renamed it, or never received it, incremental send fails.
The fix is usually to re-seed with a full send or to use a properly managed bookmark chain.
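Finding the newest common snapshot is mechanical, so script it instead of eyeballing two terminals. A sketch using the guide’s dataset and host names and scratch files under /tmp:

# List snapshot names (without the dataset prefix) on both sides, oldest to newest.
zfs list -H -t snapshot -o name -s creation tank/prod | sed 's|.*@||' > /tmp/src-snaps.txt
ssh backup01 "zfs list -H -t snapshot -o name -s creation backup/prod" | sed 's|.*@||' > /tmp/dst-snaps.txt

# The last name present in both lists is the newest shared base for -i.
# Same name does not guarantee same snapshot; if in doubt, compare
# 'zfs get -H -o value guid' for that snapshot on both sides.
grep -Fx -f /tmp/dst-snaps.txt /tmp/src-snaps.txt | tail -1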
Rule 5: recursion and properties can be dangerous
zfs send -R is convenient. It also happily replicates properties you might not want on the destination: mountpoints, sharenfs/smb,
quotas and reservations, and oddball legacy settings. Treat -R as a production change, not a copy operation.
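Before reaching for -R, it helps to see exactly which explicitly set properties would ride along. A quick check, using the guide’s example dataset:

# Show only properties set locally or received on this subtree (the ones -R will carry),
# rather than inherited or default values.
zfs get -r -s local,received -o name,property,value all tank/prod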
Practical tasks: commands, outputs, and decisions
These are the checks I actually run before and during migrations. Each includes what the output means and the decision you make from it.
Run them on both ends. Compare. Don’t assume.
Task 1: Identify ZFS implementation and version (Linux)
cr0x@server:~$ modinfo zfs | egrep 'version:|srcversion:'
version: 2.2.2-1
srcversion: 1A2B3C4D5E6F7G8H9I0J
Meaning: This tells you the OpenZFS module version on Linux. It’s not the whole story, but it anchors expectations.
Decision: If destination is significantly older, plan for conservative streams and expect feature mismatches.
Task 2: Identify ZFS userland version
cr0x@server:~$ zfs version
zfs-2.2.2-1
zfs-kmod-2.2.2-1
Meaning: Kernel module and userland should be reasonably aligned. Big mismatches can cause weird behavior.
Decision: If they’re out of sync, fix that first. Debugging replication while your stack is half-upgraded is a hobby, not a job.
Task 3: Check pool feature flags on the source
cr0x@server:~$ zpool get -H -o name,property,value all tank | head
tank size 23.8T
tank capacity 61%
tank health ONLINE
tank ashift 12
tank autotrim off
Meaning: Basic sanity, and it proves the pool is readable. Not enough for compatibility yet.
Decision: If pool health isn’t ONLINE, stop and fix that before migrating. Replication is not a repair strategy.
Task 4: Enumerate feature flags explicitly
cr0x@server:~$ zpool get -H -o property,value feature@* tank | egrep 'active|enabled' | head
feature@async_destroy active
feature@bookmarks active
feature@embedded_data active
feature@extensible_dataset active
feature@encryption active
feature@device_removal enabled
Meaning: “active” means used; “enabled” means available. Anything active on the source is a red flag for older destinations.
Decision: Compare this to the destination pool’s supported features. If destination doesn’t support an active feature, you must upgrade destination or migrate via a different path.
Task 5: Check which features the destination pool supports
cr0x@server:~$ zpool get -H -o property,value feature@* backup | head
feature@async_destroy enabled
feature@bookmarks enabled
feature@embedded_data enabled
feature@encryption disabled
feature@extensible_dataset enabled
feature@filesystem_limits enabled
Meaning: If a required feature doesn’t appear in this list at all, the destination’s OpenZFS doesn’t know it exists and you’re stuck until you upgrade the software.
If it shows as “disabled,” the implementation knows it and you can usually enable it on the pool as a change-managed step.
Decision: If the source requires feature@encryption active and destination shows feature@encryption disabled, either upgrade destination OpenZFS or plan a decrypted migration (with eyes open).
Task 6: Confirm dataset encryption state and key handling
cr0x@server:~$ zfs get -o name,property,value -H encryption,keylocation,keystatus tank/prod
tank/prod encryption aes-256-gcm
tank/prod keylocation file:///root/keys/prod.key
tank/prod keystatus available
Meaning: You’re dealing with native ZFS encryption. Replication choices matter.
Decision: If you need encryption preserved, plan for raw send and ensure destination supports it. If you accept re-encrypting, plan plaintext send and destination-side encryption.
Task 7: List snapshots and verify a common base exists
cr0x@server:~$ zfs list -t snapshot -o name,creation -s creation tank/prod | tail -3
tank/prod@replica_2025-12-20 Sat Dec 20 02:00 2025
tank/prod@replica_2025-12-21 Sun Dec 21 02:00 2025
tank/prod@replica_2025-12-22 Mon Dec 22 02:00 2025
Meaning: This shows the snapshot chain available for incremental sends.
Decision: Confirm the destination also has the base snapshot you’ll use. If not, you can’t do an incremental without reseeding.
Task 8: Verify destination has the base snapshot
cr0x@server:~$ zfs list -t snapshot -o name backup/prod | head -3
NAME
backup/prod@replica_2025-12-20
backup/prod@replica_2025-12-21
Meaning: Good: shared history exists.
Decision: Proceed with incremental from the newest common snapshot. If you’re missing it, stop and reseed or use a bookmark strategy.
Task 9: Estimate send size before you blast the WAN
cr0x@server:~$ zfs send -nP -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22
size 14839210496
incremental tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22
Meaning: About 13.8 GiB of logical stream data (not necessarily wire bytes).
Decision: If that’s larger than expected, investigate churn (VM images, databases, recordsize mismatch) before scheduling replication during peak.
Task 10: Dry-run receive to validate permissions and target dataset
cr0x@server:~$ zfs send -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22 | ssh backup01 zfs receive -nvu backup/prod
would receive incremental stream of tank/prod@replica_2025-12-22 into backup/prod
would destroy snapshots: none
would overwrite: none
Meaning: -n is no-op, -v verbose, -u don’t mount. This confirms the receiver understands the stream and what it would do.
Decision: If this fails, do not “just run it anyway.” Fix naming, permissions, and feature mismatches first.
Task 11: Perform an incremental send with resilience
cr0x@server:~$ zfs send -v -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22 | ssh backup01 zfs receive -uF backup/prod
send from tank/prod@replica_2025-12-21 to tank/prod@replica_2025-12-22 estimated size is 13.8G
total estimated size is 13.8G
TIME SENT SNAPSHOT
02:00:12 13.8G tank/prod@replica_2025-12-22
Meaning: -F can roll back the destination to match; it’s useful but dangerous.
Decision: Use -F only for dedicated replication targets where rollback won’t destroy local changes. If humans write there, remove -F and fix divergence properly.
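For genuinely flaky links, resumable receive adds real resilience: an interrupted stream leaves a resume token instead of nothing. A sketch, assuming both ends run an OpenZFS recent enough to support zfs receive -s and resume tokens:

# Receive with -s so an interrupted stream can be resumed instead of restarted from zero.
zfs send -v -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22 | ssh backup01 zfs receive -s -uF backup/prod

# After an interruption, the destination exposes a token ("-" means nothing to resume)...
token=$(ssh backup01 zfs get -H -o value receive_resume_token backup/prod)

# ...and zfs send -t picks up where the stream stopped.
zfs send -t "$token" | ssh backup01 zfs receive -s -u backup/prod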
Task 12: Inspect a receive failure and map it to compatibility
cr0x@server:~$ zfs send -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22 | ssh backup01 zfs receive -u backup/prod
cannot receive incremental stream: most recent snapshot of backup/prod does not match incremental source
Meaning: The base snapshot on destination is different than the one you’re sending from (or was rolled back/renamed).
Decision: Find the newest common snapshot, or reseed with a full send. Don’t force it with rollback unless you’re sure the destination is disposable.
Task 13: Verify holds to prevent snapshot deletion mid-stream
cr0x@server:~$ zfs holds tank/prod@replica_2025-12-22
NAME TAG TIMESTAMP
tank/prod@replica_2025-12-22 repl Mon Dec 22 02:00 2025
Meaning: A hold prevents automated cleanup from deleting the snapshot you still need.
Decision: If you run snapshot pruning, use holds (or bookmarks) so your incremental chain doesn’t get “cleaned up” into failure.
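Placing and releasing a hold is one command each. This sketch reuses the repl tag shown in the output above:

# Place a hold so pruning jobs can't destroy the replication anchor.
zfs hold repl tank/prod@replica_2025-12-22

# Once the next incremental has landed and been verified, release the old anchor.
zfs release repl tank/prod@replica_2025-12-21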
Task 14: Use bookmarks for long-lived incremental anchors
cr0x@server:~$ zfs bookmark tank/prod@replica_2025-12-22 tank/prod#bkm_2025-12-22
cr0x@server:~$ zfs list -t bookmark -o name,creation tank/prod | tail -1
tank/prod#bkm_2025-12-22 Mon Dec 22 02:00 2025
Meaning: Bookmarks are lightweight references that allow incremental sends without keeping the full snapshot forever.
Decision: If retention pressure is high, move to bookmarks. But test receiver support; older systems may not handle bookmark-based incrementals well.
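When the receiver does support it, a bookmark-based incremental looks like this. A sketch, where replica_2025-12-23 is a hypothetical next snapshot:

# Incremental from a bookmark: note the # instead of @ on the base.
# The original snapshot behind the bookmark can already be destroyed.
zfs send -v -i tank/prod#bkm_2025-12-22 tank/prod@replica_2025-12-23 | ssh backup01 zfs receive -u backup/prod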
Task 15: Confirm mountpoint and canmount behavior before using -R
cr0x@server:~$ zfs get -o name,property,value -H mountpoint,canmount tank/prod
tank/prod mountpoint /prod
tank/prod canmount on
Meaning: If you replicate properties, you might mount production paths on a backup server. That’s not a “whoops”; that’s an outage.
Decision: On the destination, use zfs receive -u and set safe mountpoints. Consider stripping or overriding properties.
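On receivers new enough to support it, you can do the overriding at receive time instead of fixing properties afterwards. A sketch, assuming your destination’s zfs receive supports -o and -x (check its man page first):

# Keep the replica inert regardless of what the stream carries:
# no automount, a safe mountpoint, and no NFS/SMB shares.
zfs send -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22 \
  | ssh backup01 zfs receive -u -o canmount=noauto -o mountpoint=/backup/prod -x sharenfs -x sharesmb backup/prod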
Task 16: Confirm destination dataset is not mounted (or mount safely)
cr0x@server:~$ zfs get -o name,property,value -H mounted,mountpoint backup/prod
backup/prod mounted no
backup/prod mountpoint /backup/prod
Meaning: Good: it won’t surprise-mount over something.
Decision: Keep replication targets unmounted by default; mount only when you need to read or restore.
Choosing the right send flags (and when not to)
Start conservative, then add power
When sending to an older receiver, your goal is boring compatibility, not cleverness. The most compatible approach is typically:
a full send for seeding, then incremental sends, with minimal fancy flags.
Raw vs non-raw with encryption
If the dataset is encrypted and you want to preserve it as-is (same ciphertext, same keys not exposed), you want raw send. Depending on your platform,
that’s usually zfs send -w (raw) and receiving on a destination that supports encrypted receive.
cr0x@server:~$ zfs send -w tank/prod@replica_2025-12-22 | ssh backup01 zfs receive -u backup/prod
cannot receive: stream is encrypted but encryption feature is disabled
Meaning: Destination can’t accept encrypted/raw stream because its pool or implementation lacks encryption support.
Decision: Upgrade destination OpenZFS and pool features, or switch to non-raw send (accepting plaintext transfer) and re-encrypt on destination.
Compressed send: great until it isn’t
Compressed send can reduce bandwidth and CPU in some cases, but it depends on receiver support. On mixed versions, treat it as an optimization you earn,
not a default you assume.
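Once a receiver tier is verified, the flags in question are -c (send blocks as they are already compressed on disk) and -L (allow records larger than 128K). A sketch of the verified form, using the guide’s example snapshots:

# Both flags require receiver support; test per compatibility tier before making this the default.
zfs send -v -c -L -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22 | ssh backup01 zfs receive -u backup/prod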
Replication streams (-R) and property landmines
-R is for replicating a subtree with snapshots, properties, and descendant datasets. It’s perfect for DR targets.
It’s also a nice way to replicate a bad mountpoint into the wrong place.
cr0x@server:~$ zfs send -R tank/prod@replica_2025-12-22 | ssh backup01 zfs receive -ud backup/tank
receiving full stream of tank/prod@replica_2025-12-22 into backup/tank/prod
Meaning: You’re recreating the structure on destination. Properties may come along for the ride.
Decision: If destination is not a mirror environment, avoid -R. Or use it, but immediately override dangerous properties on the destination.
Force receive (-F): sometimes required, often abused
zfs receive -F rolls back the destination to match the stream. It fixes divergence. It also deletes snapshots and local changes that conflict.
In corporate terms, -F is a layoff: effective, quick, and you’d better have approvals.
Dataset compatibility: recordsize, volblocksize, and large blocks
The dataset properties themselves can cause performance surprises even if the stream receives fine. A newer source with large recordsize and tuned settings
can replicate to an older destination that technically accepts it, but then your restore performance is weird and your assumptions break.
Joke #2: If you want excitement, do database replication over a flaky VPN; if you want sleep, do a full send over a weekend and bring snacks.
Three corporate mini-stories from the replication trenches
Incident: a wrong assumption about “version compatibility”
A mid-sized company was migrating from older FreeBSD-based storage to a Linux appliance running a newer OpenZFS. The plan looked simple:
seed the new box, cut over NFS exports, retire old hardware. Someone asked, “Are ZFS versions compatible?” and got a confident “Yes, ZFS is ZFS.”
That sentence cost them a weekend.
The source pool had several active feature flags (bookmarks, embedded_data, large_blocks). The destination system’s OpenZFS did support some, but the pool
on the appliance had never been upgraded and was running with a conservative feature set because it shipped that way. During initial tests, they only replicated
a small dataset without the newer features being activated. In production, the main dataset state required them.
The receive failed halfway through the seed stream. The error message wasn’t helpful; it looked like a generic receive failure. They retried, got different errors,
and then fell into the classic trap: “Maybe it’s the network.” It wasn’t. It was the pool contract.
The fix was boring: explicitly upgrade the destination pool to support the required feature flags, then re-seed. But the re-seed meant another long transfer,
and the cutover window was already booked with other teams. They ended up doing a partial migration and dragging old hardware around for weeks.
The lesson wasn’t “upgrade everything.” The lesson was: inventory features before moving bytes, and treat pool upgrades as change-managed operations
with rollback plans (which usually means “replicate elsewhere,” not “undo”).
Optimization that backfired: aggressive flags on a mixed fleet
Another shop had a fleet of remote offices, each with a small ZFS box replicating back to a central DR cluster. Bandwidth was limited and expensive. An engineer
decided to “make the stream smaller” by enabling compressed send and recursion everywhere, and by using force receive to keep things aligned.
It worked at first, which is the most dangerous state for a change. Over time, a few offices upgraded ZFS as part of OS refresh cycles, while others stayed behind
due to application constraints. The replication script didn’t care; it used the same flags for all.
A subset of receivers started failing intermittently on one dataset class—VM zvols. The failures coincided with a change in how those systems were creating snapshots
and in what the streams contained. The script, running with zfs receive -F, began rolling back and reapplying state repeatedly. The data wasn’t corrupted,
but recovery point objectives quietly got worse because the pipeline was spending time fighting itself.
They eventually discovered two things: first, compressed send wasn’t uniformly supported across their receiver versions for the specific stream forms being generated;
second, -F was masking the underlying “missing common snapshot” problems by constantly rewriting destination history, which made incremental chains fragile.
The fix was to split the fleet into compatibility tiers. Conservative streams for older receivers, modern streams for newer ones, and -F removed except for
dedicated targets that were explicitly treated as disposable mirrors. Bandwidth went up a bit. Sleep improved a lot.
Boring but correct practice that saved the day: feature inventory + staged seeding
A financial services team planned a datacenter move. They had ZFS on both ends, but the destination was managed by a different group with different upgrade cadence.
Instead of rushing into stream flags, they did a simple discipline: inventory pool features and dataset properties, then agree on a compatibility baseline.
They created a staging pool on the destination sized to accept full seeds, and they ran dry-run receives during business hours to verify that streams would parse and that
no destructive actions were implied. They used zfs send -nP estimates to schedule large transfers, and they used snapshot holds so automated snapshot cleanup
couldn’t break incrementals during the cutover window.
During the actual move, the network had a bad day. Transfers slowed and stalled. But because they had already seeded most datasets and verified incremental compatibility,
they only needed small delta streams to cut over. The slow network became an annoyance instead of a crisis.
Nothing heroic happened. That’s the point. The correct practice looked dull in status meetings, but it reduced the number of unknowns to nearly zero.
Fast diagnosis playbook
When replication breaks or is slow, you want a tight loop: identify whether it’s a compatibility failure, a history mismatch, or a throughput bottleneck.
Check these in order. Don’t skip ahead to tuning flags.
First: is it a compatibility failure?
- On destination, try a dry-run receive: zfs receive -nvu target/ds. If parsing fails, it’s stream compatibility or permissions.
- Compare feature flags: zpool get feature@* srcpool vs zpool get feature@* dstpool.
- Check encryption status: zfs get encryption,keystatus. If you’re using raw send, destination must support encryption features.
Second: is it an incremental history mismatch?
- Confirm both sides have the base snapshot: zfs list -t snapshot | grep <base-snapshot-name>.
- Check for destination divergence: did someone take local snapshots, roll back, or destroy snapshots?
- If you use bookmarks, verify they exist on both ends (or that your send uses snapshot-to-snapshot, not bookmark-to-snapshot).
Third: is it a throughput bottleneck?
- Estimate stream size: zfs send -nP. If it’s huge, the “bottleneck” might be churn, not the network (see the isolation sketch after this list).
- Check CPU and compression: if you’re compressing on the fly externally, CPU can be your limiter.
- Observe network and SSH: if you’re using SSH, cipher choice and single-threading can cap throughput.
- Check pool performance: destination pool might be fragmented, busy, or have sync settings constraining writes.
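The isolation sketch referenced above: time the send against a local sink to separate source read speed from everything downstream. Snapshot names are the guide’s examples:

# How fast can the source pool alone generate the stream? No network, no receiver involved.
time zfs send -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22 > /dev/null

# If this local rate is far higher than what the real ssh pipeline achieves,
# the bottleneck is the network, the SSH cipher, or the destination pool -- not the source.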
This playbook is intentionally unglamorous. In production, the fastest fix is usually “correct assumption” rather than “clever tuning.”
Common mistakes: symptom → root cause → fix
1) “cannot receive: unsupported feature or stream”
Symptom: Receive fails immediately; error mentions unsupported feature/stream, or just “invalid stream”.
Root cause: Destination receiver doesn’t understand the stream record types, or destination pool can’t support required features.
Fix: Compare zpool get feature@* and adjust. Upgrade destination OpenZFS/pool, or send a more conservative stream (avoid raw/compressed/replication flags until verified).
2) “most recent snapshot does not match incremental source”
Symptom: Incremental send fails; destination has snapshots but not the one you expected.
Root cause: Lost common base snapshot due to pruning, renames, manual destroys, or rolling back destination.
Fix: Find newest common snapshot and send incrementals from that. If none exists, reseed with a full send. Add holds or bookmarks to prevent recurrence.
3) Replication “succeeds” but destination mounts over something critical
Symptom: Suddenly weird filesystem content on destination; services reading wrong data; mountpoints changed.
Root cause: Replicated properties (often via -R) applied mountpoint/canmount/shares.
Fix: Always receive with -u into safe paths; explicitly set mountpoint/canmount after receive. Avoid replicating properties unless destination is a true mirror.
4) Raw encrypted send fails on destination
Symptom: Errors about encryption feature disabled or key problems.
Root cause: Destination OpenZFS/pool does not support encryption feature, or keys aren’t handled correctly for the receive mode.
Fix: Upgrade destination to support encryption and enable required pool features, or do decrypted migration and re-encrypt on destination with new keys.
5) “zfs receive” is slow and pegs CPU
Symptom: Network is idle-ish; CPU is hot; replication crawls.
Root cause: You’re doing heavy compression/encryption in userland (SSH cipher, external compression), or destination pool is bottlenecked.
Fix: Reduce userland overhead (choose sane SSH ciphers, avoid unnecessary compression), check destination pool IOPS, and avoid layering compression on already-compressed data.
6) Incrementals randomly break after “cleanup jobs”
Symptom: Works for days, then fails after snapshot pruning runs.
Root cause: Retention policy deletes the base snapshot needed for incrementals.
Fix: Use snapshot holds for replication anchors or switch to bookmarks; align retention with replication frequency and lag.
7) Destination space usage explodes after migration
Symptom: Same logical data, more consumed space on destination.
Root cause: Different compression settings, recordsize differences, special vdev behavior, or destination not receiving compressed blocks as expected.
Fix: Check zfs get compression,recordsize and pool layout differences. Don’t assume identical physical usage across pools; validate with small samples first.
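A minimal comparison, using the guide’s example dataset names:

# Compare the knobs that most often explain "same data, different physical size".
zfs get -H -o name,property,value compression,recordsize,copies tank/prod
ssh backup01 zfs get -H -o name,property,value compression,recordsize,copies backup/prod

# Logical vs physical usage on the source, for a baseline compress ratio.
zfs list -o name,used,logicalused,compressratio tank/prod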
Checklists / step-by-step plan
Plan A: Safe migration from newer source to older destination
- Inventory features and encryption. Run on source and destination: zpool get feature@*, zfs get encryption,keystatus. Decide: upgrade destination vs accept decrypted migration.
- Create a dedicated target dataset. Keep it unmounted: receive with -u. Decide: do you need a mirror (-R) or a single dataset?
- Dry-run receive. Use zfs receive -nvu to catch parsing/permissions issues without writing.
- Seed with a full send. Avoid fancy flags initially. Validate that the destination snapshot exists after receive.
- Move to incremental sends. Use consistent snapshot naming; protect anchors with holds or bookmarks.
- Cutover. Final incremental, verify application-level integrity, then switch clients.
Plan B: Building a replication standard across mixed ZFS versions
- Define compatibility tiers. Group systems by receiver capability; don’t run a single “universal” flag set.
- Pick a default conservative stream. Only add raw/compressed/replication recursion after validation per tier.
- Standardize snapshot and bookmark naming. Your future self needs deterministic matching, not creativity.
- Automate validation. Periodically confirm common snapshots exist and that dry-run receives succeed (a minimal sketch follows this list).
- Document pool upgrade policies. Pool upgrades are irreversible; treat them as change events with approval.
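The validation sketch referenced in the list: a cron-able wrapper around a dry-run receive. Dataset, host, and snapshot names are this guide’s examples; adjust before use.

#!/bin/bash
# Minimal periodic validation: would the next incremental actually be accepted?
# Nothing is written on the destination because of -n.
set -u

if zfs send -i tank/prod@replica_2025-12-21 tank/prod@replica_2025-12-22 \
     | ssh backup01 "zfs receive -nvu backup/prod"; then
    echo "OK: dry-run receive parsed the stream against backup/prod"
else
    echo "ALERT: dry-run receive failed; check feature flags and common snapshots" >&2
    exit 1
fi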
Plan C: When incrementals are broken and you need service back
- Stop deleting snapshots. Pause retention jobs.
- Find newest common snapshot. If it exists, resume incrementals from there.
- If no common snapshot exists, reseed. Full send of latest snapshot; then reestablish incremental chain.
- After recovery, implement holds/bookmarks. Prevent repeat outages caused by cleanup.
Extra operational guardrails (do these, thank yourself later)
- Replication targets should be treated as cattle: dedicated, non-interactive, and safe to roll back.
- Never replicate mountpoints into a host that also runs applications unless you have explicit property controls.
- Test one dataset end-to-end, including restore, before you schedule “the big migration.”
FAQ
1) Can a newer OpenZFS always send to an older OpenZFS?
No. The receiver must understand the stream format, and the destination pool must support any required features. “ZFS is ZFS” is not a contract.
2) What’s the biggest indicator of incompatibility?
Active pool features on the source that the destination does not support. Check zpool get feature@* and look for “active” on source and “disabled/unsupported” on destination.
3) If the destination pool supports a feature but it’s not enabled, will receive enable it?
It can, depending on implementation and the structures written. In practice, you should explicitly manage pool upgrades/enables so you don’t “accidentally” activate irreversible features during a migration.
4) Is raw send required for encrypted datasets?
Required if you want to preserve encryption as ciphertext end-to-end. If you don’t use raw, you may be sending plaintext (even if the source dataset is encrypted), and that’s a security decision.
5) Why does incremental send complain about snapshots not matching when names look right?
Names aren’t identity. The destination’s snapshot might not be the same snapshot state (it could be from a different history), or the destination was rolled back. Incrementals require a shared ancestry chain.
6) Should I always use zfs send -R for replication?
Only if the destination is intended to be a faithful replica of a subtree, including properties and descendant datasets. Otherwise, -R can replicate “environment assumptions” you didn’t mean to copy.
7) When is zfs receive -F appropriate?
For dedicated replication targets where rollback is acceptable and expected. Not for shared datasets, not for anything humans modify, and not as a band-aid for broken snapshot retention.
8) How do bookmarks help with compatibility and retention?
Bookmarks allow incremental sends without keeping every old snapshot. They reduce retention pressure and prevent cleanup jobs from breaking incrementals—assuming both ends support bookmark-based sends.
9) Why does the estimated send size not match actual bytes transferred?
The estimate is logical stream size. Wire bytes depend on whether blocks are already compressed, whether the stream is compressed, and what SSH/network overhead adds.
10) Can I “downgrade” a pool to match an older system so replication works?
Not realistically in-place. Once features are active, you usually can’t go backwards. The practical downgrade is: create an older-compatible pool elsewhere and replicate into it using compatible streams.
Conclusion: practical next steps
If you take one operational stance from this: compatibility is not a vibe, it’s an inventory. You don’t “try a send” and hope. You check pool features,
dataset encryption state, receiver capability, and snapshot history first—then you move bytes.
Next steps you can do today:
- Run zpool get feature@* on every replication endpoint and keep the outputs somewhere humans can compare.
- Pick a conservative default replication mode for older receivers and enforce it per tier.
- Implement holds or bookmarks so retention can’t break your incremental chain.
- Practice restores, not just sends. Replication that can’t be received and mounted safely is just expensive streaming.