ZFS Feature Flag Portability: Avoiding “Cannot Import” Surprises

The most expensive ZFS error message isn’t a checksum mismatch or a resilver that takes all weekend.
It’s the one you get when the business is waiting and the pool won’t import: “unsupported features”.

If you’ve ever treated a ZFS pool like a portable hard drive—“we’ll just move the JBOD to the new host”—you’ve already
stepped on this rake. ZFS feature flags are great until you forget they exist. Then they’re a lock you installed yourself.

What feature flags actually are (and why they break imports)

ZFS “feature flags” are on-disk format capabilities tracked at the pool level. They replaced the old “pool version”
model where a single monotonically increasing version number tried to represent everything. Feature flags are more granular:
a pool can support some features and not others; different implementations can add new features without coordinating a global
version counter.

Here’s the portability catch: once a feature becomes active on a pool (and some features go active the moment they’re enabled),
importing the pool read-write requires a ZFS implementation that understands that feature; if the feature isn’t read-only compatible,
even a read-only import requires it. Move the disks to a host with older ZFS, or a vendor build missing that feature, and ZFS may refuse
to import. This is not ZFS being petty. It’s preventing silent corruption.

Enabled vs active: the small detail that ruins weekends

ZFS tracks features in states that usually look like: disabled, enabled, active.
The exact presentation varies slightly by platform, but the idea is consistent:

  • Enabled: the pool is allowed to use the feature if needed.
  • Active: the pool has actually used it; on-disk structures exist that require the feature for correct interpretation.

Portability risk comes from active features: an active feature must be supported to import the pool read-write, and if it isn’t
read-only compatible, even a read-only import needs it. A feature that is merely enabled doesn’t block imports by itself, but some
features flip to active the moment they’re enabled, and others activate on the next routine operation. You don’t want to learn which
is which during a migration window.
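
If you want to see the distinction on a live pool, the feature states are exposed as pool properties. A minimal check, assuming an
imported pool named tank (the two feature names are standard OpenZFS ones; exact column spacing varies by release):

cr0x@server:~$ zpool get feature@lz4_compress,feature@encryption tank
NAME  PROPERTY               VALUE    SOURCE
tank  feature@lz4_compress   active   local
tank  feature@encryption     enabled  local

Here lz4_compress is a real on-disk dependency; encryption is merely permitted. The moment someone creates an encrypted dataset,
it flips to active and the import contract changes.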

Why you can’t just “turn it off”

Most feature flags are one-way doors. Once a pool has used a feature, removing it would require rewriting on-disk metadata
and possibly data structures across the pool. ZFS does not generally provide “feature downgrade” tools. The practical downgrade
path is usually: replicate data to a new pool created with older/compatible features.

That’s the first decision point this article will push you toward: portability is something you design and test. It’s not a property
you discover later by staring at zpool import errors.

The dry-funny reality check

Joke #1: A ZFS pool is portable in the same way a grand piano is “movable”. Yes, technically—just don’t act surprised when it needs planning.

One quote worth taping to your monitor

“Hope is not a strategy.” — General Gordon R. Sullivan

Treat feature-flag compatibility exactly like you treat backups: if you haven’t tested it, you don’t have it.

Facts and short history: how we got here

The details matter because people keep repeating old ZFS folk wisdom that was only true for a specific era.
Here are concrete facts and context points that will help you reason about portability today.

  1. Pool “versions” were a single counter. Early ZFS used a pool version number; upgrading bumped it and could block older imports.
    Feature flags were introduced to avoid that all-or-nothing upgrade model.
  2. Feature flags are recorded on disk. This isn’t a runtime toggle; it’s part of what makes a pool self-describing across reboots.
    That’s great for recovery—until you change the description to a dialect your other hosts don’t speak.
  3. OpenZFS unified what used to be split. Solaris/Illumos-derived ZFS and the various downstreams diverged; OpenZFS made feature flag
    coordination more realistic, but not perfect across vendor packaging.
  4. Linux ZFS (now OpenZFS on Linux) popularized mixed fleets. Many orgs now run ZFS on Linux, FreeBSD, and appliances at the same time,
    and portability across them is a daily operational concern, not a curiosity.
  5. “zpool upgrade” changes the contract. It enables every feature the running implementation supports; once workloads start using them,
    older systems can no longer import the pool. People treat it like “apply security updates” and regret it.
  6. Some features are dataset-scoped, some pool-scoped, but import gating is pool-scoped. If the pool needs a feature, the whole pool import
    is affected even if only one dataset used it.
  7. Encryption changed portability discussions. Native ZFS encryption introduced key management and keystore integration differences across OSes,
    plus feature flags required for encrypted datasets.
  8. Vendor appliances sometimes lag. Storage appliances may ship “ZFS-based” systems with conservative feature sets or delayed OpenZFS versions,
    which makes them the least compatible member of your fleet.
  9. Read-only import can still fail. Features marked read-only compatible do allow a read-only import on systems that don’t support them,
    but any active feature that isn’t read-only compatible blocks even zpool import -o readonly=on.

The punchline: feature flags are the right engineering solution, but they require operational discipline. If your process is “upgrade when prompted,”
you’re basically letting your pools write checks your other machines can’t cash.

The failure modes that lead to “cannot import”

1) The pool was upgraded on one host, then moved to an older host

Classic scenario: you have a pool that’s sometimes imported on Host A (new OpenZFS) and sometimes on Host B (older). Someone runs
zpool upgrade on Host A because it looks like a harmless maintenance step. Now Host B can’t import.

2) A feature became active because a dataset property was changed

Certain dataset properties can activate pool features indirectly. You didn’t run an upgrade; you “just set recordsize=1M,” or “enabled encryption,”
or “added a special vdev so special_small_blocks works.” Those activate features such as large_blocks, encryption, and allocation_classes.
Now your DR system can’t import, or your replication target refuses to receive.
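
As a concrete illustration (pool and dataset names are placeholders, and this assumes a release with native encryption and default
feature settings), one dataset-level decision is enough to change the pool-level import contract:

cr0x@server:~$ sudo zfs create -o encryption=on -o keyformat=passphrase tank/secure
cr0x@server:~$ zpool get feature@encryption tank
NAME  PROPERTY            VALUE   SOURCE
tank  feature@encryption  active  local

No zpool upgrade was run; creating one encrypted dataset (after the passphrase prompts) activated the feature, and every future import
of tank now requires a ZFS that understands it.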

3) Heterogeneous OpenZFS versions in a fleet, with inconsistent packaging

Even within “OpenZFS,” distros and vendors can ship different versions, backports, or feature sets. You can have two “2.1.x” systems where one supports
a feature and the other doesn’t due to patch selection. Assuming semantic version parity equals feature parity is a great way to write incident reports.

4) Someone used a convenience snapshot/replication path that dragged features along

Raw send for encrypted datasets, large block support, embedded data—these can tie your replication compatibility to feature support. You can replicate data
and still get stuck if the receiver can’t support the on-disk format implied by the send stream.

5) Pool was created with “compatibility” ignored

Modern OpenZFS has a compatibility property (on some platforms) that can constrain features at pool creation to match a target set.
If you don’t set it and later discover you needed it, you’re in rebuild territory.
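
Where compatibility profiles exist (OpenZFS 2.1 and later), they are just text files listing allowed feature names. The paths below
are the ones shipped by typical Linux packages; confirm them on your own system before writing them into a runbook:

cr0x@server:~$ ls /usr/share/zfs/compatibility.d/
cr0x@server:~$ cat /usr/share/zfs/compatibility.d/openzfs-2.1-linux

The first command shows which profiles you can reference by name (openzfs-2.1-linux, grub2, and so on); the second shows exactly which
features a pool created with -o compatibility=openzfs-2.1-linux is allowed to enable. Local custom profiles can be dropped into
/etc/zfs/compatibility.d/.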

Fast diagnosis playbook

When a pool won’t import and people are watching, you don’t have time to rediscover ZFS. You need a quick triage path that narrows
“unsupported feature flags” vs “device/pathing” vs “damage.” Here’s the order that tends to get you to truth fastest.

First: confirm it’s a feature flag problem, not missing disks

  • Run zpool import and read the full output. If it says “unsupported feature(s),” stop guessing and start collecting.
  • Check that the OS sees all disks (by-id paths). If the pool is incomplete, you can get misleading noise.

Second: identify which features are required and what ZFS you’re running

  • Get the local ZFS implementation and version info (zfs version works on Linux and recent FreeBSD; otherwise check package and kernel module info).
  • Extract the feature list the pool needs from the import output, then compare it to what this host supports (a minimal comparison sketch follows this list).
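
A minimal comparison sketch, run under bash, assuming the import error lists required features in the org:name form shown in Task 1
below (adjust the parsing if your platform words it differently):

# Features the pool says it needs, taken from the import error
zpool import 2>&1 | grep -Eo '[a-z][a-z0-9._-]*:[a-z0-9_]+' | sort -u > /tmp/pool-needs
# Short feature names this host's ZFS supports
zpool upgrade -v | awk '/^[a-z]/{print $1}' | sort -u > /tmp/host-supports
# Anything printed here is needed by the pool and missing on this host
comm -13 /tmp/host-supports <(sed 's/.*://' /tmp/pool-needs | sort -u)

If the last command prints nothing, feature support isn’t your blocker and you should go back to checking devices.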

Third: decide whether you can import anywhere, even temporarily

  • Find a host (or boot media) with a ZFS version that supports the required features.
  • If you can import there, your options open: replicate the data off, or keep the pool on the newer host. Don’t count on rollbacks to
    deactivate features; only a few features ever return to the enabled state, and only when nothing on the pool uses them anymore.

Fourth: choose a safe recovery path

  • If the pool must be portable to older systems, plan a migration: create a new pool with constrained features and zfs send | zfs receive.
  • If the old system must remain, upgrade its ZFS (if supported and safe) rather than forcing a downgrade of the pool (usually not possible).

Joke #2: The phrase “we’ll just import it on the other box” has ended more change windows than caffeine ever did.

Practical tasks: commands, outputs, decisions

These tasks are written like you’re on-call: command, realistic output, and what decision you make from it.
Examples assume Linux with OpenZFS utilities installed. The same concepts apply on FreeBSD/Illumos, but some commands differ.

Task 1: List importable pools and spot “unsupported features” immediately

cr0x@server:~$ zpool import
   pool: data
     id: 13462978123456789012
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
config:

        data        UNAVAIL  insufficient replicas
          raidz1-0  UNAVAIL  insufficient replicas
            sda     UNAVAIL  cannot open
            sdb     UNAVAIL  cannot open
            sdc     UNAVAIL  cannot open

   pool: backup
     id: 998877665544332211
  state: UNAVAIL
status: The pool uses the following feature(s) not supported on this system:
        com.delphix:spacemap_histogram
        com.delphix:extensible_dataset
action: The pool cannot be imported. Access the pool on a system that supports
        the required feature(s), then migrate the data.
config:

        backup      UNAVAIL  unsupported feature(s)

Decision: “data” looks like device visibility/replica trouble. “backup” is a feature-flag compatibility issue.
Don’t waste time swapping cables for “backup”; go find a newer ZFS host.

Task 2: Attempt a read-only import (to confirm it still fails on features)

cr0x@server:~$ sudo zpool import -o readonly=on backup
cannot import 'backup': unsupported version or feature
unsupported feature(s): com.delphix:spacemap_histogram com.delphix:extensible_dataset

Decision: Read-only doesn’t help here; at least one of the listed features isn’t read-only compatible. You need a system that supports those features, or you need to upgrade this system’s ZFS.

Task 3: Check what features your current ZFS supports

cr0x@server:~$ zpool upgrade -v | grep -Ew 'spacemap_histogram|extensible_dataset'
cr0x@server:~$

Output meaning: zpool upgrade -v lists every feature this host’s ZFS implementation knows how to use; grep filters for the features
the pool needs. Empty output means this host doesn’t support them.

Decision: If the pool requires spacemap_histogram and this host doesn’t support it, import will never work here.
Stop and choose: upgrade ZFS here, or move disks to a host that already supports it.

Task 4: Identify local OpenZFS version (Linux)

cr0x@server:~$ zfs version
zfs-2.1.5-1
zfs-kmod-2.1.5-1

Decision: Now you can compare this version to the version that last imported the pool. If the pool was last imported on 2.2+,
you should assume feature mismatch until proven otherwise.
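
If more than a couple of hosts are in play, capture the versions fleet-wide in one pass. A small loop like this works, assuming SSH
access and that zfs version exists on each host (it does on OpenZFS 0.8+ and recent FreeBSD; the hostnames are placeholders):

cr0x@server:~$ for h in stor-ops-07 dr-old lab-01; do echo "== $h"; ssh "$h" zfs version; done

Paste the result into the change record; it’s the comparison you’ll wish you had at 3 a.m.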

Task 5: See which host last imported the pool (helps find the “newer” box)

cr0x@server:~$ sudo zdb -e -C backup | head -n 25
MOS Configuration:
        version: 5000
        name: 'backup'
        state: 0
        txg: 925331
        pool_guid: 998877665544332211
        hostid: 0x3f2a9c10
        hostname: 'stor-ops-07'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1234567890123456789

Output meaning: The pool records the hostname/hostid that last wrote to it.

Decision: Go find stor-ops-07 (or its replacement). That’s your best bet for a compatible import environment.

Task 6: On the “newer” host, list feature states for the pool

cr0x@server:~$ sudo zpool get -H -o property,value all backup | grep feature@
feature@async_destroy                         enabled
feature@empty_bpobj                           active
feature@lz4_compress                          active
feature@spacemap_histogram                    active
feature@extensible_dataset                    active
feature@embedded_data                         active
feature@bookmarks                             enabled

Output meaning: Those Delphix-origin features are now active, so the pool truly depends on them.

Decision: Downgrading isn’t realistic. Your path is upgrade the old system or migrate data to a new compatible pool.

Task 7: Confirm whether a pool has a compatibility profile (if supported)

cr0x@server:~$ sudo zpool get compatibility backup
NAME    PROPERTY       VALUE  SOURCE
backup  compatibility  off    default

Decision: If you have mixed OS targets, “off” is a smell. For new pools you should set a compatibility profile aligned with
the oldest system you must import on.
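
On releases that support the property, you can also apply a profile to an existing pool. It constrains what can be enabled from now on;
it does not deactivate anything already active. The profile name here is an example; pick one that matches the oldest system in your fleet:

cr0x@server:~$ sudo zpool set compatibility=openzfs-2.1-linux backup
cr0x@server:~$ zpool get compatibility backup
NAME    PROPERTY       VALUE              SOURCE
backup  compatibility  openzfs-2.1-linux  local

Treat it as a guardrail, not a time machine.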

Task 8: Create a new “portable” pool with constrained features (example pattern)

cr0x@server:~$ sudo zpool create -o ashift=12 -o compatibility=openzfs-2.1 portable mirror /dev/disk/by-id/wwn-0x5000c500a1b2c3d4 /dev/disk/by-id/wwn-0x5000c500a1b2c3d5
cannot set property for 'portable': invalid property 'compatibility'

Output meaning: Your current tooling doesn’t support the property (older OpenZFS, or vendor build).

Decision: Don’t improvise. If you can’t use a compatibility profile, you must enforce portability by policy:
avoid zpool upgrade, avoid activating risky features, and validate feature lists before migration.

Task 9: Show pool status and confirm you’re not also dealing with missing vdevs

cr0x@server:~$ sudo zpool status -v backup
  pool: backup
 state: ONLINE
status: Some supported features are not enabled on the pool.
action: The pool can be upgraded using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME                                            STATE     READ WRITE CKSUM
        backup                                           ONLINE       0     0     0
          raidz2-0                                       ONLINE       0     0     0
            /dev/disk/by-id/wwn-0x5000c50011111111      ONLINE       0     0     0
            /dev/disk/by-id/wwn-0x5000c50022222222      ONLINE       0     0     0
            /dev/disk/by-id/wwn-0x5000c50033333333      ONLINE       0     0     0
            /dev/disk/by-id/wwn-0x5000c50044444444      ONLINE       0     0     0

Decision: Resist the temptation to run zpool upgrade because it’s suggested.
That message is ZFS politely offering you a portability foot-gun.

Task 10: Audit dataset properties that tend to activate non-portable features

cr0x@server:~$ sudo zfs get -r -o name,property,value encryption,keystatus,special_small_blocks,xattr,recordsize backup
NAME                     PROPERTY              VALUE
backup                    encryption            off
backup                    keystatus             -
backup                    special_small_blocks  0
backup                    xattr                 sa
backup                    recordsize            128K
backup/vm                 encryption            aes-256-gcm
backup/vm                 keystatus             available
backup/vm                 special_small_blocks  16K
backup/vm                 xattr                 sa
backup/vm                 recordsize            16K

Output meaning: You have encrypted datasets and special allocation behavior. These are common triggers for feature activation.

Decision: If the receiver/DR site can’t support encryption or special vdev features, you need a migration plan that avoids them
(new dataset layout, or upgrade the receiver).

Task 11: Determine whether your replication stream will require newer features (dry run)

cr0x@server:~$ sudo zfs send -nv -i backup/vm@daily-2025-12-24 backup/vm@daily-2025-12-25
send from @daily-2025-12-24 to backup/vm@daily-2025-12-25 estimated size is 26.4G
total estimated size is 26.4G

Decision: A dry run confirms the stream exists and gives size. Next, decide whether you must use raw send for encryption
(which implies feature support on the receiver) or whether you can replicate plaintext (which changes your threat model).
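
If the answer is “keep it encrypted end to end,” a raw send is the standard tool; it ships the blocks exactly as stored, so the receiver
never needs the key but must support the encryption-related feature flags. A sketch, with drpool/vm-raw as a hypothetical target on the
dr-old host from the next task:

cr0x@server:~$ sudo zfs send -w backup/vm@daily-2025-12-25 | ssh dr-old 'sudo zfs receive -u drpool/vm-raw'

A non-raw send of the same dataset decrypts the data for the stream, which removes the feature requirement on the receiver and, as noted
above, changes your threat model.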

Task 12: Use a controlled receive target and test import on the “older” system

cr0x@server:~$ sudo zfs send -R backup/vm@daily-2025-12-25 | ssh dr-old 'sudo zfs receive -uvF drpool/test-vm'
receiving full stream of backup/vm@daily-2025-12-25 into drpool/test-vm@daily-2025-12-25
received 26.4G stream in 00:12:41 (35.4M/sec)

Output meaning: The receive succeeded, and the dataset is not mounted (-u), which is good for controlled validation.

Decision: Now test whether drpool can be exported/imported on that older system without feature errors.
If it can, you have a portability-compatible path: migrate by replication, not by moving disks.

Task 13: Check feature activation drift over time (catch “someone enabled it”)

cr0x@server:~$ sudo zpool get -H -o property,value all backup | awk -F'\t' '/feature@/ && $2=="active"{print $1}' | sort | head
feature@allocation_classes
feature@embedded_data
feature@extensible_dataset
feature@spacemap_histogram

Decision: Put this into monitoring or a daily report. If the list changes, treat it like schema drift in a database:
investigate who/what changed behavior, and whether portability assumptions still hold.
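
A minimal drift check you could run from cron, assuming a pool named backup; the baseline path and the mail command are placeholders for
whatever alerting you actually use:

#!/bin/bash
# Compare today's active feature list for one pool against a stored baseline.
set -euo pipefail
POOL=backup
BASELINE=/var/lib/zfs-audit/${POOL}.features
CURRENT=$(mktemp)
zpool get -H -o property,value all "$POOL" \
  | awk -F'\t' '/^feature@/ && $2=="active"{print $1}' | sort > "$CURRENT"
if [ ! -f "$BASELINE" ]; then
  mkdir -p "$(dirname "$BASELINE")"
  cp "$CURRENT" "$BASELINE"   # first run: record the baseline, alert on nothing
elif ! diff -u "$BASELINE" "$CURRENT" > "/tmp/${POOL}-feature-drift"; then
  # the active set changed: send the diff to whoever owns portability
  mail -s "ZFS feature drift on ${POOL}" storage-oncall@example.com < "/tmp/${POOL}-feature-drift"
fi
rm -f "$CURRENT"

If the diff fires, treat it like the schema drift mentioned above: find the change, then re-check the portability assumptions.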

Task 14: Confirm the “older” host’s supported features before you promise anything

cr0x@server:~$ for f in spacemap_histogram extensible_dataset embedded_data; do
>   zpool upgrade -v | grep -q "^$f" && echo "supported: $f" || echo "MISSING:   $f"
> done
MISSING:   spacemap_histogram
MISSING:   extensible_dataset
MISSING:   embedded_data

Decision: Every feature that is active on the pool you plan to move must show up as supported here. If anything comes back MISSING, stop.
Either upgrade this host or accept that it is not a valid import target.

Task 15: Export safely before moving hardware (avoid “in use” confusion)

cr0x@server:~$ sudo zpool export backup
cr0x@server:~$ zpool import | grep -A2 'pool: backup'
   pool: backup
     id: 998877665544332211
  state: ONLINE

Output meaning: Export cleanly detaches the pool so the next import is a deliberate action, not a messy takeover.

Decision: Make “export before move” a change-control requirement. It reduces recovery ambiguity and makes later diagnostics less cursed.

Task 16: If you must import a pool that was “in use,” use safe import flags cautiously

cr0x@server:~$ sudo zpool import -f backup
cannot import 'backup': unsupported version or feature

Decision: -f can override “in use” checks. It cannot override “unsupported feature.”
If someone suggests “try -f again,” you have permission to end the meeting.

Three corporate mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption

A mid-sized company had two storage servers: one in production, one in a lab that doubled as “DR if we squint.”
The team treated their ZFS pool as portable infrastructure: same chassis model, same HBA, same cabling, so moving the disks should work.
They’d done it once, months earlier. It “worked,” so it became a plan.

During a maintenance weekend, the production host was replaced with a newer OS image. Someone noticed a harmless-looking warning:
“Some supported features are not enabled on the pool.” They ran zpool upgrade to clean it up.
The pool stayed online. Nothing exploded. Everyone went home early and told themselves they were responsible adults.

Two weeks later, production had a motherboard failure. The team rolled the disks into the lab host to bring services back.
Import failed with “unsupported feature(s).” The lab host’s ZFS stack was older; it didn’t know the features that had been activated.
The plan wasn’t wrong because ZFS is finicky. The plan was wrong because it relied on an undocumented assumption: “ZFS pools are portable across our hosts.”

The fix was painfully unglamorous: they upgraded the lab host’s ZFS to match production (after validating kernel module support),
imported the pool, and then scheduled a real DR rebuild where replication, not disk shuffling, was the recovery method.
The best part: the postmortem showed the initial zpool upgrade wasn’t needed for any business feature.
It was a cosmetic action that turned into an outage multiplier.

Mini-story 2: The optimization that backfired

A data engineering group ran a ZFS-backed analytics cluster. They loved performance tweaks the way some people love new kitchen knives.
One engineer enabled a few properties to improve metadata performance and small I/O: xattr=sa, smaller records for VM-like datasets,
and later a special vdev for metadata and small blocks. Benchmarks looked great. The cluster’s p99 latency charts finally stopped embarrassing them.

Then they decided to replicate the pool to a smaller, cheaper DR system. The DR server was a different OS distribution and lagged behind in OpenZFS version.
Initial replication worked for plain datasets. The moment they included the “optimized” datasets, the receive side started failing in odd ways.
Even where replication succeeded, the DR import testing failed: unsupported features tied to allocation classes and special vdev behavior.

What backfired wasn’t the optimization itself—it was the organizational assumption that DR had the same capabilities as production.
The engineer had improved production performance by enabling features that effectively raised the minimum ZFS dialect required to interpret the pool.
DR wasn’t built to that dialect.

Their long-term fix was to stop treating DR as an afterthought and to standardize ZFS feature support across tiers.
Short-term, they built a compatibility-constrained pool at DR and changed replication scope: only send datasets that matched the DR profile,
and keep the “fancy” datasets on a separate pool with a DR plan that included upgrading the target. The backfired optimization became an architecture lesson:
performance features are also portability decisions.

Mini-story 3: The boring but correct practice that saved the day

A SaaS company had a policy that annoyed everyone: storage pools were tagged with a “minimum import environment.”
Before any ZFS upgrade or pool feature change, the on-call engineer had to run a quick compatibility audit and paste the output into the change request.
It felt like bureaucratic theater. Until it didn’t.

During an urgent hardware refresh, an engineer noticed the new hosts were on a newer OpenZFS release and wanted to “upgrade everything while we’re here.”
The change template required them to list feature flags that would become enabled and compare to the oldest rollback host.
The audit showed that upgrading would activate features not supported on the fallback system.

They made the unpopular choice: do not run zpool upgrade during the refresh. Services migrated fine.
Two days later, a regression in a network driver on the new hosts forced a temporary rollback to the older environment.
Because the pools were not upgraded, the rollback was a standard import/export dance, not a feature-flag hostage situation.

The incident never became a customer-facing outage, and nobody threw a party. That’s the point.
The practice was boring, slightly annoying, and exactly what reliability looks like in real companies.

Common mistakes (symptom → root cause → fix)

1) Symptom: “cannot import ‘POOL’: unsupported version or feature”

Root cause: The pool has active features the current host doesn’t support.

Fix: Import on a host with a newer/matching OpenZFS that supports those features; then migrate via zfs send/receive to a compatible pool, or upgrade the old host’s ZFS.

2) Symptom: “unsupported feature(s): …” list contains a few com.delphix:* entries

Root cause: The importing system is too old (or has a ZFS build without those features), common on older appliances or conservative distros.

Fix: Standardize on a minimum OpenZFS baseline across systems that must share pools, or stop moving disks and switch to replication-based migration.

3) Symptom: You can import on production but not on DR

Root cause: DR is behind on ZFS features; a dataset property change activated features in production.

Fix: Either upgrade DR to match production features, or redesign replication to land on a compatibility-constrained pool created for DR’s capabilities.

4) Symptom: Someone says “just run zpool upgrade, it’s recommended”

Root cause: Confusing “recommended” with “required.” ZFS is telling you new features exist, not that you need them.

Fix: Make zpool upgrade a controlled change with an explicit compatibility impact assessment and rollback plan.

5) Symptom: After enabling encryption, older hosts can’t import or receive replication

Root cause: Encryption relies on feature flags and also introduces key handling differences; replication may require raw send and compatible features.

Fix: Validate encryption feature support on all import/receive endpoints before enabling. If you must support older endpoints, encrypt at a different layer or upgrade endpoints.

6) Symptom: Pool imports, but tooling behaves oddly across systems

Root cause: Mixed userland/kernel module versions, or partial feature support/backports.

Fix: Align ZFS userland and kernel module versions; treat ZFS as a unit, not a menu of packages.

7) Symptom: “zpool import” shows the pool but state is UNAVAIL with “unsupported feature(s)” and also missing devices

Root cause: Two problems at once: incomplete vdev visibility plus feature mismatch. People chase the wrong one first.

Fix: Confirm disk visibility first (by-id paths), then handle feature support. If features are unsupported, no amount of cabling fixes will help.

8) Symptom: Replication succeeds, but testing import of the received pool fails elsewhere

Root cause: You proved send/receive between two systems, not portability to the third system. The received pool may have activated features too.

Fix: Add an import test step on the oldest required platform as a gate in the migration process.

Checklists / step-by-step plan

Checklist A: Before you migrate a pool by moving disks

  1. Identify the oldest system that must import the pool.
    If you can’t name it, assume it’s older than you want.
  2. On the source system, inventory active features.
    Capture zpool get all | grep feature@ output and store it with the change record.
  3. On the destination system, inventory supported features.
    If a feature that is active on the pool is missing from the destination’s supported list (zpool upgrade -v), that means “no import.”
    A capture-and-compare sketch for steps 2 and 3 follows this checklist.
  4. Do a controlled export.
    Export before moving hardware to avoid dirty “in use” states and ambiguous ownership.
  5. Have a “known-good import host” ready.
    A rescue environment with a recent OpenZFS can save you when the destination is too old.
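
A capture-and-compare sketch for steps 2 and 3, run under bash, assuming the pool is named backup, the destination answers to new-host
over SSH, and the temporary files get pasted into the change record afterwards:

# What the pool actually depends on (active features, short names)
zpool get -H -o property,value all backup \
  | awk -F'\t' '/^feature@/ && $2=="active"{print $1}' | sed 's/^feature@//' | sort > /tmp/backup.active
# What the destination's ZFS supports
ssh new-host 'zpool upgrade -v' | awk '/^[a-z]/{print $1}' | sort > /tmp/newhost.supported
# Anything printed here is active on the pool and unsupported at the destination: no import
comm -23 /tmp/backup.active /tmp/newhost.supported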

Checklist B: When building a pool intended to be portable

  1. Define a portability baseline. Example: “must import on OpenZFS 2.1 across Linux and FreeBSD” (whatever your fleet actually runs).
  2. Use compatibility profiles if your platform supports them. If not, document “do not upgrade features” and enforce via process.
  3. Separate experimental features into separate pools. Treat it like running beta schema migrations: isolate blast radius.
  4. Standardize ZFS versions across environments. Production and DR should not run different major feature sets unless you like risk.
  5. Automate feature drift detection. If active features change, alert and investigate.

Checklist C: Migration plan that avoids feature-flag portability traps (recommended)

  1. Create a new target pool that matches your required baseline.
    Use compatibility constraints if available; otherwise use a host with the baseline ZFS to create the pool and never upgrade it.
  2. Replicate with zfs send/receive and validate.
    Send datasets in stages; test mounts and application behavior.
  3. Test import on the oldest required system.
    Export/import the new pool on that system as a real gate, not a thought experiment.
  4. Cut over and keep the old pool read-only for a short window.
    Gives you a safer rollback than “reverse the entire migration.”
  5. Only then consider enabling new features.
    Treat feature activation like a compatibility-breaking API change.

Checklist D: What to do when you’re already stuck

  1. Don’t keep retrying random import flags. Unsupported features won’t yield to persistence.
  2. Find a compatible host (or boot environment) that supports the required features. Import there.
  3. Get the business online using the compatible host if necessary. Stabilize first; migrate later.
  4. Plan the migration to a portable pool via replication. This is the “downgrade” you actually have.
  5. After recovery, add guardrails. Feature drift monitoring and controlled upgrades prevent repeats.

FAQ

1) Are ZFS feature flags the same thing as “pool version”?

No. Pool versions were a single compatibility number. Feature flags are individual capabilities that can be enabled/active independently.
Portability issues still exist; they’re just more granular now.

2) If I never run zpool upgrade, am I safe?

Safer, not invincible. Some features can become active through normal operations or property changes (encryption, special allocation, etc.).
“No upgrades” is necessary in some environments, but it’s not sufficient without auditing.

3) Can I disable a feature flag after it becomes active?

Usually no. Most features are effectively irreversible on a live pool. The practical path is to migrate data to a new pool created with the desired feature set.

4) Does read-only import bypass unsupported features?

Only partially. Features flagged as read-only compatible allow a read-only import on systems that don’t support them; any active feature
that isn’t read-only compatible blocks even a read-only import. Don’t build a recovery plan around it.

5) What’s the difference between “enabled” and “active” in feature listings?

Enabled means the pool is permitted to use the feature; active means the pool has actually used it and now depends on it.
Active is the portability constraint; enabled alone doesn’t block imports, but some features flip to active the moment they’re enabled, so treat enabling as a commitment.

6) Can I replicate from a “new feature” pool to an older system?

Sometimes. zfs send/receive compatibility depends on which dataset features are represented in the stream and what the receiver supports.
Encryption and certain metadata features can force the receiver to support newer flags.

7) Why does the import error mention com.delphix:* features?

Feature names are namespaced by the organization that introduced them (com.delphix, org.open-zfs, org.zfsonlinux, com.datto, and so on).
The prefix doesn’t mean you’re using that vendor’s products. It means your importing ZFS doesn’t understand features that your pool has used.

8) What’s the best operational rule for a mixed fleet?

Pick a baseline OpenZFS feature set and treat it like an API contract. Build pools to that baseline and don’t enable features that exceed it
unless you intentionally isolate them.

9) Is moving disks between Linux and FreeBSD safe if both say “OpenZFS”?

It can be, but only if the feature sets match. “OpenZFS” is a family name, not a guarantee of identical feature support.
Always compare supported features, not just OS branding.

10) What should I store in change records for ZFS portability?

At minimum: zfs version, the pool feature list (zpool get all | grep feature@), and the destination system’s supported feature list (zpool upgrade -v).
If you can’t show compatibility in text, you don’t have compatibility.

Conclusion: next steps you can do this week

Feature flags aren’t a flaw in ZFS. They’re a contract. You either manage that contract, or you violate it accidentally and ZFS refuses to play along.
The refusal is the filesystem doing you a favor—just at the worst possible moment.

  1. Inventory your fleet’s ZFS versions. Write down what runs where, including DR and lab boxes that suddenly become “production.”
  2. Pick a portability baseline. The oldest system you truly need to import on sets the ceiling for features.
  3. Implement a feature audit step before upgrades or migrations. Capture pool features and compare to target supported features.
  4. Stop moving pools as a default migration method. Prefer replication to a pool created with explicit compatibility constraints.
  5. Monitor feature drift. If the active feature set changes, treat it like a breaking change and investigate immediately.

If you do nothing else: stop running zpool upgrade because a status message suggested it. Upgrade deliberately, with a compatibility target,
or accept that you’re burning the bridge behind you. ZFS will remember. So will your on-call rotation.
