ZFS zpool get feature@*: Reading Your Pool’s Real Capabilities


There’s a moment in every ZFS admin’s life when they realize the pool isn’t just “ZFS.” It’s a specific set of on-disk capabilities, negotiated between the code you’re running and the history of upgrades you’ve allowed. And the most honest way to see that reality is a command that looks like a typo until it saves your weekend: zpool get feature@*.

This is not a cosmetic inventory. Feature flags determine what metadata is written, what operations are possible, and—crucially—what other systems can import or replicate the pool. If you’ve ever asked “why won’t this pool import on the older host?” or “why did replication break after an upgrade?” you’ve already met feature flags. You just didn’t get properly introduced.

What feature@* really is (and why it matters)

ZFS feature flags are on-disk markers of specific capabilities. They replaced the older monolithic “pool version” scheme, where a single integer tried to represent an entire compatibility story. Feature flags are more granular: a pool might support one modern feature but not another. More importantly, a feature can be:

  • disabled (not available and not in use)
  • enabled (available for use; pool knows about it)
  • active (actually in use on disk; you’ve crossed the compatibility line)

The operator-grade nuance is this: “enabled” usually means you can still go back (in practical terms: you haven’t written incompatible metadata yet), while “active” means you likely cannot—at least not without destroying and recreating the pool or doing complex migrations. This is why a pool can import on an older box until one day it doesn’t: you didn’t “upgrade the pool,” you activated a feature by using something that required it.
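
You can watch the enabled-to-active flip happen with any feature that activates on first use. Bookmarks are a convenient demo (per zpool-features(7) they are also one of the few features that can drop back to enabled once the last bookmark is destroyed); the dataset and bookmark names here are hypothetical:

cr0x@server:~$ sudo zpool get -H -o value feature@bookmarks tank
enabled
cr0x@server:~$ sudo zfs snapshot tank/data@base
cr0x@server:~$ sudo zfs bookmark tank/data@base tank/data#base
cr0x@server:~$ sudo zpool get -H -o value feature@bookmarks tank
active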

Two more values you’ll see in the SOURCE column of feature properties:

  • local: the state was set on this pool directly (by zpool upgrade, at pool creation, or by an operation that activated the feature)
  • -: the default; this is what disabled features report

(The received source you may know from dataset properties comes from zfs receive; it belongs to dataset properties, not pool-level feature flags.)

ZFS feature flags are like a corporate policy document: nobody reads them until they’re the reason you can’t “just roll back.”

Facts and history: how we got here

Storage people love to pretend we’re timeless, but ZFS has a very specific lineage. Here are some concrete facts and historical context points that change how you interpret feature@* in the real world:

  1. Pool versions were a dead end. Early ZFS tracked a single “version” number for the pool. Vendors diverged; compatibility became political as much as technical.
  2. Feature flags made compatibility composable. Instead of “version 28,” you get a list: spacemap improvements, allocation classes, encryption support, and so on.
  3. OpenZFS unified a fragmented ecosystem. Different operating systems shipped different ZFS implementations; feature flags were a survival mechanism for portability.
  4. “Enabled” vs “active” is operational gold. It lets you stage an upgrade without immediately committing to incompatible on-disk changes.
  5. Some features are one-way doors. Once “active,” you typically can’t revert without rebuilding. That’s not a ZFS quirk; it’s the cost of evolving metadata formats safely.
  6. Feature flags affect replication. zfs send streams may require that the target supports the features represented in the stream. This becomes visible when you mix OS versions or appliance vendors.
  7. Boot pools are special. On some platforms, bootloader support lags behind filesystem features. A boot pool can become unbootable even when the OS could import it just fine.
  8. Encryption isn’t “just a dataset property.” Native ZFS encryption depends on feature support; pools created on older stacks won’t magically gain it without feature flags being present.
  9. Large blocks changed the economics of sequential workloads. The ability to safely use larger record sizes (and underlying large block features) can cut metadata overhead and improve throughput—until it collides with small random IO.

How to read zpool get feature@* output like an operator

Let’s demystify the output. On an OpenZFS system, you’ll see properties like:

cr0x@server:~$ sudo zpool get -H -o name,property,value,source feature@* tank
tank	feature@async_destroy	enabled	local
tank	feature@empty_bpobj	active	local
tank	feature@lz4_compress	active	local
tank	feature@spacemap_histogram	active	local
tank	feature@enabled_txg	enabled	local
tank	feature@extensible_dataset	enabled	local
tank	feature@bookmarks	enabled	local
tank	feature@filesystem_limits	enabled	local
tank	feature@device_removal	disabled	-
tank	feature@allocation_classes	enabled	local
tank	feature@embedded_data	active	local
tank	feature@hole_birth	active	local
tank	feature@large_blocks	active	local
tank	feature@sha512	enabled	local
tank	feature@skein	enabled	local
tank	feature@encryption	enabled	local

Three columns matter day-to-day:

  • property: the feature name, like feature@large_blocks
  • value: disabled, enabled, or active
  • source: where that setting came from (local, -, etc.)

The operational reading is:

  • disabled: this feature is not available on this pool. Your software knows the feature, but the pool hasn’t been upgraded to advertise it. (A feature your software doesn’t know at all shows up as an unsupported@... property if it’s present on disk.)
  • enabled: the pool advertises the feature and can activate it if needed. You may still be compatible with older systems if nothing has made it “active.”
  • active: something has used it and wrote new metadata. Your “import surface area” has shrunk: systems lacking that feature won’t import.

There’s a subtle but critical distinction: zpool upgrade can move many features to “enabled” without making them “active”. But as soon as you perform certain operations (write data with a newer compression algorithm, create a bookmark or an encrypted dataset, remove a top-level vdev, and so on), you may flip to “active.” The “flip” isn’t a drama; it’s a metadata write. ZFS doesn’t do drama. It does irrevocable metadata formats with excellent checksums.
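
If you want the summary version of a pool’s state, tally the features by value. A one-liner, assuming a pool named tank (counts here match the example pool above):

cr0x@server:~$ sudo zpool get -H -o value feature@* tank | sort | uniq -c
      6 active
      1 disabled
      9 enabled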

The fastest way to learn feature flags is to ignore them once: nothing teaches like a pool that refuses to import at 2 a.m.

Practical tasks (commands + interpretation)

Below are operator tasks I’ve actually used in production-like environments. Each includes commands and what you should conclude from them.

Task 1: Dump all pool feature flags (the baseline)

cr0x@server:~$ sudo zpool get feature@* tank
NAME  PROPERTY                     VALUE     SOURCE
tank  feature@async_destroy         enabled   local
tank  feature@empty_bpobj           active    local
tank  feature@lz4_compress          active    local
tank  feature@spacemap_histogram    active    local
tank  feature@enabled_txg           enabled   local
tank  feature@extensible_dataset    enabled   local
tank  feature@bookmarks             enabled   local
tank  feature@filesystem_limits     enabled   local
tank  feature@device_removal        disabled  -
tank  feature@allocation_classes    enabled   local
tank  feature@embedded_data         active    local
tank  feature@hole_birth            active    local
tank  feature@large_blocks          active    local
tank  feature@sha512                enabled   local
tank  feature@skein                 enabled   local
tank  feature@encryption            enabled   local

Interpretation: This pool is already beyond “legacy compatibility” because several features are active. If you expected to import this pool on an older host (or a different vendor appliance), you should verify that host supports the active set, not the enabled set.

Task 2: Show only “active” features (the compatibility line you crossed)

cr0x@server:~$ sudo zpool get -H -o property,value feature@* tank | awk '$2=="active"{print}'
feature@empty_bpobj	active
feature@lz4_compress	active
feature@spacemap_histogram	active
feature@embedded_data	active
feature@hole_birth	active
feature@large_blocks	active

Interpretation: This list is your “must support” set for any system that needs to import the pool. If a DR host can’t import, start by comparing its supported features with this active list.

Task 3: Show only “enabled but not active” (what could become your next problem)

cr0x@server:~$ sudo zpool get -H -o property,value feature@* tank | awk '$2=="enabled"{print}'
feature@async_destroy	enabled
feature@enabled_txg	enabled
feature@extensible_dataset	enabled
feature@bookmarks	enabled
feature@filesystem_limits	enabled
feature@allocation_classes	enabled
feature@sha512	enabled
feature@skein	enabled
feature@encryption	enabled

Interpretation: These are “armed” capabilities. You’re not necessarily using them yet, but a future admin action (or a future default) might activate them. If you maintain strict compatibility targets (e.g., an older replication receiver), you may want policy controls around which features are allowed to become active.
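
One concrete policy control, if your platform ships OpenZFS 2.1 or newer: the pool-level compatibility property restricts which features zpool upgrade will enable, using feature sets defined under /usr/share/zfs/compatibility.d. A minimal sketch (feature-set file names vary by release, so list the directory first):

cr0x@server:~$ ls /usr/share/zfs/compatibility.d
cr0x@server:~$ sudo zpool set compatibility=openzfs-2.0-linux tank
cr0x@server:~$ sudo zpool get -H -o value compatibility tank
openzfs-2.0-linux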

Task 4: Capture a “pool capability snapshot” for change review

cr0x@server:~$ sudo zpool get -H -o property,value,source feature@* tank | sort > /var/tmp/tank.features.$(date +%F).txt
cr0x@server:~$ tail -n 5 /var/tmp/tank.features.$(date +%F).txt
feature@spacemap_histogram	active	local
feature@userobj_accounting	disabled	-
feature@zilsaxattr	disabled	-
feature@zpool_checkpoint	disabled	-
feature@zstd_compress	disabled	-

Interpretation: Treat feature flag state like schema migrations in a database. Save it before upgrades, before platform moves, and before enabling new capabilities like encryption or special vdevs.
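
If you keep those dated files around, diffing the two most recent is a near-free audit step (silence means nothing changed):

cr0x@server:~$ diff -u $(ls -1 /var/tmp/tank.features.*.txt | tail -n 2)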

Task 5: Check pool version/feature system info together (reduce guessing)

cr0x@server:~$ sudo zpool status -x
all pools are healthy
cr0x@server:~$ modinfo zfs 2>/dev/null | head -n 5
filename:       /lib/modules/6.8.0/kernel/zfs/zfs.ko
version:        2.2.4-1
license:        CDDL
description:    ZFS
author:         OpenZFS

Interpretation: Knowing “what ZFS you are” matters when comparing feature support across hosts. Same pool, different ZFS modules, different supported features. Avoid “it’s Linux so it’s fine” thinking—OpenZFS feature availability depends on the shipped version.
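
On OpenZFS 2.x there is a more direct answer to “what ZFS am I?” that also exposes userland/kernel-module mismatches:

cr0x@server:~$ zfs version
zfs-2.2.4-1
zfs-kmod-2.2.4-1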

Task 6: See what zpool upgrade would do before doing it

cr0x@server:~$ sudo zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.

Some supported features are not enabled on the following pools.
Note that the pool may be upgraded to use these features, but doing so
may prevent the pool from being imported on other systems that do not
support the features.

POOL  FEATURE
tank  project_quota
tank  spacemap_v2

Interpretation: This is your “diff.” It tells you the features your current software supports that your pool hasn’t enabled yet. If your DR site or replication target lags behind, this output is a warning label, not a suggestion.
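
To see every feature your current software supports, with short descriptions and a note on which ones are read-only compatible, add -v (output abridged):

cr0x@server:~$ sudo zpool upgrade -v | head -n 8
This system supports ZFS pool feature flags.

The following features are supported:

FEAT DESCRIPTION
-------------------------------------------------------------
async_destroy                         (read-only compatible)
     Destroy filesystems asynchronously.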

Task 7: Upgrade a pool deliberately (and understand the blast radius)

cr0x@server:~$ sudo zpool upgrade tank
This system supports ZFS pool feature flags.

Enabled the following features on 'tank':
  project_quota
  spacemap_v2

Interpretation: This typically marks features as enabled, not necessarily active. But you’ve now made it possible for future operations to activate them—and you may have reduced compatibility with systems that can’t even recognize the enabled features, depending on implementation. The safe assumption: after enabling new features, test import on any “must import” systems.

Task 8: Find feature-related import failures (read the error, don’t freestyle)

cr0x@drhost:~$ sudo zpool import
   pool: tank
     id: 1234567890123456789
  state: UNAVAIL
status: The pool uses the following feature(s) not supported on this system:
        com.delphix:spacemap_histogram
        com.delphix:embedded_data
action: Upgrade the system to support the pool features, or recreate the pool from backup.
   see: zpool-features(7)
config:

        tank        UNAVAIL  unsupported feature(s)
          raidz2-0  ONLINE
            sda     ONLINE
            sdb     ONLINE
            sdc     ONLINE
            sdd     ONLINE

Interpretation: This output hands you the culprit list (import errors print full feature GUIDs like com.delphix:spacemap_histogram; the part after the colon matches the feature@ short name). Cross-check it against “active features” on the source. If the DR host can’t be upgraded, your options are: keep a compatibility pool for replication, use file-level copies, or rebuild a pool without those features (usually requiring data migration).

Task 9: Map pool feature flags to dataset behavior you care about (encryption example)

cr0x@server:~$ sudo zpool get -H -o value feature@encryption tank
enabled
cr0x@server:~$ sudo zfs create -o encryption=on -o keyformat=passphrase tank/secret
Enter new passphrase:
Re-enter new passphrase:
cr0x@server:~$ sudo zfs get -H -o property,value encryption,keystatus tank/secret
encryption	aes-256-gcm
keystatus	available

Interpretation: Pool feature support is a prerequisite. The pool can advertise feature@encryption as enabled, but it becomes operationally relevant once you actually create encrypted datasets. That can also affect replication workflows and boot-time key handling.
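
Re-checking the pool property after that create should show the flip from enabled to active. While you’re at it, capture the encryption root and key status; both matter before any reboot or migration (tank/secret is the dataset created above):

cr0x@server:~$ sudo zpool get -H -o value feature@encryption tank
active
cr0x@server:~$ sudo zfs get -H -o name,value encryptionroot,keystatus tank/secret
tank/secret	tank/secret
tank/secret	available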

Task 10: Verify compression feature alignment (zstd vs lz4)

cr0x@server:~$ sudo zpool get -H -o value feature@lz4_compress tank
active
cr0x@server:~$ sudo zpool get -H -o value feature@zstd_compress tank
disabled
cr0x@server:~$ sudo zfs set compression=lz4 tank/data
cr0x@server:~$ sudo zfs get -H -o value compression tank/data
lz4

Interpretation: If you’re replicating to a target that doesn’t support zstd, using zstd compression on the source can become a migration trap. The feature flags can warn you before you start “optimizing” with the newest codec.
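
Before promising a receiver “we only use lz4,” verify nothing in the tree has quietly adopted zstd; silence here is the good outcome:

cr0x@server:~$ sudo zfs get -r -H -o name,value compression tank | awk '$2 ~ /zstd/'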

Task 11: Detect whether large blocks are in play (and why it changes tuning)

cr0x@server:~$ sudo zpool get -H -o value feature@large_blocks tank
active
cr0x@server:~$ sudo zfs get -H -o property,value recordsize tank/data
recordsize	128K
cr0x@server:~$ sudo zfs set recordsize=1M tank/data
cr0x@server:~$ sudo zfs get -H -o value recordsize tank/data
1M

Interpretation: Large blocks can be a big win for sequential IO (backups, media, analytics), but they’re not free. Once you start pushing recordsize up, random overwrite workloads can punish you with read-modify-write amplification. Feature flags tell you whether the pool can support the metadata required for large blocks safely.

Task 12: Confirm what changed after a risky operation (before/after diff)

cr0x@server:~$ sudo zpool get -H -o property,value feature@* tank | sort > /var/tmp/features.before
cr0x@server:~$ sudo zfs set compression=zstd tank/data
cannot set property for 'tank/data': pool and or dataset must be upgraded to set this property or value
cr0x@server:~$ sudo zpool get -H -o property,value feature@* tank | sort > /var/tmp/features.after
cr0x@server:~$ diff -u /var/tmp/features.before /var/tmp/features.after

Interpretation: ZFS refused the change because the pool doesn’t advertise the zstd compression feature. This is the happy path: you learned without activating anything. Do this kind of “intentional failure testing” in a change window when you’re considering new features.

Task 13: Tie feature flags to space accounting and allocator behavior (spacemap histogram)

cr0x@server:~$ sudo zpool get -H -o value feature@spacemap_histogram tank
active
cr0x@server:~$ sudo zdb -bbbbb tank 2>/dev/null | head -n 20

Interpretation: When you’re deep in allocator performance or fragmentation discussions, spacemap-related features matter. If you’re comparing two pools and one has newer spacemap features active, their behavior under churn can differ enough to invalidate naive benchmarking.

Task 14: Verify replication receiver won’t choke (practical compatibility check)

cr0x@source:~$ sudo zpool get -H -o property,value feature@* tank | awk '$2=="active"{print $1}' | sort > /var/tmp/source.active
cr0x@receiver:~$ sudo zpool get -H -o property,value feature@* backup | awk '$2!="disabled"{print $1}' | sort > /var/tmp/recv.known
cr0x@receiver:~$ comm -23 /var/tmp/source.active /var/tmp/recv.known
feature@embedded_data
feature@spacemap_histogram

Interpretation: This isn’t a perfect compatibility oracle (dataset stream features and property support also matter), but it’s a strong early warning. If the receiver doesn’t even recognize a feature that’s active on the source pool, expect import and possibly send/receive pain.
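
The same comparison can run from one seat if you have SSH reachability to both hosts. A sketch; host names, pool names, and the remote sudo policy are assumptions:

cr0x@server:~$ ssh source "sudo zpool get -H -o property,value feature@* tank" | awk '$2=="active"{print $1}' | sort > /tmp/src.active
cr0x@server:~$ ssh receiver "sudo zpool get -H -o property,value feature@* backup" | awk '$2!="disabled"{print $1}' | sort > /tmp/recv.known
cr0x@server:~$ comm -23 /tmp/src.active /tmp/recv.known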

Three corporate-world mini-stories

Mini-story 1: An incident caused by a wrong assumption

The setup was classic corporate “hybrid maturity.” Production ran on newer Linux with a modern OpenZFS module. DR ran on an older, vendor-supported appliance that hadn’t been updated in a while because “storage is stable” and nobody wanted to reopen the procurement conversation. Replication was file-based for some datasets and ZFS send/receive for others, and it mostly worked—until the day it didn’t.

A routine hardware failure in production escalated into a facility issue. The DR runbook said, “Import the replicated pool on DR and bring services up.” The on-call did exactly that, and the pool import failed with “unsupported feature(s).” The immediate human reaction was predictable: distrust the DR box, assume a cable problem, reseat disks, try again, swap controllers, try again. Time disappeared into the void where hope goes to die.

The root cause was boring: at some point, production had enabled newer features during a maintenance window—nothing dramatic, just “keeping current.” Those features stayed enabled quietly, and later became active when someone used a capability that depended on them (the kind of action that feels like a dataset tweak, not a pool compatibility event). DR never got updated, and nobody compared feature flags between environments.

The fix wasn’t a clever hack. They upgraded the DR platform (after an emergency exception process that took less time than the failed troubleshooting), re-imported, and recovered. The postmortem action item that actually stuck was simple: feature flag snapshots became part of every DR readiness review. Not as a spreadsheet exercise—actual command output stored with change records.

Mini-story 2: An optimization that backfired

A different org had a performance problem: nightly analytics jobs were running long, and the storage team was pressured to “make ZFS faster.” Someone suggested larger records and newer compression. Reasonable ideas, and in a lab they looked great: higher throughput, fewer IOs per gigabyte. The team scheduled a change and rolled it out with confidence.

Then the support tickets arrived. Latency-sensitive services sharing the pool started missing SLOs. The analytics jobs were happier, but the mixed workload turned into a knife fight: random writes and rewrites began paying the price of read-modify-write amplification, and the cache behavior got weird. On top of that, they had toggled into newer feature territory, which made their “just in case” rollback plan a fantasy. They could roll back properties; they could not roll back time.

The technical failure wasn’t “large blocks are bad.” It was assuming the pool was a single workload. Feature flags were part of the story because they made the optimization durable: once the pool and datasets started living in the newer metadata world, moving the data back to older systems—or even just standing up a parallel environment with older compatibility—became a project.

The eventual resolution was more architectural than heroic: split workloads into separate pools (or at least separate vdev classes where appropriate), use large records where the access pattern was truly sequential, and keep conservative defaults for general-purpose datasets. The lesson the team wrote down in plain English was: “Every performance knob is also a compatibility knob, even if it doesn’t look like one.”

Mini-story 3: A boring but correct practice that saved the day

This one didn’t make anyone famous, which is exactly why it worked. A platform team had a rule: before any ZFS-related upgrade—kernel, OpenZFS package, or pool upgrade—they captured a standard “storage state bundle.” It included zpool status, zpool get feature@*, key dataset properties, and a quick replication test to a staging receiver. No exceptions, no “we’ll do it later.”

During an infrastructure refresh, they needed to move a pool to new hardware quickly. The plan was to export, move disks, import. Simple. The team ran their state bundle and noticed an uncomfortable detail: a handful of features were enabled but not active, and one of the downstream consumers was an older system used for a compliance archive. That system didn’t have a path to upgrade in the timeline.

Because they caught it early, they made a deliberate choice: do not activate any new features during the migration window, keep that archive replication path stable, and schedule a longer-term upgrade of the archive system. They also verified that the target boot environment could handle the pool’s feature set, avoiding a boot-pool footgun.

The migration happened, nothing broke, and nobody outside the team noticed—which is the highest compliment in operations. In the retrospective, the “hero move” was literally a text file containing zpool get feature@* output, captured at the right time.

Fast diagnosis playbook

When something smells off—import failures, replication errors, performance regressions—feature flags aren’t always the cause, but they’re often the fastest way to stop guessing. Here’s a pragmatic playbook that gets you to “is this a feature compatibility issue?” quickly, then pivots to bottleneck hunting.

Step 1: Determine if you’re dealing with compatibility or performance

If the problem is “can’t import,” it’s almost certainly compatibility. If the problem is “slow,” feature flags might be contributing indirectly (allocator behavior, block sizes, etc.), but you still need to identify the bottleneck.

cr0x@server:~$ sudo zpool import
cr0x@server:~$ sudo zpool status -x
cr0x@server:~$ sudo zpool get -H -o property,value feature@* tank | awk '$2!="disabled"{print}' | head

Interpretation: Import errors explicitly list unsupported features. If import works, move on to performance checks.

Step 2: For import/replication issues, compare “active” features to the other host

cr0x@source:~$ sudo zpool get -H -o property,value feature@* tank | awk '$2=="active"{print $1}' | sort
cr0x@target:~$ sudo zpool get -H -o property,value feature@* backup | awk '$2!="disabled"{print $1}' | sort

Interpretation: Any active-on-source feature unknown/disabled on target is a red flag. Don’t troubleshoot cables when your metadata is speaking clearly.

Step 3: For performance, check the classic ZFS bottlenecks in order

  1. Pool health and vdev errors (a degraded pool can be “working” but slow)
  2. IO saturation (one vdev pinned while others idle)
  3. ARC pressure (cache misses causing disk churn)
  4. Sync write behavior (SLOG or lack thereof; app doing fsync storms)
  5. Recordsize/compression mismatch (workload not aligned)

cr0x@server:~$ sudo zpool status -v tank
cr0x@server:~$ sudo zpool iostat -v tank 1 10
cr0x@server:~$ sudo arcstat 1 10
cr0x@server:~$ sudo zpool get -H -o property,value ashift,autotrim,feature@* tank | head -n 20
cr0x@server:~$ sudo zfs get -r -H -o name,property,value recordsize,compression,atime,sync tank/data | head -n 20

Interpretation: Feature flags won’t tell you “this vdev is saturated,” but they’ll tell you whether you’re in a world where certain optimizations are possible or already active. Combine both: capabilities and current behavior.

Common mistakes: symptoms and fixes

Mistake 1: Treating “enabled” as “safe” without understanding activation

Symptom: “We upgraded the pool weeks ago and everything was fine; now DR import fails.”

Why it happens: You enabled features, then later performed an action that activated one. The break appears delayed.

Fix: Track active features over time. Implement a policy: if the DR/replication target lags, limit operations that can activate incompatible features until the targets are upgraded. A low-tech implementation is sketched below.
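
Keep an approved baseline of active features under change control, and let cron (or your monitoring agent) complain whenever the live set grows past it. The baseline path is an assumption; any output means a feature went active since the baseline was approved:

cr0x@server:~$ sudo zpool get -H -o property,value feature@* tank | awk '$2=="active"{print $1}' | sort > /tmp/active.now
cr0x@server:~$ comm -13 /etc/zfs/tank.active.baseline /tmp/active.now
feature@encryption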

Mistake 2: Upgrading a boot pool like it’s a data pool

Symptom: System imports the pool fine from a rescue environment, but won’t boot normally.

Why it happens: Bootloader feature support is not identical to runtime ZFS support on some platforms.

Fix: Validate bootloader constraints before enabling new features on boot pools. Keep boot pools conservative; upgrade them only with explicit platform confirmation and rollback plan.
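
On platforms that boot via GRUB and ship OpenZFS 2.1+, the usual guardrail is the grub2 compatibility feature set, which stops zpool upgrade from enabling features the bootloader can’t read. The pool name bpool follows the Ubuntu root-on-ZFS convention; adjust for your layout:

cr0x@server:~$ sudo zpool set compatibility=grub2 bpool
cr0x@server:~$ sudo zpool get -H -o value compatibility bpool
grub2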

Mistake 3: Assuming send/receive will “just work” across mixed versions

Symptom: Replication fails with errors about unsupported features, bookmarks, or stream incompatibility.

Why it happens: Source creates streams requiring features not supported on receiver (pool-level and dataset-level).

Fix: Standardize receiver versions, or use a compatibility staging host that can receive from the source and re-send in a constrained format (when possible). More often, the fix is simply to upgrade the receiver.

Mistake 4: Enabling shiny features during an incident

Symptom: A recovery attempt makes things worse: new pool can’t be imported elsewhere, replication chain breaks, or rollback options vanish.

Why it happens: Under pressure, someone runs zpool upgrade -a or changes properties to “improve performance,” activating features unintentionally.

Fix: In incident mode, freeze feature state. Make “no pool upgrades during incidents” a standing rule unless the upgrade is the explicit fix with a confirmed compatibility plan.

Mistake 5: Using feature flags as a performance tuning proxy

Symptom: “We enabled everything and it’s still slow,” or performance regresses after “upgrading.”

Why it happens: Features are capabilities, not throughput multipliers. Performance depends on vdev layout, workload, ARC, sync behavior, fragmentation, and more.

Fix: Use the fast diagnosis playbook: prove the bottleneck with iostat/latency/ARC stats first, then decide if features and properties are relevant.

Checklists / step-by-step plan

Checklist: Before you run zpool upgrade in production

  1. Capture current feature state.
  2. Identify every system that must import or receive from this pool.
  3. Confirm their ZFS/OpenZFS versions and supported features.
  4. Run zpool upgrade (no args) to see what would change.
  5. Decide: upgrade now, upgrade later, or upgrade selectively (pool-by-pool).
  6. Stage and test import/receive on a non-prod host if you can.

cr0x@server:~$ sudo zpool get -H -o property,value,source feature@* tank | sort > /var/tmp/tank.features.pre
cr0x@server:~$ sudo zpool upgrade
cr0x@server:~$ sudo zpool upgrade tank
cr0x@server:~$ sudo zpool get -H -o property,value,source feature@* tank | sort > /var/tmp/tank.features.post
cr0x@server:~$ diff -u /var/tmp/tank.features.pre /var/tmp/tank.features.post | head -n 50

Checklist: When you’re planning a platform migration

  1. On the source: list active features.
  2. On the target: confirm it recognizes/supports those features.
  3. Confirm boot pool constraints if the pool is boot-critical.
  4. Export/import in a controlled window; verify with a scrub after the move.

cr0x@source:~$ sudo zpool get -H -o property,value feature@* tank | awk '$2=="active"{print $1}' | sort > /var/tmp/tank.active
cr0x@target:~$ sudo zpool get -H -o property,value feature@* tank 2>/dev/null | awk '$2!="disabled"{print $1}' | sort > /var/tmp/target.known
cr0x@target:~$ comm -23 /var/tmp/tank.active /var/tmp/target.known
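
If the target has no pool imported yet (normal mid-migration), compare against what the software itself supports instead. zpool upgrade -v prints feature names at the start of each line, so a little awk normalizes them into the same feature@ form; the output format can drift between releases, so eyeball the list before trusting it (this assumes you copied /var/tmp/tank.active over from the source):

cr0x@target:~$ sudo zpool upgrade -v | awk '/^[a-z]/{print "feature@"$1}' | sort > /var/tmp/target.known
cr0x@target:~$ comm -23 /var/tmp/tank.active /var/tmp/target.known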

Checklist: Build a “compatibility contract” for DR

  1. Define the oldest system that must import pools.
  2. Freeze pool feature activation beyond what that system supports.
  3. Schedule upgrades of DR alongside production, not “later.”
  4. Audit quarterly: compare active features and run a test import/receive.

cr0x@prod:~$ sudo zpool get -H -o property,value feature@* tank | awk '$2=="active"{print $1}' | sort > /var/tmp/prod.active
cr0x@dr:~$ sudo zpool import 2>&1 | sed -n '1,30p'

FAQ

1) What does zpool get feature@* actually show?

It shows the pool’s feature flag properties: which on-disk features are disabled, enabled, or active. This is the most direct view of what the pool can do and what metadata formats it may already be using.

2) What’s the difference between “enabled” and “active”?

Enabled means the pool advertises the feature and can use it. Active means the feature is in use on disk—metadata has been written that requires that feature. Active is the line that typically breaks import compatibility with older systems.

3) Does running zpool upgrade immediately make features active?

Usually it enables them, but activation typically occurs when you perform operations that require those features. The operational risk is delayed: you can enable today and break DR import next month when a feature becomes active.

4) Can I “downgrade” a pool or turn off an active feature?

Not in the general case. Once a feature is active, the pool’s on-disk state depends on it. (A few features can drop back from active to enabled when their last use disappears, bookmarks for example, but don’t build a plan around that.) The practical downgrade path is to migrate data to a new pool created with the desired compatibility constraints.

5) Why does a pool import fail on another system?

Because that system’s ZFS implementation does not support one or more features that are active on the pool. The import error typically lists the exact feature names.

6) Are feature flags the same across all ZFS implementations?

In the OpenZFS ecosystem they’re largely aligned, but not all platforms ship the same OpenZFS version, and some appliance vendors lag or restrict upgrades. Treat “ZFS” as a family name; feature support depends on the specific build.

7) Do feature flags tell me why performance is slow?

Not directly. They tell you what capabilities exist (or are in use) that might influence performance characteristics—like spacemap behavior or large block support. For actual bottlenecks, use zpool iostat, ARC stats, and workload-specific metrics.

8) How do feature flags relate to dataset properties like compression or encryption?

Some dataset properties require pool-level feature support. If the pool doesn’t have the relevant feature enabled, ZFS will refuse to set the property (best case) or you’ll activate a feature when you start using the capability (common case).

9) If a feature is “enabled” but not “active,” should I worry?

You should at least be aware. Enabled features are a capability you might activate accidentally through normal operations. If you have strict interoperability requirements, enabled-but-not-active is a change-management item.

10) What’s the single most useful thing to store for future troubleshooting?

A dated snapshot of zpool get feature@* output (plus zpool status) taken before and after upgrades. It’s low-effort evidence that prevents high-effort guessing later.

Conclusion

zpool get feature@* is the closest thing ZFS has to a truth serum. It tells you what your pool is capable of, what it’s already committed to on disk, and where compatibility lines are drawn—whether you meant to draw them or not. In production, that’s not trivia; it’s the difference between a controlled upgrade and a surprise migration project.

If you take only one operational habit from this: track active features like you track schema versions. Save the output, diff it during changes, and make it part of DR readiness. ZFS will happily do the right thing for years—right up until you ask an older system to understand a future it never learned.
