ZFS zpool import -d: Finding Pools When Device Paths Change

ZFS is ruthless about data integrity and oddly forgiving about everything else. You can yank a controller, swap a backplane, reorder cabling, and ZFS will usually keep your data safe. But it may “forget” where your pool lives, because the operating system’s idea of disk names is… interpretive. Today’s /dev/sdb is tomorrow’s /dev/sde, and the pool that used to import automatically now stares back at you like it’s never met you.

This is where zpool import -d earns its keep. It’s not magic; it’s just telling ZFS where to look for device nodes (or files) that might contain pool members. The trick is knowing what directory to hand it, what it does with that directory, and how to interpret the results under pressure—like when a VP is asking why the “data lake” is currently a “data mirage.”

What zpool import -d actually does

zpool import is ZFS’s “find pools that exist but aren’t currently imported” tool. When you run it with no arguments, it searches a default set of locations for devices that might contain ZFS labels (the metadata at the start/end of each vdev).

zpool import -d <dir> changes the search scope. You’re telling ZFS: “Search this directory for block devices (or files) that may contain ZFS labels.” It will scan entries under that directory, open devices, read labels, and attempt to assemble candidate pools.

Key behaviors that matter in production

  • -d is about where to look, not what to import. It affects discovery. Import still follows normal rules (hostid/active pool checks, mountpoint behavior, etc.).
  • You can specify multiple -d options. This is gold when you have a mix of /dev/disk/by-id, multipath devices, or staging directories.
  • -d can point at a directory of files. Yes: importing pools from image files (for forensics or lab restores) is a real thing.
  • Discovery can be slowed by “too much directory.” Pointing it at /dev on a system with lots of device nodes, stale multipath maps, or weird container mounts can turn “quick scan” into “why is this hanging.”

Joke #1: If you’ve ever trusted /dev/sdX names in production, you’re either very brave or you’ve never rebooted.

Why device paths change (and why ZFS cares)

Linux block device names like /dev/sda are assigned based on discovery order. Discovery order changes because the world changes: different firmware timings, HBA resets, kernel updates, a disk that spins up slower today, a PCIe bus re-enumeration, or multipath taking a different route first.

ZFS records vdev identity in on-disk labels by GUID (along with the last-known device path), but it still needs the OS to present a node it can open and read. If your pool was created on /dev/sdb and your cachefile or systemd units still expect that path, you can end up with a boot that doesn’t auto-import, or an import that comes up under an unexpected path.

Common path families you’ll meet

  • /dev/sdX: short, convenient, and unstable across reboots and topology changes.
  • /dev/disk/by-id/: stable, based on WWN/serial; preferred for servers.
  • /dev/disk/by-path/: stable-ish, describes bus path; can change if you move cables/HBAs.
  • /dev/mapper/mpath*: multipath devices; stable but requires correct multipath config (and discipline).
  • /dev/zvol/: ZFS volumes presented as block devices; not relevant for import but often confused in incident chats.

Interesting facts and historical context

Some context makes today’s weird behavior feel less random—and helps you explain it to people who think storage is “just disks.”

  1. ZFS stores four labels per device. Two copies sit at the start and two at the end of each vdev member, which increases resilience to partial overwrites and makes discovery possible even when one end is damaged.
  2. Early ZFS design prioritized self-description. Pools describe themselves: topology, vdev GUIDs, and config are embedded on-disk, not in a separate “volume manager database.”
  3. The “hostid” check exists to stop you from importing an active pool elsewhere. It’s a safety feature against split-brain and double-mount disasters.
  4. /dev/sdX naming is intentionally not stable. Linux treats it as an implementation detail, not an interface contract. That’s why udev’s by-id/by-path exist.
  5. ZFS predates a lot of today’s Linux boot expectations. Many “ZFS didn’t import at boot” issues are really init system ordering, udev timing, or cachefile handling—modern boot stacks are fast, and storage sometimes isn’t.
  6. Multipath can make import look “haunted.” If both raw paths and multipath maps exist, ZFS can see duplicates. Your pool can be “importable” twice, and neither is the one you want.
  7. Importing read-only is an intentional feature. It allows safe forensics and reduces risk when you’re not sure the pool was cleanly exported.
  8. ZFS can import pools from regular files. That’s not a party trick; it’s how many labs test upgrades and how some incident teams do quick offline recovery checks.

Fast diagnosis playbook

When a pool “disappears” after a path change, your goal is to answer three questions quickly: (1) are the disks visible, (2) are ZFS labels readable, and (3) is something blocking import (hostid, active use, multipath, missing vdevs).

First: confirm the OS sees the hardware you think it sees

cr0x@server:~$ lsblk -o NAME,TYPE,SIZE,MODEL,SERIAL,WWN,FSTYPE,MOUNTPOINTS
NAME   TYPE  SIZE MODEL            SERIAL        WWN                FSTYPE MOUNTPOINTS
sda    disk  1.8T HGST_HUS724...    K7J1...       0x5000cca25...             
sdb    disk  1.8T HGST_HUS724...    K7J2...       0x5000cca25...             
nvme0n1 disk 3.6T SAMSUNG_MZ...     S4EV...       0x0025385...               

Interpretation: if the expected WWNs/serials aren’t present, this is not a ZFS problem yet. It’s cabling, HBA, expander, multipath, or a dead disk. If the disks are present but names changed, proceed.

Second: check what ZFS thinks is importable

cr0x@server:~$ sudo zpool import
   pool: tank
     id: 12345678901234567890
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank                          ONLINE
          raidz1-0                    ONLINE
            /dev/sdb                  ONLINE
            /dev/sdc                  ONLINE
            /dev/sdd                  ONLINE

Interpretation: if this shows /dev/sdX paths you no longer trust, you can still import—but you should fix persistent naming immediately afterward. If nothing shows up, you likely need -d to point ZFS at the right namespace.

Third: search in the stable namespace and look for duplicates

cr0x@server:~$ sudo zpool import -d /dev/disk/by-id
   pool: tank
     id: 12345678901234567890
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank                                            ONLINE
          raidz1-0                                      ONLINE
            wwn-0x5000cca25abcd001                      ONLINE
            wwn-0x5000cca25abcd002                      ONLINE
            wwn-0x5000cca25abcd003                      ONLINE

Interpretation: this is what you want. If it also shows up via /dev/sdX or via /dev/mapper, stop and decide which layer is authoritative. Importing via the “wrong” layer is how you earn a weekend.

Fourth: if import is blocked, identify why before forcing

cr0x@server:~$ sudo zpool import tank
cannot import 'tank': pool may be in use from other system
use '-f' to import anyway

Interpretation: do not reflexively add -f. Confirm the pool isn’t actually imported somewhere else (or wasn’t cleanly exported) and confirm you’re not seeing the same LUNs from two paths (SAN zoning mistakes happen).
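
Before deciding, you can ask the disks who touched them last: on pools that were not cleanly exported, the vdev labels usually record the hostid and hostname of the system that had them imported. A quick check, reusing the by-id device from the listing above (values are illustrative):

cr0x@server:~$ sudo zdb -l /dev/disk/by-id/wwn-0x5000cca25abcd001 | grep -E 'hostid|hostname'
    hostid: 2831154966
    hostname: 'old-node-01'

Interpretation: if that hostname belongs to a machine that is powered off or demonstrably cut off from these disks, a forced import is defensible; if it’s alive and still zoned in, it isn’t.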

Practical tasks: commands + interpretation

Below are the tasks I actually run in the field. Each has a reason, a command, and how to read the result. Use them as building blocks.

Task 1: Show importable pools and the device paths ZFS currently sees

cr0x@server:~$ sudo zpool import
   pool: backup
     id: 8811223344556677889
  state: DEGRADED
status: One or more devices could not be opened.
action: The pool can be imported despite missing devices.
   see: http://zfsonlinux.org/msg/ZFS-8000-2Q
 config:

        backup                         DEGRADED
          mirror-0                     DEGRADED
            /dev/sda                   ONLINE
            15414153927465090721       UNAVAIL

Interpretation: ZFS is telling you a device by GUID is missing. That’s not “device renaming”; that’s “device not present” (or present under a different namespace you didn’t scan).
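
When ZFS only shows a GUID, you can find out which device actually carries it by brute-forcing the labels across a stable namespace. A minimal sketch, reusing the GUID from the output above:

cr0x@server:~$ for d in /dev/disk/by-id/wwn-*; do sudo zdb -l "$d" 2>/dev/null | grep -q 15414153927465090721 && echo "$d"; done
/dev/disk/by-id/wwn-0x5000c500a1b2c3e5

Interpretation: if the loop prints a device, the “missing” member exists and you scanned the wrong place; if it prints nothing, the device (or its labels) really is gone.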

Task 2: Search a specific directory for devices (-d basics)

cr0x@server:~$ sudo zpool import -d /dev/disk/by-id
   pool: backup
     id: 8811223344556677889
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
 config:

        backup                                           ONLINE
          mirror-0                                       ONLINE
            wwn-0x5000c500a1b2c3d4                        ONLINE
            wwn-0x5000c500a1b2c3e5                        ONLINE

Interpretation: the “missing” disk was actually present; you just weren’t looking in the right place.

Task 3: Use multiple -d to cover mixed environments

cr0x@server:~$ sudo zpool import -d /dev/disk/by-id -d /dev/mapper
   pool: sanpool
     id: 4000111122223333444
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
 config:

        sanpool                                          ONLINE
          mirror-0                                       ONLINE
            mpatha                                       ONLINE
            mpathb                                       ONLINE

Interpretation: if your storage is multipathed, prefer importing via /dev/mapper/mpath* or by-id for the multipath device, not the raw /dev/sdX paths. Consistency matters more than taste.

Task 4: Detect duplicate device presentations (raw paths vs multipath)

cr0x@server:~$ ls -l /dev/disk/by-id | grep -E 'wwn|dm-uuid|mpath' | head
lrwxrwxrwx 1 root root  9 Dec 25 09:10 dm-uuid-mpath-3600508b400105e210000900000490000 -> ../../dm-2
lrwxrwxrwx 1 root root  9 Dec 25 09:10 mpath-3600508b400105e210000900000490000 -> ../../dm-2
lrwxrwxrwx 1 root root  9 Dec 25 09:10 wwn-0x600508b400105e210000900000490000 -> ../../sdb

Interpretation: if the same LUN appears as both dm-* and sd*, you must decide which one ZFS should use. Importing with the raw sd* devices while multipath is active is how you invite intermittent I/O errors and path flaps into your pool.
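
To see exactly which raw paths sit behind a multipath map before you decide, multipath -ll lays out the topology. A trimmed, illustrative example (WWID matching the listing above; array model and path counts are assumptions):

cr0x@server:~$ sudo multipath -ll
mpatha (3600508b400105e210000900000490000) dm-2 HP,MSA2040
size=1.8T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 3:0:0:1 sdb 8:16 active ready running
  `- 4:0:0:1 sdf 8:80 active ready running

Interpretation: sdb and sdf are two paths to the same LUN; the import should go through mpatha (dm-2), never through the sd* nodes.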

Task 5: Look at ZFS labels directly (sanity check)

cr0x@server:~$ sudo zdb -l /dev/disk/by-id/wwn-0x5000cca25abcd001 | head -n 25
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 1234567
    pool_guid: 12345678901234567890
    vdev_guid: 11112222333344445555
    top_guid: 66667777888899990000
    guid: 11112222333344445555

Interpretation: if zdb -l can read a label, discovery should work. If it can’t, you may have a device permission issue, a dying disk, or you’re pointing at the wrong thing (partition vs whole disk, mapper vs raw).

Task 6: Import by numeric pool ID (useful when names collide or are confusing)

cr0x@server:~$ sudo zpool import
   pool: tank
     id: 12345678901234567890
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.

cr0x@server:~$ sudo zpool import -d /dev/disk/by-id 12345678901234567890

Interpretation: importing by ID avoids mistakes when someone renamed pools inconsistently across environments (yes, that happens).

Task 7: Import read-only to inspect safely

cr0x@server:~$ sudo zpool import -o readonly=on -d /dev/disk/by-id tank
cr0x@server:~$ zpool status tank
  pool: tank
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            wwn-0x5000cca25abcd001                      ONLINE       0     0     0
            wwn-0x5000cca25abcd002                      ONLINE       0     0     0
            wwn-0x5000cca25abcd003                      ONLINE       0     0     0

Interpretation: read-only import is a low-risk way to confirm you found the right pool and that it’s healthy before you let services loose on it.

Task 8: Import without mounting datasets (control blast radius)

cr0x@server:~$ sudo zpool import -N -d /dev/disk/by-id tank
cr0x@server:~$ zfs list -o name,mountpoint,canmount -r tank | head
NAME            MOUNTPOINT  CANMOUNT
tank            /tank       on
tank/home       /home       on
tank/pg         /var/lib/pg on

Interpretation: -N imports the pool but doesn’t mount datasets. This is great when you want to set properties, check encryption keys, or avoid auto-starting a database that will immediately panic about missing network dependencies.
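
After a -N import, the follow-up is deliberate: load keys if the datasets are encrypted, then mount only what you intend to. A minimal sketch, assuming tank/pg is the dataset you need first:

cr0x@server:~$ sudo zfs load-key -r tank     # only needed for encrypted datasets
cr0x@server:~$ sudo zfs mount tank/pg
cr0x@server:~$ sudo zfs mount -a             # once you're satisfied, mount the rest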

Task 9: Show the vdev paths ZFS will use after import (and fix them)

cr0x@server:~$ zpool status -P tank
  pool: tank
 state: ONLINE
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            /dev/sdb               ONLINE       0     0     0
            /dev/sdc               ONLINE       0     0     0
            /dev/sdd               ONLINE       0     0     0

Interpretation: -P shows full paths. If you see /dev/sdX, decide if you can tolerate it. In most environments, you can’t.

To migrate to stable by-id naming, you typically export and re-import using the desired paths:

cr0x@server:~$ sudo zpool export tank
cr0x@server:~$ sudo zpool import -d /dev/disk/by-id tank
cr0x@server:~$ zpool status -P tank | sed -n '1,25p'
  pool: tank
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        tank                                      ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            /dev/disk/by-id/wwn-0x5000cca25abcd001 ONLINE       0     0     0
            /dev/disk/by-id/wwn-0x5000cca25abcd002 ONLINE       0     0     0
            /dev/disk/by-id/wwn-0x5000cca25abcd003 ONLINE       0     0     0

Interpretation: this is the simplest path stabilization: change the import discovery and let ZFS record the chosen paths.

Task 10: Handle “pool may be in use” safely (hostid / force import)

cr0x@server:~$ sudo zpool import -d /dev/disk/by-id tank
cannot import 'tank': pool may be in use from other system
use '-f' to import anyway

Interpretation: common after moving disks to a new host, restoring a VM from snapshot, or after an unclean shutdown where the old system never exported the pool.

If you are sure the pool is not actively imported elsewhere, force import:

cr0x@server:~$ sudo zpool import -f -d /dev/disk/by-id tank

Interpretation: the -f flag bypasses the host check. Use it like a chainsaw: effective, but only when you’re holding it correctly.

Task 11: Recover from missing cachefile / wrong cachefile assumptions

cr0x@server:~$ sudo zpool get cachefile tank
NAME  PROPERTY  VALUE     SOURCE
tank  cachefile /etc/zfs/zpool.cache local

cr0x@server:~$ ls -l /etc/zfs/zpool.cache
ls: cannot access '/etc/zfs/zpool.cache': No such file or directory

Interpretation: some distros rely on a cachefile for auto-import at boot. If the file is missing or stale, the pool may not import automatically even though manual import works fine.

After importing the pool, regenerate the cachefile:

cr0x@server:~$ sudo zpool set cachefile=/etc/zfs/zpool.cache tank
cr0x@server:~$ sudo zpool export tank
cr0x@server:~$ sudo zpool import -d /dev/disk/by-id tank

Interpretation: exporting and importing causes the cachefile to be updated in a consistent way on many systems. (Exact boot integration varies, but the pattern holds.)
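
It’s also worth confirming which import path your init system actually takes. On systemd-based distros, OpenZFS ships zfs-import-cache.service (cachefile-driven) and zfs-import-scan.service (device scan); unit names and defaults vary slightly by distro, but only one of them should be doing the work:

cr0x@server:~$ systemctl list-unit-files | grep -E 'zfs-import|zfs-mount'
zfs-import-cache.service          enabled
zfs-import-scan.service           disabled
zfs-mount.service                 enabled

Interpretation: on many systems the cache unit is skipped entirely when the cachefile is missing or empty, which is exactly the “imports manually but not at boot” symptom.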

Task 12: Import when partitions are involved (whole disk vs part confusion)

cr0x@server:~$ lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb
NAME   SIZE TYPE FSTYPE
sdb     1.8T disk
└─sdb1  1.8T part zfs_member

cr0x@server:~$ sudo zpool import -d /dev/disk/by-id tank

Interpretation: if your pool was built on partitions, ensure you’re scanning a directory that includes partition symlinks (by-id usually does). If you point -d to a directory that only contains whole-disk nodes but the labels are on ...-part1, discovery can fail.
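
A quick way to confirm the partition symlinks are actually present in the namespace you’re about to scan (names are illustrative):

cr0x@server:~$ ls /dev/disk/by-id/ | grep -E 'wwn-.*-part1'
wwn-0x5000cca25abcd001-part1
wwn-0x5000cca25abcd002-part1
wwn-0x5000cca25abcd003-part1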

Task 13: Import a pool from image files (lab/forensics move)

cr0x@server:~$ sudo mkdir -p /srv/zfs-images
cr0x@server:~$ sudo losetup -fP /srv/zfs-images/disk0.img
cr0x@server:~$ sudo losetup -fP /srv/zfs-images/disk1.img
cr0x@server:~$ losetup -a | grep zfs-images
/dev/loop0: [2065]:12345 (/srv/zfs-images/disk0.img)
/dev/loop1: [2065]:12346 (/srv/zfs-images/disk1.img)

cr0x@server:~$ sudo zpool import -d /dev/loop0 -d /dev/loop1 tanklab

Interpretation: loop devices are still devices; point -d at where they actually live (recent OpenZFS also accepts a bare device path after -d, not just a directory). You could even skip losetup and run zpool import -d /srv/zfs-images, since -d scans regular files as well as device nodes. Either way, -d is conceptually “where to look,” not “what type of disk.”

Task 14: When import is slow or appears hung, narrow the scan

cr0x@server:~$ time sudo zpool import -d /dev
^C

cr0x@server:~$ time sudo zpool import -d /dev/disk/by-id
real    0m1.214s
user    0m0.084s
sys     0m0.190s

Interpretation: scanning /dev is the “I’m panicking” move. It can work, but you pay for it. The stable namespaces are faster and less error-prone.
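
If even by-id is crowded (big JBODs, thousands of symlinks), you can narrow the scan further by staging symlinks to just the expected members in a scratch directory and pointing -d at that. A sketch; the /run/zpool-scan path is arbitrary:

cr0x@server:~$ sudo mkdir -p /run/zpool-scan
cr0x@server:~$ sudo ln -s /dev/disk/by-id/wwn-0x5000cca25abcd00{1,2,3} /run/zpool-scan/
cr0x@server:~$ sudo zpool import -d /run/zpool-scan tank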

Three corporate-world mini-stories

Mini-story 1: The incident caused by a wrong assumption

The outage didn’t start with ZFS. It started with a well-meaning hardware refresh: new HBAs with a newer firmware, installed during a maintenance window that was “definitely enough time.” The system came back, the OS booted, and the monitoring dashboard went red because the primary pool never imported.

The on-call engineer logged in and saw that the disks were present. They weren’t /dev/sd[bcd...] anymore; they were /dev/sd[fgh...]. That’s normal. The wrong assumption was that “normal” also meant “harmless.” The auto-import unit had been built around stale expectations: a cachefile that referenced old paths, plus a custom script that grepped for specific /dev/sdX nodes like it was 2009.

Under pressure, someone ran zpool import -f tank without specifying -d, which imported via the first set of devices ZFS found. Unfortunately, multipath had also been enabled in the new build, and the pool got imported using raw /dev/sdX paths even though the SAN team expected /dev/mapper/mpath*. Everything looked fine until a path failover happened—then I/O latency spiked and checksum errors started appearing, because the system was now juggling inconsistent device layers.

The fix was boring: export the pool, import it using the intended namespace (/dev/mapper in that environment), regenerate the cachefile, and delete the custom script. The postmortem conclusion was sharper: if your automation depends on /dev/sdX, you don’t have automation; you have a countdown timer.

Mini-story 2: The optimization that backfired

A different team wanted faster boots. They’d been told that ZFS import could add seconds, and seconds are apparently unacceptable when you run microservices that take minutes to become useful. So they “optimized” by stripping out disk-by-id rules from their initramfs and leaning on a simplified device discovery, assuming the pool would always show up under a predictable path.

It worked—right up until it didn’t. A minor kernel update changed the timing of device enumeration. The pool members appeared, but not in time for the import stage. The boot proceeded without the pool, services came up pointing at empty directories, and the node registered as “healthy” because the health checks were application-level, not storage-level. You could feel the blast radius: containers happily wrote to the root filesystem, filling it, while the real dataset sat unmounted like a silent witness.

When the team tried to fix it, they ran zpool import -d /dev because it “covers everything.” On that host, “everything” included hundreds of device nodes from container runtimes and a mess of stale mapper entries. Import scans got slow, timeouts piled up, and the recovery runbook became a choose-your-own-adventure.

The eventual fix was to undo the optimization: restore stable udev naming into the early boot environment, import using /dev/disk/by-id, and add a hard gate so services do not start unless the ZFS datasets are mounted. Boots were a couple seconds slower; incidents were hours shorter. The trade was obvious once someone had to live through it.

Mini-story 3: The boring but correct practice that saved the day

The calmest ZFS recovery I’ve seen happened in a company that treated storage like an aircraft system: unglamorous, checklisted, and never trusted “tribal knowledge.” They had a rule: every pool must be imported using by-id paths, and every system must have a verified export procedure before any hardware move.

During a data center migration, a chassis was powered down, disks moved, and powered back up. Predictably, device names changed. Also predictably, nobody cared. The runbook said: verify disks by WWN, run zpool import -d /dev/disk/by-id, import with -N, check status, then mount datasets. They followed it.

They hit one snag: the pool reported “may be in use from other system.” Instead of forcing immediately, they checked whether the old node still had access through a lingering SAN zone. It did. They removed the zone, waited for paths to disappear, and then imported cleanly—no force required.

The best part was the tone in the incident channel. No drama, no heroics, no “try this random flag.” Just a slow, correct sequence. It was boring in the way you want production to be. Joke #2: Their storage runbook was so dull it could cure insomnia—unless you enjoy sleeping through outages.

Common mistakes (with symptoms and fixes)

Mistake 1: Scanning /dev and hoping for the best

Symptom: zpool import -d /dev is slow, appears stuck, or returns confusing duplicates.

Fix: scan only stable namespaces: /dev/disk/by-id for local disks, /dev/mapper for multipath. Use multiple -d if needed.

cr0x@server:~$ sudo zpool import -d /dev/disk/by-id -d /dev/mapper

Mistake 2: Importing via raw paths when multipath is enabled

Symptom: pool imports, but later shows intermittent I/O errors, path flaps, or weird performance jitter during controller failover.

Fix: make sure multipath claims every path so ZFS discovers only the multipath devices (scan /dev/mapper, not /dev), then export/import using /dev/mapper/mpath* (or the corresponding dm-uuid links under by-id).
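
The repair itself is the usual export/re-import with the right discovery scope; a sketch using the sanpool example from Task 3 (trimmed status output):

cr0x@server:~$ sudo zpool export sanpool
cr0x@server:~$ sudo zpool import -d /dev/mapper sanpool
cr0x@server:~$ zpool status -P sanpool | grep mapper
            /dev/mapper/mpatha                 ONLINE       0     0     0
            /dev/mapper/mpathb                 ONLINE       0     0     0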

Mistake 3: Using -f as a first response

Symptom: pool may be in use from other system appears; operator forces import without verifying exclusivity.

Fix: confirm the old host is down or no longer has access to the disks/LUNs. If it’s a SAN, confirm zoning/masking. If it’s a VM, confirm you didn’t attach the same virtual disks to two guests.

Mistake 4: Confusing “missing disk” with “wrong directory scanned”

Symptom: zpool import shows a vdev as UNAVAIL by GUID, but lsblk shows the disk exists.

Fix: scan the namespace that includes the actual node holding the ZFS labels (partition vs whole disk). Try -d /dev/disk/by-id and verify with zdb -l.

Mistake 5: Importing and immediately mounting everything

Symptom: after import, services start too early or datasets mount in unexpected places, causing application corruption or “wrote to root FS” incidents.

Fix: import with -N, inspect, set needed properties, then mount deliberately.

cr0x@server:~$ sudo zpool import -N -d /dev/disk/by-id tank

Mistake 6: Believing the cachefile is authoritative truth

Symptom: pool imports manually but not at boot; or imports with odd paths after upgrades/moves.

Fix: regenerate cachefile after importing with desired device paths; ensure init system uses the right cachefile location for your distro.
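
For reference, the regeneration is a single property set on the imported pool (the path below is the common Linux default; your distro may use another location):

cr0x@server:~$ sudo zpool set cachefile=/etc/zfs/zpool.cache tank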

Checklists / step-by-step plan

Step-by-step: “Pool missing after reboot” recovery

  1. Confirm hardware visibility: verify expected WWNs/serials.
  2. Scan stable namespace: use zpool import -d /dev/disk/by-id.
  3. If multipath: ensure you import via /dev/mapper, not raw paths.
  4. Import safely first: use -o readonly=on or -N depending on situation.
  5. Check health: zpool status, confirm no unexpected missing vdevs.
  6. Fix path persistence: export and re-import via by-id/mapper; regenerate cachefile.
  7. Mount datasets intentionally: then start services.
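
Condensed into commands, the same sequence looks roughly like this; a sketch assuming a local pool named tank on by-id paths and no multipath in play:

cr0x@server:~$ lsblk -o NAME,SERIAL,WWN                              # 1. hardware visible?
cr0x@server:~$ sudo zpool import -d /dev/disk/by-id                  # 2. what's importable?
cr0x@server:~$ sudo zpool import -N -d /dev/disk/by-id tank          # 4. import, nothing mounted
cr0x@server:~$ zpool status -P tank                                  # 5. health and recorded paths
cr0x@server:~$ sudo zpool set cachefile=/etc/zfs/zpool.cache tank    # 6. persist for boot
cr0x@server:~$ sudo zfs mount -a                                     # 7. mount, then start services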

Checklist: Before you move disks to a new host

  1. Export the pool cleanly (zpool export) and confirm it’s gone (zpool list should not show it).
  2. Record pool layout: zpool status -P output saved in your ticket.
  3. Record WWNs: lsblk -o NAME,SERIAL,WWN.
  4. On the target host, confirm those WWNs appear before attempting import.
  5. Import with explicit -d pointing at your chosen namespace.

Checklist: Deciding your “one true path” policy

  • Local SATA/SAS: prefer /dev/disk/by-id/wwn-* or serial-based by-id.
  • SAN with multipath: prefer /dev/mapper/mpath* or dm-uuid-based by-id pointing to dm devices.
  • Avoid: raw /dev/sdX in anything that has a change window longer than a coffee break.

FAQ

1) What does zpool import -d scan exactly?

It scans device nodes (or files) found under the directory you specify, looking for ZFS labels. Think “search path for potential vdev members.” It doesn’t recursively scan your whole filesystem—just entries in that directory namespace.

2) Should I always use /dev/disk/by-id?

For local disks, yes in most production environments. It’s stable across reboots and controller enumeration changes. For SAN/multipath, /dev/mapper is often the correct layer. The important part is choosing one consistently.

3) Why does zpool import show GUIDs instead of device names sometimes?

When a vdev can’t be opened, ZFS falls back to the vdev GUID from the on-disk config. That’s a clue: ZFS knows what it wants, but the OS isn’t presenting it at the path it tried (or at all).

4) Is it safe to run zpool import -f?

It can be safe if you are certain the pool is not active elsewhere. It’s unsafe if the same disks/LUNs are accessible from another host that might also import them. Confirm exclusivity first.

5) Why does import work manually but not at boot?

Usually: device discovery timing (disks appear after import attempt), stale/missing cachefile, or init ordering. Manual import happens later, after udev and storage have settled.

6) Can I use zpool import -d to import a pool built on partitions?

Yes, as long as the directory you scan includes the partition nodes/symlinks (for example, ...-part1 under by-id). If you scan a directory containing only whole disks while the pool lives on partitions, discovery may fail.

7) What’s the difference between zpool import -d and fixing device names permanently?

-d is a discovery hint for this import attempt. Permanent stability comes from consistently importing using stable paths (by-id/mapper) and ensuring your boot process and cachefile align with that.

8) Why does import sometimes show the same pool twice?

Because the same underlying storage is visible through multiple device layers (raw path and multipath, or multiple symlink namespaces). That’s a signal to stop and pick the intended layer before importing.

9) How do I confirm a specific disk is part of the pool before importing?

Use zdb -l against the candidate device to read its label and confirm the pool name and GUID match what you expect.

Conclusion

zpool import -d is ZFS’s way of saying: “Tell me where the devices are today, and I’ll do the rest.” In a world where device names are assigned by timing and whim, being explicit about discovery paths is not optional—it’s how you keep recovery predictable.

When device paths change, don’t flail. Confirm the disks exist, scan the right namespace, import safely (-N or read-only), and then make the fix durable by standardizing on stable paths and keeping your boot/import plumbing honest. If you do that, device renames become a footnote instead of a headline.
