ZFS sharenfs/sharesmb: Convenience vs Control in Production

You built a nice ZFS dataset hierarchy. You flipped sharenfs=on or sharesmb=on because it looked clean.
Then a client can’t mount, or worse, a client can mount that absolutely shouldn’t. The pager goes off, and suddenly you’re spelunking
through three different layers of “helpful automation” that nobody remembers turning on.

ZFS share properties are a power tool. In the right hands they’re a torque wrench. In the wrong hands they’re a cordless angle grinder:
fast, loud, and leaving sparks on the carpet. This piece is about where that convenience ends—and how to keep control without turning
your storage host into a bespoke snowflake.

What sharenfs/sharesmb really do (and what they don’t)

sharenfs in one sentence

sharenfs is a ZFS dataset property that can instruct the host to export the dataset over NFS using the platform’s share
subsystem, usually by generating export rules dynamically and calling the OS share mechanism (not by magic).

sharesmb in one sentence

sharesmb is a ZFS dataset property that can instruct the host to publish the dataset as an SMB share using the platform’s
SMB service integration (on illumos/Solaris historically; on Linux it depends on your tooling and distro integration).

Why this matters: you are choosing an orchestration layer

ZFS itself does not speak NFS or SMB. It delegates. Those dataset properties are effectively a tiny share orchestrator glued to your
filesystem metadata. That’s both the appeal and the risk: the share definition travels with the dataset, can be inherited, cloned, and
replicated—then surprises you later because it’s still there.
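
A minimal illustration of that delegation, assuming a Linux host where the ZFS share integration and the NFS server are both active (dataset, network, and options are reused from examples later in this piece; output abridged):

cr0x@server:~$ sudo zfs set sharenfs='rw=@10.20.0.0/16' tank/projects
cr0x@server:~$ exportfs -v | grep '/tank/projects'
/tank/projects  10.20.0.0/16(sync,wdelay,hide,no_subtree_check,sec=sys,rw,root_squash)

The property stores intent; the export table is what the kernel actually serves. Keeping those two in agreement is the whole game.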

The uncomfortable truth: behavior differs by OS and “ZFS flavor”

On illumos-derived systems (and older Solaris lineage), ZFS share properties are first-class citizens: zfs share,
zfs unshare, sharetab integration, SMB sharing through the OS service manager—the whole deal. On Linux with OpenZFS, the
properties exist, but share activation may be implemented through helper scripts, systemd units, or packaging defaults. Some
environments treat sharenfs as “metadata you can query” and nothing more until you install and enable the right services.

Control question: do you want shares to be “declared” at the ZFS layer (dataset properties) or “declared” at the service layer
(/etc/exports, Samba config, or a config management system)? If you can’t answer crisply, you’re about to discover the
answer at 3 AM.

What these properties do not do

  • They do not enforce network-layer security. They only influence share/export configuration.
  • They do not replace proper identity and permission design (UID/GID mapping, ACLs, SMB identities).
  • They do not guarantee consistent behavior across your fleet unless your OS tooling is consistent.
  • They do not prevent someone from editing /etc/exports or Samba config and drifting from the ZFS intent.

A share property is like writing “deliver to front door” on a package. It helps. It is not a door lock.

Convenience vs control: a production trade study

When share properties are a win

Use sharenfs/sharesmb when you want the dataset to be self-describing, especially when you:

  • Provision and deprovision datasets frequently (CI caches, project workspaces, ephemeral analytics sandboxes).
  • Clone datasets and want the share intent to follow the clone (sometimes you do, often you don’t—hold that thought).
  • Use ZFS replication and want the receiving side to know “this dataset is meant to be exported,” even if you don’t auto-export it.
  • Need consistent, queryable state: “Which datasets are shared?” is one zfs get away.

When share properties become a liability

Avoid relying on them as your primary control plane when you:

  • Need explicit change control for exports/shares (audit requirements, security approvals, segregation of duties).
  • Run multiple NFS/SMB stacks or multiple config sources (hand-edited exports plus ZFS auto-sharing plus an orchestration system).
  • Use dataset replication across environments (prod ↔ staging) where share rules should not follow.
  • Have mixed OS behavior across your fleet (some nodes honor the properties; others ignore them).

The real trade: state locality vs governance

Share properties put share state next to the data. That’s elegant. It’s also a governance shortcut. The classic ops move is to keep
the “what is exported to whom” policy in one place and treat the filesystem as an implementation detail. Share properties flip that:
the filesystem becomes the policy store.

My opinionated rule: if you have more than one storage host, treat share properties as declarative metadata and have a
separate mechanism decide whether to honor them. You can still use them—but you don’t let a dataset property unilaterally punch holes
in your network.

Joke #1: “Auto-sharing is like auto-renewal subscriptions: convenient until you notice it renewed in a different country.”

The hidden sharp edge: inheritance

ZFS properties inherit down the dataset tree. That’s usually a feature. For sharing, it’s also a trap.
Set sharenfs=on on a parent dataset and you might accidentally export internal administrative datasets, snapshots mounted
elsewhere, or new child datasets created later by automation. People love inheritance until it inherits in the wrong direction.
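
A quick way to see the trap on a test box (tank/projects appears in later examples; the child dataset name here is hypothetical):

cr0x@server:~$ sudo zfs set sharenfs=on tank/projects
cr0x@server:~$ sudo zfs create tank/projects/scratch
cr0x@server:~$ zfs get -o name,property,value,source sharenfs tank/projects/scratch
NAME                   PROPERTY  VALUE  SOURCE
tank/projects/scratch  sharenfs  on     inherited from tank/projects

Nobody ever asked for tank/projects/scratch to be exported. It inherited the parent's intent the moment it was created.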

Another sharp edge: replication and “share drift”

When you replicate datasets, you can replicate properties too. Depending on flags and tooling, the destination may receive
sharenfs/sharesmb values that were meaningful at the source but are dangerous at the destination.
The failure mode is not always “it exports immediately.” Sometimes it’s worse: it sits there as latent configuration until a package
update or service restart suddenly begins honoring the properties. Congratulations, you just shipped a time bomb through replication.
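
If your OpenZFS version supports property overrides on receive (check zfs-receive(8) on your platform), one hedge is to exclude share properties at the destination. A sketch, assuming a hypothetical DR host dr01 and a nightly snapshot:

cr0x@server:~$ sudo zfs send -R tank/projects@nightly | ssh dr01 sudo zfs receive -x sharenfs -x sharesmb tank/recv/projects

On versions without -x/-o for receive, set the properties to off explicitly right after the receive completes, before any share service gets a chance to honor them.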

Control doesn’t mean hand-editing everything

The alternative to share properties is not “edit /etc/exports by hand and hope.” Control can be achieved with:

  • Config management generating exports and Samba shares from inventory and policy.
  • A wrapper that reads ZFS properties and generates service configs, but applies an allowlist and review.
  • Namespace separation: one parent dataset where sharing is allowed and all others default to “off.”

You want a system where the default state is boring and safe, and exceptions are deliberate.
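
A sketch of that boring-by-default layout, assuming one dedicated subtree is the only place sharing is permitted (tank/shared is a hypothetical name; the network is from earlier examples):

cr0x@server:~$ sudo zfs set sharenfs=off tank
cr0x@server:~$ sudo zfs set sharesmb=off tank
cr0x@server:~$ sudo zfs create tank/shared
cr0x@server:~$ sudo zfs create -o sharenfs='rw=@10.20.0.0/16' tank/shared/projects

Everything under tank now inherits "off" unless someone makes a deliberate, reviewable exception at creation time.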

A paraphrased idea from Werner Vogels: “You build it, you run it.” In storage terms: if you set the property, you own
the blast radius.

Facts & historical context that actually matters

  1. ZFS sharing started life in Solaris/illumos land. The tight integration with system share services is not an afterthought; it was the intended operator experience.
  2. NFS export configuration predates ZFS by decades. /etc/exports conventions were built for static filesystems, not per-dataset toggles and property inheritance.
  3. SMB on Solaris took a different path than Samba. illumos/Solaris historically shipped an in-kernel SMB server and service tooling; Linux typically relies on Samba userland.
  4. ZFS properties are designed for inheritance, cloning, and replication. That’s why sharing via properties feels natural—and why it can replicate surprises with the data.
  5. “Sharetab” style tracking existed to answer “what is exported?” quickly. Operators wanted a single source of truth for live shares before modern service discovery existed.
  6. NFSv4 changed the mental model. Pseudoroots and a single export point can hide or reveal dataset boundaries in ways that confuse teams that grew up on NFSv3.
  7. ACL semantics diverge between NFS and SMB. Even when both front the same dataset, the permission model translation is not symmetrical and can create “works for SMB, fails for NFS” puzzles.
  8. Idmapping has always been the silent killer. The share can be perfect; UID/GID or SID mapping issues will still make it look “down.”
  9. Linux OpenZFS kept the properties for compatibility. But “property exists” is not equal to “distribution enables auto-share by default.”

Practical tasks: commands, outputs, and decisions (12+)

Task 1: List which datasets claim they are shared

cr0x@server:~$ zfs get -r -o name,property,value,source sharenfs,sharesmb tank
NAME                 PROPERTY  VALUE  SOURCE
tank                 sharenfs  off    default
tank                 sharesmb  off    default
tank/projects        sharenfs  on     local
tank/projects        sharesmb  off    default
tank/projects/acme   sharenfs  on     inherited from tank/projects
tank/projects/acme   sharesmb  off    default
tank/backups         sharenfs  off    local
tank/backups         sharesmb  off    default

What it means: You see both the value and the SOURCE. Inherited sharing is the first thing to fear.
Decision: If a dataset is shared via inheritance unintentionally, set it explicitly:
zfs set sharenfs=off tank/projects/acme or fix the parent.

Task 2: Verify what the OS thinks is currently exported over NFS

cr0x@server:~$ exportfs -v
/tank/projects  10.20.0.0/16(sync,wdelay,hide,no_subtree_check,sec=sys,rw,root_squash)
/tank/backups   10.30.5.42(sync,wdelay,hide,no_subtree_check,sec=sys,ro,root_squash)

What it means: This is the live NFS export table (Linux NFS server). It may or may not match ZFS properties.
Decision: If zfs get sharenfs says “on” but exportfs doesn’t show it, your auto-share path
isn’t active. If exportfs shows exports you didn’t intend, you have config drift or an overly-helpful generator.

Task 3: Confirm the share service is running (NFS)

cr0x@server:~$ systemctl status nfs-server --no-pager
● nfs-server.service - NFS server and services
     Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled)
     Active: active (exited) since Thu 2025-12-26 09:10:12 UTC; 3h 1min ago
       Docs: man:rpc.nfsd(8)

What it means: NFS is up at the service level. If clients still can’t mount, move to exports, firewall, and idmapping.
Decision: If inactive, don’t debug ZFS properties yet—start the service and confirm ports.

Task 4: Check whether ZFS auto-share tooling is enabled (Linux example)

cr0x@server:~$ systemctl status zfs-share --no-pager
● zfs-share.service - ZFS dataset share service
     Loaded: loaded (/lib/systemd/system/zfs-share.service; enabled)
     Active: active (exited) since Thu 2025-12-26 09:10:10 UTC; 3h 1min ago

What it means: On many Linux setups, this unit is what attempts to honor sharenfs/sharesmb.
Decision: If disabled or missing, your properties are probably “paper configuration.” Decide whether to enable it or
move fully to explicit service configs.

Task 5: Force a re-share after property changes

cr0x@server:~$ sudo zfs share -a
cannot share 'tank/backups': property 'sharenfs' is off
tank/projects shared

What it means: The share command applies the current property state. Some platforms do this automatically on set; some
require an explicit zfs share or service restart.
Decision: If sharing doesn’t apply on property change, bake “re-share” into your runbook and automation.

Task 6: Inspect the exact sharenfs option string (not just on/off)

cr0x@server:~$ zfs get -o name,property,value sharenfs tank/projects
NAME           PROPERTY  VALUE
tank/projects  sharenfs  rw=@10.20.0.0/16,root_squash,sec=sys

What it means: This is where convenience becomes configuration: option strings can express policy.
Decision: If you allow option strings, standardize them. Otherwise teams will invent their own dialects and you’ll
spend weekends translating.

Task 7: Validate dataset mountpoints match what you think you are exporting

cr0x@server:~$ zfs get -o name,property,value mountpoint tank/projects tank/projects/acme
NAME                PROPERTY    VALUE
tank/projects       mountpoint  /tank/projects
tank/projects/acme  mountpoint  /tank/projects/acme

What it means: Exports typically use paths. If mountpoints were changed, renamed, or legacy-mounted, you may export
the wrong path.
Decision: If mountpoints don’t align with your export naming scheme, fix mountpoints or export explicitly by design.

Task 8: Confirm the dataset is actually mounted where you expect

cr0x@server:~$ mount | grep -E '^tank/projects'
tank/projects on /tank/projects type zfs (rw,xattr,posixacl)
tank/projects/acme on /tank/projects/acme type zfs (rw,xattr,posixacl)

What it means: If it’s not mounted, NFS export might exist but point to an empty directory (or fail).
Decision: If unmounted, check canmount, mountpoint, and ZFS mount service ordering.

Task 9: Check ZFS properties that often get confused with sharing

cr0x@server:~$ zfs get -o name,property,value canmount,readonly,acltype,xattr tank/projects
NAME           PROPERTY  VALUE
tank/projects  canmount  on
tank/projects  readonly  off
tank/projects  acltype   posixacl
tank/projects  xattr     sa

What it means: If readonly=on, writes fail and people blame NFS. If ACL/xattr settings differ, SMB/NFS
behavior diverges.
Decision: Align ACL/xattr expectations with protocol use cases. Don’t “fix” share problems by randomly changing ACLtype.

Task 10: From a client, confirm what the server is offering (NFS)

cr0x@client:~$ showmount -e nfs01
Export list for nfs01:
/tank/projects 10.20.0.0/16

What it means: The client can query the server's export list (mostly NFSv3-era tooling, but still useful).
Decision: If showmount returns nothing, suspect firewall/rpcbind/NFS versions or server-side export not applied.

Task 11: Confirm NFS version and mount options on the client

cr0x@client:~$ sudo mount -t nfs -o vers=4.1 nfs01:/tank/projects /mnt/projects
cr0x@client:~$ nfsstat -m | grep -A3 '/mnt/projects'
/mnt/projects from nfs01:/tank/projects
 Flags:	rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.20.4.18

What it means: You see the negotiated NFS version and sizes. If the mount silently falls back to v3 or uses UDP, that’s
a clue.
Decision: If performance is bad, confirm you’re on TCP and a modern NFS version. If permissions are weird, check sec=.

Task 12: Confirm idmapping and “nobody:nogroup” problems early

cr0x@server:~$ getent passwd 1001
acmebuild:x:1001:1001:ACME Build User:/home/acmebuild:/usr/sbin/nologin
cr0x@server:~$ ls -ln /tank/projects/acme | head
total 3
drwxr-x---  5 1001 1001 5 Dec 26 09:11 build

What it means: Numeric ownership matches an identity. If clients see UID 1001 but server doesn’t, permissions break.
Decision: If getent can’t resolve IDs on either side, fix directory services/SSSD/LDAP before rewriting exports.

Task 13: Check whether SMB shares exist at the service layer (Samba)

cr0x@server:~$ testparm -s | sed -n '1,120p'
Load smb config files from /etc/samba/smb.conf
Loaded services file OK.
Server role: ROLE_STANDALONE

[projects]
	path = /tank/projects
	read only = No
	browsable = Yes

What it means: SMB sharing is controlled by Samba config on Linux. ZFS sharesmb may not be wired in.
Decision: If you expected ZFS to create Samba shares automatically but they don’t appear here, stop assuming. Pick one
control plane and wire it explicitly.

Task 14: Verify live SMB shares from Samba’s view

cr0x@server:~$ smbcontrol all reload-config
cr0x@server:~$ smbclient -L localhost -N | sed -n '1,80p'
	Sharename       Type      Comment
	---------       ----      -------
	projects        Disk
	IPC$            IPC       IPC Service (Samba Server)
SMB1 disabled -- no workgroup available

What it means: You see whether the share is actually being served. “Configured” and “live” are different states.
Decision: If it’s absent, fix Samba config/service first; don’t keep toggling sharesmb hoping it will spawn Samba stanzas.

Task 15: Detect property replication surprises

cr0x@server:~$ zfs get -o name,property,value,source sharenfs,sharesmb -r tank/recv
NAME                 PROPERTY  VALUE  SOURCE
tank/recv            sharenfs  off    default
tank/recv            sharesmb  off    default
tank/recv/projects   sharenfs  on     received
tank/recv/projects   sharesmb  off    received

What it means: SOURCE=received is a giant neon sign: replication delivered this state.
Decision: If this is a non-prod host, set a policy: overwrite received share properties on arrival, or never honor them automatically.

Task 16: Confirm whether a dataset is being “shared” but not actually reachable (firewall/ports)

cr0x@server:~$ sudo ss -lntp | grep -E '(:2049|:111)\s'
LISTEN 0      64          0.0.0.0:2049      0.0.0.0:*    users:(("rpc.nfsd",pid=1432,fd=6))
LISTEN 0      128         0.0.0.0:111       0.0.0.0:*    users:(("rpcbind",pid=812,fd=4))

What it means: NFS and rpcbind are listening. If they aren’t, “exported” doesn’t matter.
Decision: If ports aren’t listening, fix services. If they are listening but clients can’t connect, check network ACLs.

Fast diagnosis playbook: what to check first/second/third

This is the “stop guessing” sequence when someone says “NFS is down” or “SMB share disappeared” after a property change.
The goal is to isolate whether you have a control-plane mismatch, a service-plane issue, or a
data/permission problem.

First: is the dataset supposed to be shared?

  • Run zfs get -r sharenfs,sharesmb for the dataset tree.
  • Look at SOURCE: local vs inherited vs received.
  • Confirm mountpoint and canmount.

If the property is inherited and you didn’t expect it, that’s your bug. Fix the hierarchy, not the symptom.

Second: is the OS actually exporting/serving it?

  • NFS: exportfs -v and ss -lntp | grep 2049.
  • SMB: testparm -s, then smbclient -L localhost -N.
  • If you rely on ZFS auto-share: check systemctl status zfs-share (or equivalent on your platform).

If ZFS says “shared” but the service layer disagrees, you have an integration problem, not a permissions problem.

Third: can a client negotiate and mount?

  • NFS: client mount with explicit vers=4.1 and check nfsstat -m.
  • SMB: attempt a local loopback connection first, then a remote one, to split auth from network.

If local works but remote fails, you’re in firewall/routing/SELinux territory. If remote mounts but access is denied, you’re in
identities/ACLs/export options territory.

Fourth: is performance the complaint, not availability?

  • Check negotiated NFS rsize/wsize and protocol.
  • Check server CPU saturation (SMB encryption/signing can be CPU-heavy).
  • Check ZFS-level health: pool status and latency.

Don’t “tune NFS” until you confirm ZFS isn’t already choking on sync writes, a degraded vdev, or an exhausted ARC.
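
If you want the first three checks as a single copy-paste, a rough sketch (the dataset name is a placeholder; run as root so ss can show process names):

#!/bin/sh
# Rough triage sketch: ZFS intent -> service layer -> reachability.
DS=tank/projects                          # dataset under suspicion (placeholder)
MNT=$(zfs get -H -o value mountpoint "$DS")

# 1) What does ZFS claim?
zfs get -o name,property,value,source sharenfs,sharesmb,mountpoint,canmount "$DS"

# 2) What is the service layer actually doing?
exportfs -v | grep -F "$MNT"
testparm -s 2>/dev/null | grep -B2 -A3 -F "$MNT"

# 3) Is anything even listening?
ss -lntp | grep -E '(:2049|:111|:445)\s'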

Three corporate mini-stories from the share-property trenches

Mini-story 1: The incident caused by a wrong assumption

A mid-sized company had two storage nodes: one for production and one for disaster recovery tests. The team replicated datasets nightly.
They used ZFS properties heavily because it made provisioning feel clean: create dataset, set quota, set sharenfs, done.

During a routine DR test, a developer mounted a project share from the DR node “just to check a build artifact.” It mounted. That was
the first surprise. The second surprise was that write access also worked, because the export options were inherited and permissive.

Nobody had intended the DR node to export anything beyond a limited admin subnet. But the replicated datasets had sharenfs
set as a received property. And the DR node had recently been updated, which enabled a sharing unit that began honoring those properties
on boot. No one connected the dots because the change landed in the OS layer, not in the ZFS layer.

The “incident” wasn’t data loss, but it was a governance failure: DR was now a parallel production share target with stale identity
assumptions and looser network controls. The postmortem finding was painfully ordinary: they assumed “properties replicate but don’t do
anything here.” That assumption was true until it wasn’t.

The fix was also ordinary and effective: they added a boot-time job on the DR node that forcibly sets sharenfs=off and
sharesmb=off for all received datasets, then applies an allowlist for the few administrative exports they actually needed.
Not elegant. Correct.
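
A minimal sketch of that kind of enforcement job, assuming received datasets live under tank/recv and the allowlist is just a file of dataset names (path and format are hypothetical):

#!/bin/sh
# Turn off share properties that arrived via replication, unless allowlisted.
ALLOWLIST=/etc/zfs/share-allowlist            # hypothetical: one dataset name per line

zfs get -H -r -o name,property,source sharenfs,sharesmb tank/recv |
while read -r name prop source; do
    [ "$source" = "received" ] || continue
    grep -qxF "$name" "$ALLOWLIST" 2>/dev/null && continue
    zfs set "$prop=off" "$name"
done

Run it from a boot-time unit and after every receive. It is dull and idempotent, which is exactly the point.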

Mini-story 2: The optimization that backfired

Another organization wanted “instant shares” for an internal analytics platform. Their automation created per-team datasets on demand.
To avoid touching /etc/exports for every team, they standardized on setting sharenfs at dataset creation with a
network-based rule, and they enabled automatic share processing.

It worked beautifully for months. Then a new parent dataset was introduced: tank/analytics. Someone set
sharenfs=rw=@10.50.0.0/16 at the parent to reduce boilerplate, expecting children to be more restrictive when needed.
That’s the optimization: lean on inheritance.

A quarter later, a compliance review found that a handful of datasets intended for a restricted subnet were exportable from wider
networks. Not because anyone set them wrong locally, but because those child datasets never overrode the inherited property.
Their creators assumed “no explicit sharenfs means not shared.” In ZFS, “no explicit” often means “inherited.”

The remediation was messy: audit every dataset, explicitly set sharenfs=off on restricted trees, and refactor automation to
require an explicit share declaration at creation time (even if it’s “off”). The lesson was not “inheritance is bad.” The lesson was:
inheritance is a sharp tool, and you don’t hand it to every pipeline without guardrails.
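
In practice that remediation can be as small as making provisioning pass an explicit value every time (the team dataset name is hypothetical):

cr0x@server:~$ sudo zfs create -o sharenfs=off tank/analytics/team-blue

Now "not shared" is a recorded local decision instead of an accident of whatever the parent happens to carry.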

Joke #2: “Inheritance is great—right up until your exports inherit the same enthusiasm as your interns.”

Mini-story 3: The boring but correct practice that saved the day

A large enterprise ran mixed workloads: Linux NFS for compute clusters, SMB for office workflows, and occasional application teams
demanding exceptions. They chose a policy that sounded unsexy: ZFS share properties are allowed, but they are not authoritative.

Concretely: every dataset could carry sharenfs or sharesmb as metadata, but the storage hosts did not auto-apply
them. Instead, a small internal tool ran nightly and on demand, reading those properties, validating them against an allowlist, and then
generating exports/Samba snippets. All changes were reviewed like code.

One day, replication brought in a dataset tree from a partner environment. The received properties included sharenfs=rw=@0.0.0.0/0
(yes, really). If the hosts had auto-shared, it would have become a security incident. Instead, the generator flagged it as invalid,
refused to apply it, and surfaced an alert in the team’s chat.

The team still got the operational convenience of “share intent follows the dataset.” But they kept policy enforcement in one place.
The practice was boring, and it prevented a very exciting night.

Common mistakes: symptoms → root cause → fix

1) “We set sharenfs=on, but clients can’t mount.”

  • Symptom: zfs get sharenfs shows “on”; client mount fails with “access denied” or “no route to host.”
  • Root cause: Auto-share service not enabled, or NFS service not running, or firewall blocks 2049/111.
  • Fix: Verify systemctl status zfs-share (or platform equivalent) and systemctl status nfs-server.
    Confirm listening ports with ss. Then verify exportfs -v shows the path.

2) “A dataset that should be private is exported.”

  • Symptom: exportfs -v lists unexpected export; zfs get shows inherited sharing.
  • Root cause: sharenfs set on a parent dataset; child inherited it silently.
  • Fix: Set sharenfs=off on the parent, or explicitly on the private child datasets. Add an audit check to prevent parent-level sharing unless it’s a deliberate “share boundary.”

3) “After replication, shares behave differently on the destination.”

  • Symptom: Destination unexpectedly exports, or doesn’t export until after an upgrade/reboot.
  • Root cause: Received properties include share settings; destination now honors them due to service changes.
  • Fix: Decide a replication policy: strip/override share properties on receive, or never run auto-share on receivers. Use the SOURCE column from zfs get to find received values.

4) “SMB works but NFS permissions are wrong (or vice versa).”

  • Symptom: Same dataset, different access results by protocol.
  • Root cause: ID mapping mismatch, ACL expectations mismatch, or export options like root_squash misunderstood.
  • Fix: Validate UIDs/GIDs with getent on server and client. Confirm ZFS ACL/xattr properties. Review export options and Samba vfs modules if used.

5) “We turned off sharenfs but it’s still exported.”

  • Symptom: Property says off; exportfs -v still shows export.
  • Root cause: Export exists in /etc/exports or generated config outside ZFS; or unshare wasn’t applied.
  • Fix: Find the authoritative source. Remove the export from explicit configs, then run exportfs -ra. If ZFS orchestrates, run zfs unshare/zfs share -a as appropriate.

6) “Share name collisions or ugly paths show up in SMB browse lists.”

  • Symptom: Duplicate share names, weirdly named shares, or wrong paths.
  • Root cause: Auto-generated SMB shares from dataset names clash with human naming conventions; multiple datasets map to similar share names.
  • Fix: Don’t auto-generate SMB share names from dataset paths unless you control naming strictly. Prefer explicit Samba share stanzas with curated names, and treat sharesmb as metadata only.

7) “Performance tanked after ‘tuning’ share options.”

  • Symptom: Latency spikes, client timeouts, higher server CPU, or angry database logs.
  • Root cause: Enabling sync-heavy semantics without intent (or disabling them without understanding), SMB signing/encryption CPU cost, or bad NFS rsize/wsize negotiation.
  • Fix: Confirm the workload’s durability requirements. Measure server CPU and ZFS latency. Pin NFS version and transport where appropriate. Don’t cargo-cult mount options from the internet.

Operational patterns: how to run this without regret

Pattern A: “Properties as metadata, policy elsewhere” (recommended for most orgs)

You allow teams to set sharenfs/sharesmb properties, but you do not auto-apply them. A controlled process
reads them and generates exports/Samba config after validating:

  • Dataset is in an allowed subtree (e.g., tank/projects yes, tank/system no).
  • Options are from a safe allowlist (no world-open exports, no unexpected security modes).
  • Change is reviewed, audited, and applied predictably.

This pattern keeps the “share intent follows the dataset” benefit without letting one property become a network policy engine.
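
A rough sketch of such a generator pass, assuming tank/projects is the only allowed subtree, only on/off intent is honored (free-form option strings are deliberately ignored), and your nfs-utils reads fragments from /etc/exports.d:

#!/bin/sh
# Generate an exports fragment from ZFS share intent, under a hard policy.
ALLOWED_SUBTREE=tank/projects
ALLOWED_NET=10.20.0.0/16
OUT=/etc/exports.d/zfs-intent.exports

{
  zfs get -H -r -t filesystem -o name,value sharenfs "$ALLOWED_SUBTREE" |
  while read -r name value; do
      [ "$value" = "off" ] && continue
      mnt=$(zfs get -H -o value mountpoint "$name")
      case "$mnt" in /*) ;; *) continue ;; esac   # skip legacy/none mountpoints
      printf '%s %s(rw,root_squash,no_subtree_check)\n' "$mnt" "$ALLOWED_NET"
  done
} > "$OUT"

exportfs -ra

The dataset still carries the intent; the policy (who, which network, which options) lives in one reviewable place.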

Pattern B: “One host, one purpose, auto-share is fine”

If you’re running a single storage appliance or a tightly controlled environment where ZFS is the share control plane by design (common
in illumos-based appliances), auto-share can be perfectly sane. But you still need:

  • Strict dataset hierarchy boundaries: only one subtree is allowed to be shared.
  • Explicit off set on sensitive parents to prevent inheritance accidents.
  • Audit jobs to detect unexpected “on” values, especially inherited/received ones.
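
The audit can be one cron-able line, for example flagging any dataset outside the allowed subtree that claims to be shared (subtree name reused from earlier examples; adjust the pattern to your layout):

cr0x@server:~$ zfs get -H -r -o name,property,value,source sharenfs,sharesmb tank | awk -F'\t' '$3 != "off" && $1 !~ /^tank\/projects(\/|$)/ {print "UNEXPECTED:", $0}'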

Pattern C: “Explicit service configs only”

This is the maximal-control approach: you ignore share properties and manage NFS/SMB through service configs and config management.
It’s viable, but you lose some nice introspection. If you take this route, at least keep the properties honest: set them explicitly to
off everywhere so they can never surprise you, and record intended exports in CMDB/inventory.

Where teams get burned: mixing patterns

The worst setups are hybrids with no rules:

  • Some exports in /etc/exports, some via ZFS auto-share.
  • Shares created manually in Samba plus ZFS properties that “sometimes” create shares.
  • Automation that sets properties but nobody knows whether they’re applied.

If you remember one thing: pick one authoritative control plane per protocol. Everything else is metadata.

Security stance: assume share properties are not access control

Treat share options like NFS export rules as one layer. Your real security is in network segmentation, host firewalls,
identity management, and permissions/ACLs. If you are depending on “nobody will set sharenfs wrong,” you’ve already lost.

Checklists / step-by-step plan

Checklist 1: Safely introducing sharenfs in an existing environment

  1. Inventory current exports: capture exportfs -v output and store it.
  2. Inventory ZFS share properties: zfs get -r -o name,property,value,source sharenfs,sharesmb.
  3. Define a share boundary: pick one parent dataset that is allowed to have shares beneath it.
  4. Set safe defaults: set sharenfs=off and sharesmb=off on parents that must never share.
  5. Decide control plane: auto-share on hosts or generator tool or explicit configs only.
  6. Implement auditing: alert on sharenfs=on outside the boundary, and alert on source=received.
  7. Test with one dataset: enable share, confirm it appears in service layer, mount from a test client.
  8. Roll out gradually: migrate exports one-by-one; avoid “flip parent property and pray.”

Checklist 2: Handling replication to a non-authoritative environment (DR/staging)

  1. On receiver, disable auto-share unless you have a good reason to enable it.
  2. After every receive, scan for received share props: zfs get -r -o name,property,value,source sharenfs,sharesmb.
  3. Enforce policy: automatically set sharenfs=off/sharesmb=off where shares are not allowed.
  4. Document exceptions: the few datasets that must be exportable in DR (usually none) should be explicit and reviewed.

Checklist 3: Debugging a reported outage (NFS/SMB)

  1. Confirm intent: is the dataset supposed to be shared? Check property and inheritance.
  2. Confirm mount state: dataset mounted, correct mountpoint.
  3. Confirm service layer: exportfs -v or smbclient -L from server.
  4. Confirm network reachability: listening ports and firewall rules.
  5. Confirm identity mapping: getent, ownership, ACL behavior.
  6. Only then tune performance or touch arcane options.

FAQ

1) Should I use sharenfs/sharesmb in production?

Yes, but only if you are explicit about the control plane. For multi-host environments, use them as metadata and apply policy through a
controlled generator or allowlisted automation.

2) Why does “zfs set sharenfs=on” not immediately export on my Linux host?

Because the property is not the export engine. You need the integration layer (often a systemd unit like zfs-share) and an
NFS server running. Without that, it’s just a stored property.

3) Can I express NFS export options inside sharenfs?

Often yes, depending on platform integration. But once you do, you’ve created a second exports language. Standardize option strings, or
forbid free-form options and only allow “on/off” plus centrally managed policy.

4) How do I tell if a share property is inherited or received?

Use zfs get -o name,property,value,source sharenfs,sharesmb. If source is inherited or
received, treat it as higher risk and validate it.

5) Does turning off sharenfs guarantee the dataset is not exported?

No. It only affects the ZFS-controlled export path. The dataset could still be exported through explicit /etc/exports or
other tooling. Always confirm with exportfs -v.

6) Is sharesmb a good idea on Linux with Samba?

Usually not as an authoritative control plane, because Samba is configured through its own files and tooling. Use sharesmb
as a hint/metadata unless you have a proven integration that generates Samba config reliably and safely.

7) What’s the biggest security risk with share properties?

Inheritance and replication. A parent property or a received property can export more than intended, and it can become active later
when services change. Audit for inherited/received sharing and enforce boundaries.

8) How do I prevent accidental sharing of new child datasets?

Don’t set sharing “on” at a broad parent unless that parent is a deliberate export boundary. Prefer explicit sharing per dataset, and
set sharenfs=off on sensitive parents as a guardrail.

9) Why does SMB browsing show odd shares even when users can’t access them?

Browsing reflects advertised share configuration, not permission success. You may be publishing shares automatically without matching
ACLs/identity mapping. Fix the share definition and the permission model; don’t rely on “they can’t access it anyway.”

10) What’s the simplest safe approach for a small team?

Keep sharenfs/sharesmb off by default, and manage NFS/SMB explicitly. If you later adopt properties, do it only
within one dedicated subtree and with an audit job.

Conclusion: next steps you can do this week

  1. Audit now: run zfs get -r -o name,property,value,source sharenfs,sharesmb and find inherited/received surprises.
  2. Confirm the truth: compare ZFS intent with live service state using exportfs -v and Samba tooling.
  3. Pick a control plane: either “ZFS auto-share is authoritative” (rare) or “service configs are authoritative” (common) or “properties are metadata + generator” (best at scale).
  4. Draw boundaries: decide which dataset subtree is allowed to be shared, and lock down the rest explicitly.
  5. Write the runbook: include the fast diagnosis steps and the re-share/reload actions your platform requires.

ZFS gives you elegant primitives. Share properties can be part of that elegance, but production doesn’t reward elegance unless it’s also
legible, audited, and boring under stress. Choose convenience where it’s reversible. Choose control where it isn’t.
