Debian 13: Permission denied on bind mounts — fix UID/GID mapping the clean way (case #41)

You upgrade a host to Debian 13, redeploy a container, and suddenly your “perfectly normal” bind mount turns into a brick:
Permission denied on read, write, mkdir, sometimes even on ls. The logs don’t help. Your colleagues suggest
chmod -R 777 like it’s a vitamin supplement.

This is usually not a “permissions” problem. It’s an identity problem. Bind mounts don’t translate ownership; they faithfully expose
the host filesystem to whatever process is inside the target namespace. If the UID/GID mapping doesn’t line up, you get denial—correctly.
The clean fix is to align UID/GID mapping (or deliberately translate it with idmapped mounts), not to start a chmod fire in production.

Fast diagnosis playbook

When the page wakes you up at 02:13, you don’t need philosophy. You need a short, reliable sequence that finds the cause quickly.
Here’s the fastest path I’ve found to the truth on Debian 13 hosts running containers and/or systemd mounts.

1) Confirm what is failing: kernel permissions, namespace mapping, or MAC?

  • First: reproduce on host vs inside the container. If it works on host but fails in container, it’s almost always mapping or LSM.
  • Second: check ownership numeric IDs on the mount path (stat), not names. Names lie; numbers don’t.
  • Third: check whether you’re rootless, userns-remapped, or using idmapped mounts. Mapping determines whether UID 1000 means “you” or “someone else”.
  • Fourth: quickly eliminate AppArmor/SELinux as a confounder (even if you “don’t use it”).

2) Decide which fix class you need

  • Same host, same container: align UID/GID inside container to match host files, or map host IDs intentionally using idmapped mounts.
  • Multi-host / NFS: standardize numeric UIDs/GIDs via a directory service (LDAP/SSSD) or explicit UID assignment; don’t “fix” each node by hand.
  • Kubernetes hostPath: treat it as “the node’s filesystem”. Use consistent IDs, fsGroup, or a controlled init container, not random chmods.
  • Rootless: check /etc/subuid and /etc/subgid, and verify the shifted mapping covers the IDs you need.

3) Validate with one definitive test

Create a file from inside the container and verify its UID/GID on the host. Then reverse it: create on host and write from container.
If the IDs don’t match expectations, stop adjusting mode bits. You’re debugging identity, not access bits.
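The two-way test above can be sketched as a small script. This is a sketch under assumptions: podman, a running container named appctr, and a host directory bind-mounted at /data inside it — all of those names and paths are placeholders for your environment.

```shell
#!/bin/sh
# Two-way identity test (sketch). Assumes podman, a running container
# "appctr", and HOST_DIR bind-mounted at /data inside it -- placeholders.
set -eu
HOST_DIR="${HOST_DIR:-/srv/app/data}"
CTR="${CTR:-appctr}"

if command -v podman >/dev/null 2>&1 && podman container exists "$CTR" 2>/dev/null; then
  # Direction 1: create inside the container, read numeric IDs on the host.
  podman exec "$CTR" sh -c 'touch /data/.from_container'
  stat -c 'container-created: uid=%u gid=%g' "$HOST_DIR/.from_container"

  # Direction 2: create on the host, try to append from the container.
  touch "$HOST_DIR/.from_host"
  podman exec "$CTR" sh -c 'echo x >> /data/.from_host' \
    && echo "container write: ok" \
    || echo "container write: DENIED (identity, not mode bits)"
  RESULT=ran
else
  RESULT="skipped (podman or container not available here)"
fi
echo "two-way test: $RESULT"
```

If direction 1 lands on the host with a UID you didn’t expect, or direction 2 is denied while the mode bits look fine, you have your verdict: mapping, not permissions.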

What bind mounts really do (and what they never will)

A bind mount is brutally literal. It takes a directory (or file) from one place in the filesystem tree and makes it appear somewhere else.
That’s it. No ownership translation. No permission mediation beyond what the kernel already enforces. No friendly rename of users and groups
just because you moved the path under /var/lib/containers.

This literalness is why bind mounts are fast and predictable. It’s also why bind mounts become a recurring incident generator when you mix:
rootless containers, user namespaces, multi-host storage, and “helpful” defaults from orchestrators.

The mental model that prevents bad fixes

The kernel decides access using numeric IDs (UID/GID), mode bits, ACLs, and LSM policy. A process inside a container is still a process.
If that process has UID 1000 inside its namespace, the kernel may treat it as UID 100000 outside, depending on the mapping. If the directory
on the host is owned by UID 1000, and the process is effectively UID 100000, it is not the owner. The write fails. Correctly.

If you “fix” this by chmod -R 777, you didn’t solve the identity mismatch. You just gave up on security and made future
debugging worse. Also, your auditors will start using words like “systemic”.

Joke #1: “chmod 777” is like turning off the smoke alarm because it’s loud. You’ll sleep better right up until you don’t.

Interesting facts & history that explain today’s pain

  1. UIDs and GIDs are numbers first. Usernames are lookup sugar layered on top (traditionally via /etc/passwd or NSS).
  2. Bind mounts arrived long before containers. They were a filesystem trick for chroots and layout changes; containers inherited them as a volume primitive.
  3. User namespaces are relatively “new Linux”. They existed for years before becoming mainstream, and some distros enabled them cautiously due to security history.
  4. Rootless containers made the mismatch visible. In rootful mode, UID 0 often bulldozes through problems (until it hits NFS root-squash or LSM rules).
  5. NFS has always punished sloppy identity management. For classic NFS, numeric UID/GID must match across clients or you get “works on my node” ownership chaos.
  6. idmapped mounts are a kernel feature, not a container feature. They let the kernel translate ownership per-mount without changing on-disk ownership.
  7. ACLs can override “looks fine” mode bits. A directory can be drwxrwx--- and still deny access due to an ACL entry you forgot existed.
  8. Overlay filesystems complicate perceived ownership. With layered filesystems, you might see one ownership in the container layer and another on the host backing store.

The core cause: UID/GID mismatch across namespaces

The recurring pattern behind “Permission denied on bind mounts” is simple:
the process’s effective UID/GID as seen by the kernel does not own the files it’s trying to access.
And bind mounts don’t change that relationship—they expose it.

Where Debian 13 gets blamed (often unfairly)

Debian 13 is not “breaking bind mounts”. What changes in modern systems is the prevalence of:
rootless Podman/Docker, systemd mount units, stricter defaults, and user namespace setups. Any small drift in UID/GID coordination
becomes visible when you upgrade components and redeploy.

Typical mismatch scenarios

  • Rootless container + host directory owned by your real UID. The container’s “UID 0” maps to a high host UID range (from /etc/subuid).
  • Different UID allocation across hosts. User app is UID 1001 on node A and UID 1002 on node B. NFS happily stores only numbers.
  • Container image user not matching host service user. Image runs as UID 1000, host directory owned by UID 2000, and you assumed names mattered.
  • Rootful container + NFS root-squash. “But it’s root in the container” meets “root becomes nobody on the server”.
  • ACLs or LSM policy deny despite correct mode bits. You can stare at ls -l all day and never see an AppArmor deny.

Reliability engineering quote (paraphrased idea): Werner Vogels has said you build systems expecting failure, not pretending it won’t happen.
Identity mismatches are a form of “expected failure” in multi-tenant Linux; design for it.

Practical tasks: commands, outputs, and decisions (12+)

These are the commands I actually run. Each task includes: the command, what the output tells you, and the decision you make from it.
Assume the bind-mounted path in question is /srv/app/data on the host and appears as /data in the container.

Task 1: Confirm it’s a bind mount (not a different filesystem)

cr0x@server:~$ findmnt -T /srv/app/data -o TARGET,SOURCE,FSTYPE,OPTIONS
TARGET        SOURCE         FSTYPE OPTIONS
/srv/app/data /dev/sdb1[/data] ext4   rw,relatime

Meaning: You’re on ext4 and not looking at an NFS/overlay surprise.
Decision: If this shows nfs, jump to NFS identity checks. If it shows overlay, treat it as container-layer behavior.

Task 2: Inspect mount propagation and bind-ness explicitly

cr0x@server:~$ findmnt -o TARGET,SOURCE,FSTYPE,OPTIONS,PROPAGATION | grep -E '(/srv/app/data|/data)'
/srv/app/data  /dev/sdb1[/data] ext4  rw,relatime shared

Meaning: Propagation can affect whether mounts appear inside containers.
Decision: If propagation is private and you rely on nested mounts, you’ll get “missing data” issues, not permission denied. Different fire.

Task 3: Check numeric ownership and mode bits (host)

cr0x@server:~$ stat -c 'path=%n uid=%u gid=%g mode=%a owner=%U group=%G' /srv/app/data
path=/srv/app/data uid=2000 gid=2000 mode=750 owner=app group=app

Meaning: The kernel will require UID 2000 or GID 2000 (or ACL) for writes.
Decision: If your container runs as UID 1000, it’s going to fail. Fix mapping, don’t chmod.

Task 4: Identify the process identity inside the container

cr0x@server:~$ podman exec -it appctr sh -lc 'id && umask'
uid=1000(app) gid=1000(app) groups=1000(app)
0022

Meaning: Inside the container, the process is UID/GID 1000.
Decision: You must align this with host ownership (UID 2000) or translate with idmapped mounts.

Task 5: Prove the failure with a controlled write (container)

cr0x@server:~$ podman exec -it appctr sh -lc 'touch /data/.perm_test && echo ok'
touch: cannot touch '/data/.perm_test': Permission denied

Meaning: It’s not your app; it’s filesystem permissions. Good—now you can stop blaming the code.
Decision: Continue to mapping diagnostics instead of app logs.

Task 6: Compare with a host-side write as the expected user

cr0x@server:~$ sudo -u app touch /srv/app/data/.host_test && ls -ln /srv/app/data/.host_test
-rw-r--r-- 1 2000 2000 0 Dec 30 12:01 /srv/app/data/.host_test

Meaning: UID 2000 works, and files are owned as expected.
Decision: Either run the container as UID 2000, or map its UID to 2000.

Task 7: If rootless, inspect subordinate ID ranges

cr0x@server:~$ grep -E '^cr0x:' /etc/subuid /etc/subgid
/etc/subuid:cr0x:100000:65536
/etc/subgid:cr0x:100000:65536

Meaning: Rootless user namespaces map container IDs into 100000+ on the host.
Decision: If the host directory is owned by UID 2000, a rootless container mapped to 100000 won’t match. You need idmapped mount or host ownership change.

Task 8: Inspect the user namespace mapping of the container process

cr0x@server:~$ pid=$(podman inspect -f '{{.State.Pid}}' appctr); echo "$pid"; sudo cat /proc/$pid/uid_map
24188
         0     100000      65536

Meaning: Container UID 0 is host UID 100000; container UID 1000 is host UID 101000.
Decision: Stop trying to chown the mount from inside the container; it won’t land on the host IDs you think it will.

Task 9: Check for ACLs that override mode bits

cr0x@server:~$ getfacl -p /srv/app/data | sed -n '1,20p'
# file: /srv/app/data
# owner: app
# group: app
user::rwx
group::r-x
other::---

Meaning: No ACL surprises here.
Decision: If you see an ACL denying writes, fix ACLs; don’t touch UID/GID yet.

Task 10: Check AppArmor denials (common on Debian)

cr0x@server:~$ sudo journalctl -k -g DENIED --no-pager | tail -n 3
Dec 30 12:03:14 server kernel: audit: type=1400 apparmor="DENIED" operation="open" class="file" profile="containers-default" name="/srv/app/data/" pid=24188 comm="myapp"

Meaning: This is not UID/GID. AppArmor is blocking.
Decision: Fix the AppArmor profile or label; don’t churn ownership. If there are no denials, go back to mapping.

Task 11: Confirm the mount options (nosuid/nodev/noexec are not “permission denied”, but can look like it)

cr0x@server:~$ findmnt -T /srv/app/data -o OPTIONS
OPTIONS
rw,relatime

Meaning: No noexec weirdness.
Decision: If your app fails to execute binaries from the mount, and you see noexec, that’s your fix.

Task 12: Verify the container’s effective user matches expectations (rootful vs rootless)

cr0x@server:~$ podman info --format '{{.Host.Security.Rootless}}'
true

Meaning: You are rootless. That’s great for safety and terrible for assumptions.
Decision: Plan fixes that work with userns mapping, not against it.

Task 13: Reproduce ownership mismatch with numeric listing (container)

cr0x@server:~$ podman exec -it appctr sh -lc 'ls -ldn /data; ls -ln /data | head'
drwxr-x--- 2 2000 2000 4096 Dec 30 12:01 /data
-rw-r--r-- 1 2000 2000 0 Dec 30 12:01 .host_test

Meaning: The container sees UID 2000 on files, but it runs as UID 1000. That gap is the problem.
Decision: Choose between: (a) change container user to 2000, (b) change host ownership to mapped ID, (c) idmapped mount.

Task 14: Confirm NFS root-squash when applicable

cr0x@server:~$ findmnt -T /srv/app/data -o FSTYPE,SOURCE
FSTYPE SOURCE
nfs4   nfs01:/exports/appdata
cr0x@server:~$ sudo -u root touch /srv/app/data/.root_touch 2>&1 | tail -n 1
touch: cannot touch '/srv/app/data/.root_touch': Permission denied

Meaning: Likely root-squash or server-side export policy. Root on the client isn’t root on the server.
Decision: Fix identity on the NFS server (UID/GID alignment), or adjust export options deliberately. Don’t run everything as root to “fix” it.

Clean fixes you can defend in a review

There are three “good” families of solutions. Pick based on your operational reality: single host vs fleet, rootless vs rootful,
compliance requirements, and whether storage is local or networked.

Fix family A: Align the container runtime UID/GID with the host data

This is the simplest and most boring fix: if /srv/app/data is owned by UID 2000, run the container process as UID 2000.
It works across bind mounts, across most filesystems, and doesn’t require fancy kernel features.

With images that support it, set the runtime user explicitly. In Compose-like workflows you’d set user: "2000:2000".
For Podman:

cr0x@server:~$ podman run -d --name appctr --user 2000:2000 -v /srv/app/data:/data docker.io/library/alpine sleep 1d
...output...

Meaning: The process runs with the same UID/GID that owns the host directory.
Decision: Use this when you control the application’s UID and don’t need “container UID 0” semantics.

The catch: if you run across multiple hosts, you must keep UID 2000 consistent everywhere. That’s not optional. It’s the whole point.

Fix family B: Standardize UID/GID across the fleet (directory-backed identity)

This is what grown-up environments do: assign service accounts fixed numeric IDs, manage them centrally, and make storage consistent.
Local users created ad-hoc on each host will drift. Drift becomes outages.

If you can’t centralize, at least reserve a UID/GID range for service identities and enforce it in provisioning.
“We’ll remember to create user app with the same UID on every server” is not a process; it’s a prayer.
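A provisioning step that enforces this can be sketched as follows. The service name app and the IDs 2000:2000 are example values, not a convention; the point is that drift fails loudly instead of being silently “repaired”.

```shell
#!/bin/sh
# Sketch: enforce a fixed numeric identity for a service account during
# provisioning. "app" and 2000 are example values for illustration.
set -eu
SVC_USER="${SVC_USER:-app}"
SVC_UID="${SVC_UID:-2000}"
SVC_GID="${SVC_GID:-2000}"

current_uid=$(id -u "$SVC_USER" 2>/dev/null || echo missing)
if [ "$current_uid" = "$SVC_UID" ]; then
  STATUS="ok: $SVC_USER already has uid $SVC_UID"
elif [ "$current_uid" = "missing" ]; then
  # Create with explicit IDs; never let useradd pick "the next free number".
  STATUS="would run: groupadd -g $SVC_GID $SVC_USER && useradd -u $SVC_UID -g $SVC_GID -r $SVC_USER"
else
  # Drift detected: fail the provisioning run instead of patching it live.
  STATUS="DRIFT: $SVC_USER is uid $current_uid, expected $SVC_UID"
fi
echo "$STATUS"
```

In a real pipeline the DRIFT branch should exit nonzero and page the provisioning owner, not the on-call engineer six months later.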

Fix family C: Use idmapped mounts to translate ownership cleanly

idmapped mounts are the closest thing Linux has to “bind mount but translate IDs”. They let you mount a directory so that ownership
is mapped for that mount only—no changing on-disk ownership, no global UID renumbering.

This is especially attractive when:

  • You have existing data owned by one UID/GID, but the container image expects another.
  • You can’t chown (large datasets, compliance, shared mounts).
  • You’re running rootless and want the container to see “reasonable” ownership.

The exact plumbing varies by runtime and kernel support. The principle stays the same: define a mapping and apply it to the mount.
On modern Debian kernels, idmapped mounts are viable, but don’t assume every stack layer (systemd, container runtime, orchestration) will
do it for you automatically.
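As one concrete (and heavily hedged) example of the plumbing: util-linux mount(8) gained an X-mount.idmap option that applies an ID mapping to a bind mount. This sketch assumes a recent kernel, mount(8) with X-mount.idmap support (util-linux 2.39 or newer), a filesystem that permits idmapped mounts, and root; the paths and ID pair are examples.

```shell
#!/bin/sh
# Sketch only: idmapped bind mount via util-linux mount(8).
# Requires root, a recent kernel, util-linux >= 2.39, and filesystem
# support for idmapped mounts. Paths and ID values are examples.
set -eu
SRC="${SRC:-/srv/app/data}"      # on-disk data owned by uid/gid 2000
DST="${DST:-/mnt/app-idmapped}"

if [ "$(id -u)" -eq 0 ] && [ -d "$SRC" ]; then
  mkdir -p "$DST"
  # Present on-disk uid/gid 2000 as uid/gid 1000 to users of $DST.
  # Mapping format: type:id-in-mount:id-on-host:range.
  if mount --bind -o X-mount.idmap='u:1000:2000:1 g:1000:2000:1' "$SRC" "$DST"; then
    STATUS="mounted; $DST now shows $(stat -c 'uid=%u gid=%g' "$DST")"
  else
    STATUS="mount refused (missing kernel/util-linux/filesystem support)"
  fi
else
  STATUS="skipped (needs root and an existing $SRC)"
fi
echo "idmapped mount: $STATUS"
```

Container runtimes expose the same kernel feature differently (recent podman, for instance, has an idmap volume option); check your runtime’s documentation rather than assuming the syntax above carries over.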

What not to do (unless you like recurring incidents)

  • Don’t chmod 777 a shared data directory “to make it work”. You’ll break least privilege and still have identity drift.
  • Don’t chown terabytes of data during an incident unless you’ve measured it and can roll back. It’s slow, disruptive, and easy to get wrong.
  • Don’t “fix” by running everything as root. NFS will laugh, LSM will complain, and you’ll have new problems.

Joke #2: Every time someone suggests “just run it privileged”, a security engineer grows another grey hair and names it after you.

Three corporate-world mini-stories

1) Incident caused by a wrong assumption: “names match, so IDs must match”

A mid-sized company ran a stateful service in containers across a handful of Debian hosts. The service account was called app
everywhere. The team assumed that meant the same identity everywhere. It didn’t.

One node had been installed earlier and had a few extra local users. On that box, app ended up as UID 1002. On newer nodes,
it was UID 1001. The storage was an NFS export mounted at /srv/app/data, and NFS stored numeric ownership, not “the vibe of the username”.

The failure mode was subtle: the service worked on two nodes and failed on one with Permission denied when writing new files,
but could read old ones. Engineers spent hours poking at container settings, then at NFS mount options, then at “maybe Debian changed something”.

The breakthrough came from one person doing the unglamorous thing: ls -ln on the data directory from each node, plus id app.
They saw the mismatch instantly. The fix was equally boring: standardize UID/GID for app via their provisioning pipeline and
correct ownership on the NFS server side for the intended UID.

The lesson: user names are a UI. Numeric IDs are the contract.

2) Optimization that backfired: “avoid chown by switching to rootless”

Another org wanted to reduce the blast radius of container escapes. They flipped a chunk of their workloads to rootless containers.
Great move overall. But they treated it like a security-only change and didn’t run a storage compatibility check.

Their bind mounts pointed to host directories owned by service users with UIDs in the low thousands. Rootless mapping shifted container IDs
into the 100000+ range. Suddenly, workloads that previously wrote to /srv started failing. The team’s first reaction was to “optimize”:
remove group checks by widening permissions on directories. That made some workloads work and made audit findings worse.

They then tried a “quick” fix: recursively chown the host directories to the mapped host UID range. That worked for the containers but broke
a set of host-side maintenance tools that assumed the service user owned its own data. Now they had two universes of ownership, neither happy.

The eventual stable solution was to stop treating identity as incidental. They defined explicit UID/GID strategy for services, documented which
workloads were rootless, and used a combination of running containers with aligned UIDs and selective idmapped mount usage where legacy data
couldn’t move. The “optimization” was not rootless; it was the governance around it.

3) Boring but correct practice that saved the day: preflight checks in CI for storage identity

A larger platform team had a habit that looked silly until it wasn’t: every deployment pipeline had a preflight job that ran a container
with the intended runtime user, mounted the intended host path, and performed basic filesystem ops (mkdir, touch, fsync, rename).

The job produced a small artifact: the UID/GID seen by the container, the UID/GID on the host for the created file, and any kernel/LSM denials.
It wasn’t fancy. It was consistent. People occasionally complained it added minutes and “never finds anything”.

Then a wave of nodes got rebuilt with a slightly different identity provisioning order. A few service accounts shifted UID assignment. The preflight
caught it before production rollout. The fix was made in provisioning, not during an outage. No midnight calls, no emergency chmod, no chown of shared storage.

This is what reliability looks like: not heroism, just a process that makes the failure boring and early.
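A minimal version of such a preflight can be sketched like this. The image, runtime UID, and data path are placeholders a real pipeline would inject; the shape of the check — run as the intended UID, do the actual filesystem ops, fail early — is what matters.

```shell
#!/bin/sh
# Sketch of a CI preflight job: exercise basic filesystem ops as the
# intended runtime UID on the intended path. IMAGE, RUN_UID, and
# DATA_DIR are placeholders a pipeline would supply.
set -eu
DATA_DIR="${DATA_DIR:-/srv/app/data}"
RUN_UID="${RUN_UID:-2000}"
IMAGE="${IMAGE:-docker.io/library/alpine}"

if command -v podman >/dev/null 2>&1 && [ -d "$DATA_DIR" ]; then
  # Run the same ops the service needs (mkdir, touch, rename), as the
  # intended UID, against the real mount.
  if podman run --rm --user "$RUN_UID:$RUN_UID" -v "$DATA_DIR:/data" "$IMAGE" \
      sh -c 'id && mkdir /data/.pf && touch /data/.pf/t && mv /data/.pf/t /data/.pf/t2 && rm -r /data/.pf'
  then
    PREFLIGHT=ok
  else
    PREFLIGHT=failed
  fi
else
  PREFLIGHT="skipped (no podman or data dir on this runner)"
fi
echo "preflight: $PREFLIGHT"
```

A real job would also capture the host-side numeric ownership of anything the test creates and attach it as a build artifact, so a drifted node produces a diff you can read, not just a red X.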

Common mistakes: symptom → root cause → fix

1) Symptom: Works on host, fails in container with “Permission denied”

Root cause: Container UID/GID doesn’t match host ownership, often due to rootless userns mapping.
Fix: Run container as the host directory owner UID/GID, or use idmapped mounts, or adjust host ownership to the mapped ID intentionally.

2) Symptom: Rootful container still can’t write to NFS bind mount

Root cause: NFS root-squash or server-side permissions. Root in container is not special on the server.
Fix: Align numeric IDs across clients and server; adjust NFS exports deliberately; avoid “just run as root”.

3) Symptom: ls -l shows write permission, but writes fail

Root cause: ACL denies or LSM policy denies (AppArmor/SELinux).
Fix: Inspect ACLs with getfacl; inspect denials in kernel logs; fix policy/profile rather than chmod/chown.

4) Symptom: Files created in container appear on host owned by weird high UIDs (100000+)

Root cause: Rootless user namespace mapping defined in /etc/subuid and /etc/subgid.
Fix: Expect it; don’t fight it. Use aligned IDs via idmapped mount or set the application’s runtime UID strategy properly.

5) Symptom: Some nodes work, others fail, same deployment

Root cause: UID/GID drift across hosts; NFS or shared storage makes it visible.
Fix: Centralize identity or enforce fixed numeric IDs in provisioning; verify with id and ls -ln across nodes.

6) Symptom: Changing permissions “fixes” it but breaks later

Root cause: You masked identity mismatch by widening permissions; later, another service or security control expects tighter modes.
Fix: Revert to least privilege; fix identity mapping; add a preflight test that validates writeability under the correct UID/GID.

Checklists / step-by-step plan

Step-by-step plan for a single Debian 13 host (most common)

  1. Confirm the failing path and mount type.
    Use findmnt -T. If it’s NFS, skip ahead to the NFS plan.
  2. Record host ownership numerically.
    Use stat -c 'uid=%u gid=%g mode=%a' on the host path.
  3. Record container runtime identity.
    Inside container: id. Also capture whether you’re rootless.
  4. Check userns mapping (rootless especially).
    Inspect /proc/$pid/uid_map. If UIDs are shifted, assume mismatch until proven otherwise.
  5. Eliminate ACL and LSM denials.
    Run getfacl and check journalctl -k -g DENIED.
  6. Choose a fix:

    • If you control the container user: run container as host UID/GID.
    • If you can’t change the container user: idmapped mount (preferred) or controlled ownership change.
    • If this is a shared environment: standardize IDs rather than patching one host.
  7. Validate with a two-way write test.
    Create a file from container and verify its host UID/GID; then create from host and write from container.
  8. Make it stick.
    Put UID/GID decisions into config management, not tribal knowledge.
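Steps 1 through 5 above can be collected in one evidence-gathering pass. This is a sketch: the path and container name are placeholders, and the container-specific checks are skipped when podman or the container isn’t present.

```shell
#!/bin/sh
# Sketch: gather mount, ownership, ACL, and mapping facts in one pass.
# P and CTR are placeholders; container checks are skipped if absent.
set -eu
P="${P:-/srv/app/data}"
CTR="${CTR:-appctr}"

echo "== mount =="
findmnt -T "$P" -o TARGET,SOURCE,FSTYPE,OPTIONS 2>/dev/null || echo "(path missing or findmnt unavailable)"
echo "== host ownership (numeric) =="
stat -c 'uid=%u gid=%g mode=%a' "$P" 2>/dev/null || echo "(path missing)"
echo "== ACLs =="
getfacl -p "$P" 2>/dev/null || echo "(no ACLs readable)"
echo "== recent LSM denials =="
journalctl -k -g DENIED --no-pager 2>/dev/null | tail -n 3 || true
if command -v podman >/dev/null 2>&1 && podman container exists "$CTR" 2>/dev/null; then
  echo "== container identity =="
  podman exec "$CTR" id
  echo "== uid_map =="
  pid=$(podman inspect -f '{{.State.Pid}}' "$CTR")
  cat "/proc/$pid/uid_map"
fi
GATHERED=yes
echo "done: $GATHERED"
```

Paste the output of this into the incident channel instead of screenshots of ls -l, and the argument about “whose fault it is” usually ends in one message.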

Step-by-step plan for NFS-backed bind mounts

  1. Identify the NFS server and export policy. Confirm root-squash expectations.
  2. Verify numeric UID/GID consistency across all clients. Same service account must have same UID everywhere.
  3. Fix identity at the source. Directory-backed identity or consistent provisioning beats local hacks.
  4. Avoid chown storms. If ownership correction is needed, do it server-side in a controlled window.
  5. Retest with real application UID. “Works as root” is not a success criterion on NFS.

Step-by-step plan for rootless containers on Debian 13

  1. Confirm rootless mode. podman info or runtime equivalent.
  2. Check /etc/subuid and /etc/subgid. Ensure ranges exist and are large enough.
  3. Inspect uid_map/gid_map. Know what container IDs become on the host.
  4. Pick an ownership strategy. Either data is owned by mapped host IDs, or you use idmapped mounts to present expected IDs.
  5. Document it. Rootless without identity documentation is an outage subscription.

FAQ

Why did this show up after moving to Debian 13?

Usually because you changed one layer: container runtime, rootless defaults, kernel feature set, systemd behavior, or identity provisioning order.
The underlying rule didn’t change: access is enforced by numeric IDs and policy.

Why does ls -l show a username I recognize, but the container can’t write?

Because the container’s process UID doesn’t match the numeric UID owning the directory. Usernames are resolved via NSS and can differ by environment.
Always check numeric IDs with ls -ln or stat.

Can I just chown -R the directory to match the container?

You can, but it’s often the worst option: slow on large trees, risky on shared data, and it bakes in accidental mappings.
Prefer aligning runtime UID/GID or idmapped mounts when you need translation without rewriting ownership.

What’s the cleanest fix when multiple services share the same host path?

Standardize service account UIDs/GIDs across the fleet and assign group ownership thoughtfully.
If services truly need different ownership views, idmapped mounts are a better tool than chmod chaos.

Why does rootless create files with UIDs like 101000 on the host?

Because user namespaces map container IDs into a subordinate range defined in /etc/subuid and /etc/subgid.
Container UID 1000 becomes host UID (subuid base + 1000).
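The arithmetic is simple enough to check on paper, but here it is as a two-line sketch, using the same example range (base 100000) as the /etc/subuid entry shown earlier in the article:

```shell
#!/bin/sh
# Sketch: compute the host UID a rootless container UID maps to,
# given a subordinate range starting at container UID 0.
set -eu
SUBUID_BASE=100000    # first host UID in the user's /etc/subuid range
CONTAINER_UID=1000    # UID of the process inside the container

# Container UID 0 maps to the base; everything else is base + offset.
HOST_UID=$((SUBUID_BASE + CONTAINER_UID))
echo "container uid $CONTAINER_UID -> host uid $HOST_UID"
# -> container uid 1000 -> host uid 101000
```

That host UID (101000 in this example) is what must own the bind-mounted data — or be translated via an idmapped mount — for the container’s writes to succeed.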

Does AppArmor really cause “Permission denied” that looks like a filesystem issue?

Yes. It can deny open/create operations even when mode bits look permissive. Check kernel logs for denials.
If you see AppArmor denies, fix the profile; don’t churn ownership.

What about Kubernetes hostPath volumes?

Same principle: the pod’s process identity must be able to access the node path. Use explicit runAsUser/runAsGroup,
or a controlled init step, or a storage class that avoids hostPath entirely for stateful data.

Is fsgroup enough?

Sometimes. If group permissions are correctly set (g+rwx) and the process is in that group, it works.
But it won’t fix owner-only directories and it won’t help if user namespace mapping prevents the group ID from matching.

How do I prove it’s UID/GID mapping and not “Debian permissions being weird”?

Use a two-way test: create a file inside the container and inspect its numeric owner on the host. Then reverse it.
If the IDs don’t align, the verdict is mapping mismatch.

When should I use idmapped mounts instead of changing container user?

Use idmapped mounts when you can’t change the application UID (vendor image), can’t change on-disk ownership (shared or huge data),
or need multiple views of the same data with different ownership semantics.

Next steps that won’t haunt you

“Permission denied” on a bind mount is rarely a mystery and almost never fixed by making permissions more permissive.
Treat it like an identity mapping problem until proven otherwise.

Do this next, in order:

  1. Capture numeric UID/GID of the host path and the container process.
  2. Confirm whether user namespaces are shifting IDs (rootless is the usual suspect).
  3. Eliminate ACL and AppArmor denials with targeted checks.
  4. Pick a clean fix: align runtime UID/GID, standardize IDs across hosts, or use idmapped mounts for translation.
  5. Codify it in provisioning and CI with a preflight write test.

If you do only one thing: stop trusting usernames when debugging storage permissions. Numeric IDs are the truth, and bind mounts are honest to a fault.
