You bind-mount a host directory into a container. Inside the container, it’s empty. You double-check the path. You try a different shell. You restart Docker. You begin bargaining with the universe. The host directory is full of files, yet the container insists it’s a pristine void.
This is often not a permissions problem, not SELinux, not “Docker being Docker.” It’s mount propagation: how mount events (new mounts) do—or don’t—flow between the host and a container’s mount namespace. It’s subtle. It’s Linux. It’s fixable.
The exact symptom and why it fools smart people
Let’s define the “bind mount looks empty” failure mode precisely, because “empty” can mean three different things:
- Truly empty: inside the container you see an empty directory, while the host has files.
- Missing submount contents: the directory has files, but a mounted filesystem under it (like /mnt/data or /var/lib/kubelet/pods) shows up as empty or as a plain directory.
- Stale view: the container sees old contents, but mounts made after container start don’t appear.
The first case is usually permissions, UID/GID mismatch, or SELinux/AppArmor. But the second and third are classic mount propagation bugs. You mounted something under the bind mount on the host, and the container didn’t get the memo.
The brain trap: bind mounts feel like “a window to the host.” They’re not. They’re a reference to a path, resolved in a mount namespace, with specific propagation semantics. Linux is doing exactly what you asked, not what you meant.
Short joke #1: If you ever want to feel powerful, create a mount namespace; you can make / mean whatever you want. It’s like time travel, but with more mount flags.
Interesting facts and history (short, concrete, useful)
- Mount propagation entered mainline Linux in the 2.6 era as part of the larger effort to support containers: separate mount namespaces plus controlled sharing between them.
- The “shared subtree” feature exists because a bind mount (even a recursive one) copies the tree once, at mount time. Shared subtrees are what let later mount events under a directory appear elsewhere.
- Docker originally leaned heavily on Linux defaults. Those defaults vary by distro and by init system configuration, which is why the same Docker run command behaves differently on different hosts.
- Systemd changed the game. Many systemd-managed systems default / (or key mount points) to shared, affecting how mounts propagate into services and containers.
- Kubernetes exposes mount propagation explicitly (mountPropagation: HostToContainer, etc.) because storage plugins and CSI drivers hit these problems constantly.
- “rshared” is not “more permissive permissions.” It’s about mount events, not file access. You can have perfect permissions and still see empty submounts.
- OverlayFS made container root filesystems cheap but didn’t remove the need to understand mounts; it made it easier to forget you’re stacking filesystems.
- Bind mounts can mask mounts. If you bind-mount a directory over an existing mount point, you hide what was mounted there. Sometimes “empty” is “you mounted over it.”
- Container runtimes choose safety defaults. Many default to rprivate for bind mounts to avoid containers influencing the host mount table.
Mount propagation: what’s really happening
Linux has a concept called a mount namespace. Each namespace has its own view of the mount table: what is mounted where, and with which flags. Containers use this so a container can mount things without scribbling over the host’s mount table (and vice versa).
But mounts aren’t just a static table. They’re events. A process can mount a filesystem at /mnt/thing after your container has started. Whether that new mount appears inside the container depends on mount propagation between the two namespaces.
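You can watch this isolation with nothing but util-linux. A minimal sketch (output is illustrative; util-linux unshare defaults the new mount namespace to private propagation, so the tmpfs below never reaches the host):
cr0x@server:~$ sudo unshare --mount sh -c 'mount -t tmpfs tmpfs /mnt && findmnt /mnt'
TARGET SOURCE FSTYPE OPTIONS
/mnt   tmpfs  tmpfs  rw,relatime
cr0x@server:~$ findmnt /mnt
cr0x@server:~$
The second findmnt prints nothing: the mount happened only inside the short-lived namespace.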
Shared, private, slave: the three personalities
Every mount point has a propagation type. In practice you’ll see:
- private: mount/unmount events under this mount do not propagate to peers.
- shared: mount/unmount events propagate to all mounts in the same peer group.
- slave: receives propagation from a master shared mount, but does not send events back.
Then there are the recursive versions:
- rshared, rprivate, rslave: apply the rule to the mount and everything under it.
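You can set and inspect these modes directly with mount and findmnt. A minimal sketch with a throwaway path; note that propagation flags apply only to mount points, so a plain directory has to be bind-mounted onto itself first:
cr0x@server:~$ sudo mkdir -p /mnt/example
cr0x@server:~$ sudo mount --bind /mnt/example /mnt/example
cr0x@server:~$ sudo mount --make-shared /mnt/example
cr0x@server:~$ findmnt -o TARGET,PROPAGATION /mnt/example
TARGET       PROPAGATION
/mnt/example shared
cr0x@server:~$ sudo mount --make-private /mnt/example
cr0x@server:~$ findmnt -o TARGET,PROPAGATION /mnt/example
TARGET       PROPAGATION
/mnt/example private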
Here’s the gotcha: Docker bind mounts are often created with a propagation mode that’s not recursive shared. So if the host mounts something under the source path later, the container won’t see it.
A common real-world pattern that triggers the bug
You do something like:
- Bind-mount /srv into the container as /srv.
- On the host, later mount an NFS share at /srv/data (or systemd mounts it on demand).
- Inside the container, /srv/data looks empty or like an ordinary directory, because the submount didn’t propagate.
That’s not a file visibility problem. It’s a submount visibility problem.
One quote, because ops people collect these like scars
Everything fails all the time.
— Werner Vogels
Fast diagnosis playbook
If a bind mount looks empty, you can burn an afternoon—or you can run this checklist in five minutes and know where the body is buried.
- Confirm what kind of “empty” it is. Are files missing, or only submount contents?
- Check whether the host path contains a mount point under it. If yes, suspect propagation immediately.
- Inspect the container’s mount propagation settings. Look for Propagation in docker inspect output.
- Compare mount namespace views. Use nsenter to look at mounts as the container sees them.
- Decide: change propagation, change architecture, or stop mounting under that path. There’s a “correct” fix and a “stop the bleeding” fix. Pick consciously.
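If you want the playbook as commands, this is the five-minute version (using the demo names from the tasks below; substitute your own path and container):
cr0x@server:~$ findmnt -R /srv/demo
cr0x@server:~$ docker inspect mp-test --format '{{json .Mounts}}'
cr0x@server:~$ pid=$(docker inspect -f '{{.State.Pid}}' mp-test)
cr0x@server:~$ sudo nsenter -t "$pid" -m -- findmnt -R /demo
Compare the first and last outputs: a mount that exists on the host but is missing from the namespace view is your propagation bug.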
Shortcut: if you mount things dynamically under a directory and you also bind-mount that directory into containers, you almost always want rshared or rslave. Default rprivate is safe but surprising.
Hands-on tasks: commands, outputs, decisions (12+)
Task 1: Reproduce the bug in a controlled way
cr0x@server:~$ sudo mkdir -p /srv/demo/submount
cr0x@server:~$ echo "host-file" | sudo tee /srv/demo/host.txt
host-file
cr0x@server:~$ docker run --rm -d --name mp-test -v /srv/demo:/demo alpine sleep 100000
b8a3b2c55d9f8e8a2c5a2b0f0d2f0b6d9a5f9d0c8b7a0d1c2b3a4f5e6d7c8b9
cr0x@server:~$ docker exec mp-test ls -la /demo
total 8
drwxr-xr-x 3 root root 4096 Jan 3 12:00 .
drwxr-xr-x 1 root root 4096 Jan 3 12:00 ..
-rw-r--r-- 1 root root 10 Jan 3 12:00 host.txt
drwxr-xr-x 2 root root 4096 Jan 3 12:00 submount
What it means: baseline bind mount works. The container sees existing files.
Decision: proceed to mount something under /srv/demo/submount on the host and see if the container inherits it.
Task 2: Create a mount under the bind-mounted directory (host side)
cr0x@server:~$ sudo mount -t tmpfs tmpfs /srv/demo/submount
cr0x@server:~$ echo "mounted-file" | sudo tee /srv/demo/submount/mounted.txt
mounted-file
cr0x@server:~$ mount | grep "/srv/demo/submount"
tmpfs on /srv/demo/submount type tmpfs (rw,relatime)
What it means: there is now a submount under the directory you bound into the container.
Decision: check whether the container sees mounted.txt. If not, it’s propagation.
Task 3: Check visibility inside the container
cr0x@server:~$ docker exec mp-test sh -lc "mount | grep /demo/submount || true; ls -la /demo/submount; cat /demo/submount/mounted.txt 2>/dev/null || echo 'no mounted.txt'"
total 8
drwxr-xr-x 2 root root 4096 Jan 3 12:00 .
drwxr-xr-x 3 root root 4096 Jan 3 12:00 ..
no mounted.txt
What it means: the container doesn’t see the tmpfs submount; it sees the old directory instead.
Decision: confirm propagation mode and fix with rshared/rslave depending on your safety needs.
Task 4: Inspect the bind mount’s propagation setting
cr0x@server:~$ docker inspect mp-test --format '{{json .Mounts}}'
[{"Type":"bind","Source":"/srv/demo","Destination":"/demo","Mode":"","RW":true,"Propagation":"rprivate"}]
What it means: propagation is rprivate. New mounts under /srv/demo won’t show up in the container.
Decision: switch to rshared or rslave on this mount.
Task 5: Re-run container with rshared propagation
cr0x@server:~$ docker rm -f mp-test
mp-test
cr0x@server:~$ docker run --rm -d --name mp-test -v /srv/demo:/demo:rshared alpine sleep 100000
a0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
cr0x@server:~$ docker inspect mp-test --format '{{range .Mounts}}{{.Destination}} {{.Propagation}}{{"\n"}}{{end}}'
/demo rshared
What it means: the bind mount is now configured to propagate submounts.
Decision: validate that the existing tmpfs submount becomes visible.
Task 6: Validate submount visibility after changing propagation
cr0x@server:~$ docker exec mp-test sh -lc "mount | grep /demo/submount; ls -la /demo/submount; cat /demo/submount/mounted.txt"
tmpfs on /demo/submount type tmpfs (rw,relatime)
total 4
drwxr-xr-x 2 root root 60 Jan 3 12:01 .
drwxr-xr-x 3 root root 4096 Jan 3 12:01 ..
-rw-r--r-- 1 root root 13 Jan 3 12:01 mounted.txt
mounted-file
What it means: the container now sees the host submount. One caveat: Docker bind mounts are recursive, so a plain restart would also have picked up a submount that already existed at start. The propagation setting is what keeps mounts made after start visible.
Decision: decide whether rshared is acceptable or if you want rslave for one-way propagation.
Task 7: Choose rslave when you want one-way propagation
cr0x@server:~$ docker rm -f mp-test
mp-test
cr0x@server:~$ docker run --rm -d --name mp-test -v /srv/demo:/demo:rslave alpine sleep 100000
d9c8b7a6f5e4d3c2b1a0f9e8d7c6b5a4f3e2d1c0b9a8f7e6d5c4b3a2f1e0d9
cr0x@server:~$ docker exec mp-test sh -lc "mount | grep /demo/submount; cat /demo/submount/mounted.txt"
tmpfs on /demo/submount type tmpfs (rw,relatime)
mounted-file
What it means: rslave still lets host → container mount events through, but doesn’t propagate container mounts back to the host.
Decision: in most production cases, prefer rslave unless you explicitly need bi-directional propagation.
Task 8: Confirm host mount propagation state (shared vs private)
cr0x@server:~$ findmnt -o TARGET,PROPAGATION -T /srv/demo
TARGET PROPAGATION
/      shared
What it means: -T resolves the path to the mount that contains it (here, the root filesystem), and that mount is in a shared peer group. If it’s private, you may need to change the host side too.
Decision: if propagation is private on the host and you need sharing, you may need mount --make-rshared (carefully).
Task 9: Safely view mount namespace differences using nsenter
cr0x@server:~$ pid=$(docker inspect -f '{{.State.Pid}}' mp-test)
cr0x@server:~$ sudo nsenter -t "$pid" -m -- findmnt -R /demo | head
TARGET         SOURCE               FSTYPE OPTIONS
/demo          /dev/sda1[/srv/demo] ext4   rw,relatime
/demo/submount tmpfs                tmpfs  rw,relatime
What it means: you are looking at mounts from inside the container’s mount namespace without relying on container tooling.
Decision: if the namespace view doesn’t include the submount, it’s propagation or timing. If it does, your app path is wrong.
Task 10: Detect “mounted over it” masking mistakes
cr0x@server:~$ sudo mount -t tmpfs tmpfs /srv/demo
cr0x@server:~$ ls -la /srv/demo | head
total 8
drwxr-xr-x 2 root root 40 Jan 3 12:02 .
drwxr-xr-x 3 root root 18 Jan 3 12:00 ..
What it means: you just mounted tmpfs over /srv/demo, hiding the original contents. Congratulations, you created “empty” the honest way.
Decision: unmount and rethink: don’t mount over your bind source unless you mean to replace it.
Task 11: Inspect mounts with recursion awareness
cr0x@server:~$ sudo umount /srv/demo
cr0x@server:~$ findmnt -R /srv/demo/submount
TARGET             SOURCE FSTYPE OPTIONS
/srv/demo/submount tmpfs  tmpfs  rw,relatime
What it means: the original tmpfs is visible again once the masking mount is gone. Note that findmnt matches mount points, not plain directories: /srv/demo itself is not a mount point here, so findmnt -R /srv/demo would print nothing. Query the nested path directly, or run findmnt with no arguments and scan the tree for paths under your bind source.
Decision: if the “missing data” lives on a submount, treat it as mount propagation until proven otherwise.
Task 12: Verify SELinux label issues (to avoid false positives)
cr0x@server:~$ docker run --rm -v /srv/demo:/demo:Z alpine sh -lc "ls -la /demo | head"
total 8
drwxr-xr-x 3 root root 4096 Jan 3 12:00 .
drwxr-xr-x 1 root root 4096 Jan 3 12:03 ..
-rw-r--r-- 1 root root 10 Jan 3 12:00 host.txt
drwxr-xr-x 2 root root 4096 Jan 3 12:00 submount
What it means: if you’re on SELinux-enforcing systems, the :Z option relabels content for container access. If this fixes “empty,” it wasn’t propagation.
Decision: split diagnosis: “permission denied or hidden by policy” vs “submount not propagating.” Different fixes, different risks.
Task 13: Confirm AppArmor profile isn’t blocking mount operations (rare but nasty)
cr0x@server:~$ docker inspect mp-test --format '{{.AppArmorProfile}}'
docker-default
What it means: the container runs under a profile (common on Ubuntu). Usually this affects mount attempts from inside the container, not visibility of host mounts.
Decision: if your container is supposed to perform mounts (e.g., FUSE, CSI-like behavior), you may need extra privileges or a different design.
Task 14: Check whether the source path is itself a mount point with “private” propagation
cr0x@server:~$ findmnt -o TARGET,PROPAGATION /
TARGET PROPAGATION
/ shared
What it means: if / or the relevant parent mount is private, you can get surprising behavior with nested mounts. Distro defaults matter.
Decision: don’t blindly run mount --make-rshared / in production. Do it only if you understand the blast radius, and ideally in a maintenance window.
Task 15: See exactly what Docker thinks it mounted
cr0x@server:~$ docker inspect mp-test --format '{{range .Mounts}}{{println .Source "->" .Destination "prop:" .Propagation}}{{end}}'
/srv/demo -> /demo prop: rslave
What it means: Docker’s view is often enough to spot the problem: rprivate when you expected shared behavior.
Decision: if Docker says rprivate, don’t argue with it. Change the mount option or change the structure.
Fixes that work (and when to use them)
Fix 1: Set propagation explicitly on the bind mount
For Docker CLI, you can set it on the volume spec:
- Bi-directional: -v /path:/path:rshared
- Host-to-container only (usually safer): -v /path:/path:rslave
When to use: the host mounts or unmounts things under the bind source after container start (NFS automounts, systemd mounts, CSI staging, loop mounts, tmpfs scratch mounts).
When not to use: when you don’t control what the container can do and you’re worried about mount events flowing back (avoid rshared unless you need it).
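If you deploy with Compose, the long volume syntax exposes the same knob. A sketch, assuming the Compose long syntax (service name and paths are placeholders):
services:
  worker:
    image: alpine:latest
    command: sleep infinity
    volumes:
      - type: bind
        source: /srv/demo
        target: /demo
        bind:
          propagation: rslave
The long form is verbose, but it makes the propagation choice visible in code review instead of hiding it in a flag string.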
Fix 2: Stop mounting under a bind-mounted directory
This is the boring architectural fix: don’t create submounts under a path you’re bind-mounting into containers. Mount your dynamic filesystem elsewhere, then bind-mount the final mount point directly into the container.
Example: instead of bind-mounting /srv and later mounting NFS to /srv/data, mount NFS at /mnt/nfs/data and bind-mount /mnt/nfs/data directly.
Why this works: you avoid reliance on propagation entirely. Less magic. Fewer surprises.
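A sketch of the pattern (the NFS server and export are placeholders):
cr0x@server:~$ sudo mkdir -p /mnt/nfs/data
cr0x@server:~$ sudo mount -t nfs nfs.example.com:/export/data /mnt/nfs/data
cr0x@server:~$ docker run --rm -v /mnt/nfs/data:/data alpine ls /data
Because the bind source is the mount point itself, the default rprivate propagation is fine; there is no later submount event to miss. The only remaining requirement is ordering: the NFS mount must exist before the container starts.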
Fix 3: Make the host mount shared (only if you must)
If the host side mount point is private, even a container bind mount marked rshared might not do what you want. In some setups, you need the host mount tree to be shared. Remember that propagation flags apply to mount points: if /srv is just a directory on the root filesystem, first turn it into a mount point with mount --bind /srv /srv.
cr0x@server:~$ sudo mount --make-rshared /srv
cr0x@server:~$ findmnt -o TARGET,PROPAGATION /srv
TARGET PROPAGATION
/srv shared
What it means: mount events under /srv can propagate to peer mounts.
Decision: do this only for a subtree you control. Making / recursively shared can have surprising interactions with other services.
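Also make the setting survive reboots, or the next reboot silently reverts you to the distro default. One option is a small oneshot unit, sketched here under the assumption that /srv is already a mount point (the unit name and ordering are placeholders to adapt):
# /etc/systemd/system/srv-make-rshared.service
[Unit]
Description=Mark the /srv subtree as recursively shared
After=local-fs.target
Before=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/mount --make-rshared /srv
[Install]
WantedBy=multi-user.target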
Fix 4: Prefer --mount syntax for clarity
The -v syntax is compact but easy to misread. The --mount form is verbose and less ambiguous:
cr0x@server:~$ docker run --rm -d --name mp-test \
--mount type=bind,source=/srv/demo,target=/demo,bind-propagation=rslave \
alpine sleep 100000
2b1a0f9e8d7c6b5a4f3e2d1c0b9a8f7e6d5c4b3a2f1e0d9c8b7a6f5e4d3c2b1
Why you should care: production outages love configuration that “looked right.” Verbose configuration looks right less often, which is a win.
Fix 5: If your container needs to mount things, reconsider the design
Some workloads (backup tools, storage agents, FUSE, plugin-like components) want to mount inside the container and have it show up on the host or in sibling containers. That’s where rshared and privileges show up, and that’s also where security reviews go to die.
Opinionated guidance: if you need container-initiated mounts to appear on the host, you’re building a storage system component. Treat it like one: least privilege, tight constraints, and explicit propagation. If you don’t need that, don’t enable it “just in case.”
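If you truly need such a component, declare everything explicitly rather than reaching for --privileged. A hedged sketch (the image name and host path are placeholders; the exact capability set depends on what the agent actually mounts):
cr0x@server:~$ docker run -d --name storage-agent \
    --cap-add SYS_ADMIN \
    --device /dev/fuse \
    --security-opt apparmor=unconfined \
    --mount type=bind,source=/mnt/agent,target=/mnt/agent,bind-propagation=rshared \
    example/storage-agent:latest
Note that rshared requires the host side of /mnt/agent to be on a shared mount, and every one of these flags should be justified in review.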
Short joke #2: “Just run it privileged” is the ops equivalent of “have you tried turning it off and on again,” except it turns on new ways to break things.
Three corporate-world mini-stories
Mini-story 1: The incident caused by a wrong assumption
A team migrated a legacy batch system into containers. It had a simple contract: drop files into /incoming, process them, write results to /outgoing. On VMs, those were just directories. In the container era, they bind-mounted /srv/pipeline into the container as /pipeline and kept the internal paths the same.
Months later, storage usage spiked. Someone added an automount unit so that /srv/pipeline/incoming was backed by an NFS share during business hours. It was a clean idea: less local disk usage, easy scaling. The deploy went out on a Friday because the change was “only storage.”
That weekend the containerized worker fleet started reporting “no input files.” Metrics were weird: the NFS share was full of files, the batch controller was feeding it correctly, and the workers looked healthy. The on-call did what on-calls do: restarted workers. Nothing changed.
The root cause was mount propagation. The workers were started before the automount triggered, so inside their mount namespaces, /pipeline/incoming stayed a plain directory forever. The NFS mount happened on the host, but their bind mount was rprivate. They never saw the NFS submount.
The fix was boring and immediate: change the bind mount to rslave and redeploy. The lasting fix was more important: stop bind-mounting a broad parent like /srv/pipeline; mount NFS at a stable path and bind-mount that exact directory. That cut the blast radius when storage changes happen.
Mini-story 2: The optimization that backfired
A platform team wanted faster node bootstraps. They moved several large datasets from local disk to on-demand mounts: some were loop-mounted images, some were network shares. The idea was to “activate” mounts only when a node ran a workload that needed them.
They also standardized container run arguments: every service got a bind mount of /opt/runtime so common tools and certificates were available. It worked in dev. It worked in staging. Production started failing in a way that looked like data corruption: applications saw empty directories where datasets should be.
Turns out, the “activation” script mounted the datasets under /opt/runtime/datasets after containers were already running. The bind mount was rprivate, so containers never got the new submounts. Engineers tried to “fix it” by restarting only the failing apps, which sometimes worked—because the mount happened to be active before restart. They called it flaky. It was deterministic, just badly timed.
They changed the mount to rshared globally because it made the issue vanish fast. Two weeks later, a different incident: a debugging container with extra capabilities mounted a tmpfs in a place that propagated back to the host subtree. It didn’t destroy data, but it did confuse monitoring and made cleanup scripts misbehave.
The final state was nuanced: rslave for most services, strict capability sets, and a rule that only a dedicated agent could perform mounts. The “optimization” wasn’t wrong; the unexamined propagation and privilege assumptions were.
Mini-story 3: The boring but correct practice that saved the day
A different organization ran a fleet where storage was complicated: local SSD caches, network mounts, and occasional incident-driven remounts. They had a simple rule: no bind-mounting of broad parent directories. Only mount exactly what the container needs. If a container needs /data/app, it gets /data/app, not /data.
It was unpopular. Developers liked convenience. SREs liked convenience too, right up until they were paged. The platform team enforced the rule in code review and in CI checks for Compose specs and deployment manifests.
One day, a storage maintenance event required remounting a subtree under /data. On hosts where services bind-mounted /data historically, this would have caused partial visibility bugs. But here, most services had direct mounts of their exact paths, and only a couple of specialized agents used rslave propagation because they expected submounts.
The incident ended up being boring. A few containers were restarted as part of the maintenance anyway, but there was no mystery “empty directory” behavior. The postmortem was short and unsexy, which is the highest compliment you can give an operations practice.
Common mistakes: symptoms → root cause → fix
1) “The directory is empty in the container, but not on the host”
- Symptom: ls shows empty; no errors.
- Root cause: you mounted over the source path on the host (masking), or you’re looking at a submount that didn’t propagate.
- Fix: run findmnt -R /host/path to see submounts; avoid mounting over the bind source; set bind-propagation=rslave if submounts must appear.
2) “Files appear after container restart”
- Symptom: restarting the container “fixes it,” but only sometimes.
- Root cause: a timing dependency. Sometimes the host mount exists at container start; other times it’s created later (automount, on-demand scripts).
- Fix: use rslave/rshared propagation, or ensure mounts are up before starting containers (systemd ordering).
3) “Only subdirectories are empty”
- Symptom: base files visible, but a particular nested directory is empty.
- Root cause: that nested directory is actually a mount point on the host.
- Fix: bind-mount the nested mount point directly, or enable propagation for the parent bind mount.
4) “It works on Ubuntu but not on another distro (or vice versa)”
- Symptom: identical container command, different results across hosts.
- Root cause: different default propagation settings for / or relevant mount points; systemd defaults differ; container runtime defaults differ.
- Fix: stop relying on defaults; declare propagation explicitly; verify with findmnt -o TARGET,PROPAGATION.
5) “We set rshared and now weird mounts show up on the host”
- Symptom: host sees mounts created by containers, or mounts linger after container exit.
- Root cause: rshared is bi-directional; the container has the capability to mount (or is privileged) and mount events propagate back.
- Fix: use rslave for one-way propagation; reduce container capabilities; avoid privileged containers; ensure cleanup of mounts is handled by a host agent.
6) “The mount is there but access is denied”
- Symptom: files exist but you see permission errors; sometimes appears “empty” to apps.
- Root cause: SELinux context mismatch, AppArmor restrictions, UID/GID mismatch, rootless Docker limitations.
- Fix: on SELinux systems use :Z or correct labels; align UID/GID; don’t confuse policy denials with propagation.
7) “We bind-mounted a symlink and got the wrong path”
- Symptom: container sees a different directory than expected.
- Root cause: bind mount resolves symlinks at mount time; later changes don’t affect it; or the symlink points into a mount that isn’t propagated.
- Fix: bind-mount real paths; avoid symlink indirection for storage mount points; verify with readlink -f on the host.
Checklists / step-by-step plan
Step-by-step plan: fix a production “empty bind mount” incident without guessing
- Identify the exact missing thing. Is it files, or a submount’s contents? If it’s a submount, move quickly toward propagation diagnosis.
- On the host, map the mount tree. Use findmnt -R on the bind source (if the source isn’t itself a mount point, run findmnt with no arguments and scan for paths under it). If you see a nested mount, you found your culprit candidate.
- Confirm container mount propagation. Run docker inspect and look at Propagation. If it’s rprivate, stop expecting submounts to appear.
- Decide the safer propagation direction:
  - Need host → container only: choose rslave.
  - Need both directions: choose rshared and justify it in writing.
- Implement explicitly via --mount. Roll out, restart containers that depend on it.
- Verify with nsenter or the container’s mount output. Don’t trust application logs as your only signal.
- Then fix the architecture. Prefer direct mounts of what you need rather than broad parent bind mounts. This prevents the next incident.
Checklist: when you should use rslave/rshared
- You use systemd automount units under a directory bind-mounted into containers.
- You mount network storage (NFS, CIFS) or block devices under a directory that containers already use.
- You have a “sidecar” or host agent that mounts datasets late (after containers started).
- You’re doing anything CSI-like, plugin-like, or building a storage integration.
Checklist: when you should avoid rshared
- The container is privileged or has mount-related capabilities and you don’t absolutely need mount events to go back to the host.
- You’re debugging and tempted to “just make it work.”
- You cannot guarantee cleanup behavior for mounts created by container processes.
FAQ
1) Why does my bind mount show files, but mounted filesystems under it are missing?
Because the base bind mount is present, but submount events didn’t propagate into the container’s mount namespace. Set bind-propagation=rslave or rshared, or bind-mount the submount directly.
2) What’s the difference between rshared and rslave in practice?
rshared propagates mount/unmount events both ways between peer mounts. rslave allows host → container propagation but not container → host. For most production services, rslave is the sane default when you need propagation.
3) Can mount propagation make a directory look completely empty?
Yes, if the “real data” is on a nested mount (NFS, tmpfs, loop device) and the container only sees the underlying directory. It will look empty or stale because you’re looking at the wrong layer.
4) Is this a Docker bug?
No. It’s Linux mount namespace behavior plus Docker’s (reasonable) safety defaults. The bug is the assumption that a bind mount is automatically recursive and dynamically updated for submounts.
5) Does this affect Docker volumes (named volumes) too?
It can, but it’s most common with bind mounts because you’re mapping an arbitrary host path where submounts may appear. Named volumes are managed under Docker’s storage area and usually don’t have surprise submounts—unless you create them.
6) How does systemd automount interact with containers?
Automount means the mount happens on first access, potentially after the container started. If the container’s view doesn’t receive that mount event due to rprivate, it may never see the mounted filesystem. Use rslave and consider ordering mounts before container start when possible.
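On systemd hosts you can also encode the ordering instead of relying on propagation. A sketch of a drop-in for whatever unit starts the container (the path and unit name are placeholders):
# /etc/systemd/system/my-app.service.d/mounts.conf
[Unit]
RequiresMountsFor=/srv/data
RequiresMountsFor= both pulls in the mount unit and orders the service after it, which pairs well with rslave as a belt-and-suspenders fix.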
7) What about rootless Docker?
Rootless environments add constraints: you may be unable to change certain mount propagation or perform mounts at all. If you’re relying on dynamic host mounts showing up inside a rootless container, consider re-architecting to avoid that dependency.
8) Does Kubernetes solve this automatically?
No. Kubernetes makes it explicit. Pods can request mount propagation modes, and the cluster must allow it. If you run storage plugins or need host mounts to appear in containers, you’ll still hit the same underlying Linux semantics.
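For reference, the pod-spec shape looks like this; a fragment with placeholder names (HostToContainer corresponds to rslave, and Bidirectional corresponds to rshared and requires a privileged container):
containers:
  - name: app
    volumeMounts:
      - name: host-data
        mountPath: /data
        mountPropagation: HostToContainer
volumes:
  - name: host-data
    hostPath:
      path: /srv/data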
9) What’s the safest long-term pattern to avoid this entire class of bugs?
Don’t bind-mount broad parents like /srv or /data. Mount storage at stable, dedicated mount points and bind-mount only the exact directories your container needs.
Conclusion: next steps you can apply today
If you remember one thing: a bind mount is not a live feed of mount events unless you make it one. When your container sees an empty directory but the host has data, first ask: “Is that data actually on a submount?”
Practical next steps:
- On an affected host, run findmnt -R on your bind source and identify any nested mounts.
- Run docker inspect and check the mount’s Propagation field.
- If you need host → container submount visibility, redeploy with bind-propagation=rslave (or rshared only when justified).
- Then refactor: bind-mount the exact mount points your app needs, not the whole parent directory.
That’s the difference between a one-time fix and never seeing this page again at 03:00.