You don’t realize how much of your app is “just files” until you try to move it. A host dies, a cloud team “helpfully”
replaces instances, or you decide to switch distros because a CVE made your weekend plans evaporate. Suddenly your containers
are portable and your data is… not.
Bind mounts and named volumes both look like “persistent storage” in Docker. They are not equivalent. One migrates like a
normal filesystem. The other migrates like a Docker implementation detail. Pick wrong, and the migration works—until the
first time you need it to.
The opinionated thesis: what survives migrations better
If your top priority is “I can lift this host’s storage and re-home it with minimal drama,”
bind mounts survive migrations better—when you control the host filesystem layout.
They’re explicit, visible, and transferable with the same tools you already trust for data movement: rsync, snapshots,
backups, replication, restores, checksums. You can reason about them without asking Docker for permission.
If your top priority is “I want Docker to manage the storage location and lifecycle and I’m okay treating it like
an application artifact,” named volumes are fine—and often easier for day-2 ops inside one host.
They reduce foot-guns like typos in paths, and they’re cleaner in Compose stacks. But they don’t automatically migrate well.
You have to export/import them on purpose.
My default in production is boring (a Compose sketch follows the list):
- Databases and stateful services: bind mount to a host directory that lives on a real storage substrate you already back up (LVM, ZFS dataset, EBS volume, SAN LUN, whatever your org calls “storage”).
- App-level mutable data that can be rebuilt: named volumes are acceptable, sometimes preferred.
- Anything you might need to restore under pressure: avoid “Docker-only” storage unless you have rehearsed volume exports.
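A minimal Compose sketch of that split, reusing the image names from the examples below; the app-cache volume name and paths are illustrative:

services:
  db:
    image: postgres:16
    volumes:
      # Critical state: explicit host path on a filesystem you already back up
      - /srv/state/db:/var/lib/postgresql/data
  api:
    image: myorg/api:2.4.1
    volumes:
      # Rebuildable state: a named volume is acceptable here
      - app-cache:/app/cache

volumes:
  app-cache: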
One more opinion: migrations fail because people confuse “container portability” with “data portability.”
The container is the easy part. The bytes are the job.
What actually happens on disk
Bind mounts: Docker as a guest, your filesystem as the source of truth
A bind mount is Docker saying: “Take this /path/on/host and present it at /path/in/container.”
Docker doesn’t own the data. Docker doesn’t even get to pretend. The host path exists, permissions apply, SELinux/AppArmor
policies apply, and your storage team’s snapshotting or replication applies.
Migration implication: if you can move that host path—by moving the underlying disk, snapshotting a dataset, restoring a backup,
or rsyncing directories—you can move the data. Docker doesn’t add a special format.
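As a hedged example, if that host path is a ZFS dataset (dataset and host names are placeholders), the move is a snapshot send, assuming the destination pool exists and the target dataset does not:

cr0x@server:~$ sudo zfs snapshot tank/state/db@premigrate
cr0x@server:~$ sudo zfs send tank/state/db@premigrate | ssh newhost sudo zfs receive tank/state/db

The same idea works with LVM snapshots plus rsync, or plain rsync as shown in Task 11 below.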
Named volumes: Docker as landlord (and you’re renting space)
A named volume is Docker saying: “I will allocate storage for this volume, managed by a volume driver.”
Most of the time, the driver is local. On Linux with the default engine, that often means Docker stores volume data under:
/var/lib/docker/volumes/<volume>/_data.
That path is not a stable API. It’s an implementation detail that changes with:
the storage driver, rootless mode, Docker Desktop’s VM layer, distro packaging, and whatever policy your security team forces next quarter.
Migration implication: to move named volumes safely, treat them like a Docker-managed asset:
inspect, export, move, import, verify. Do not “just copy /var/lib/docker” unless you’ve tested that exact engine version,
filesystem, and storage driver combination. Sometimes it works. Sometimes it works until it doesn’t.
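A minimal sketch of that export/move/import cycle as a single stream, with no intermediate tarball (volume name and host are placeholders; Docker creates the destination volume on first use):

cr0x@server:~$ docker run --rm -v pgdata:/data alpine:3.20 tar -cpf - -C /data . | ssh newhost 'docker run --rm -i -v pgdata:/data alpine:3.20 tar -xpf - -C /data'

Stop writers before you stream, and verify contents on the other side; Tasks 9 and 10 show the file-based variant step by step.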
Joke #1: Docker volumes are like office plants: nobody remembers who owns them until it’s time to move buildings.
Facts and historical context (the stuff that explains today’s mess)
- Docker originally encouraged ephemeral containers (2013-era culture): “rebuild, don’t patch.” Persistent data was intentionally externalized, which is why volumes feel bolted-on rather than native.
- Named volumes were a usability fix for “bind-mount everything,” which got messy in Compose files and brittle across hosts with different directory layouts.
- The local volume driver is not a universal contract. On Linux it maps to the host filesystem; on Docker Desktop it maps to a VM filesystem you don’t control in the same way.
- Storage drivers changed over time (AUFS → overlay2 becoming common). The behavior of “copying Docker’s data dir” is tightly coupled to the storage driver’s on-disk format.
- Compose v2 made “named volume by default” more common because it’s convenient and avoids path assumptions. Convenience is how you get technical debt with a smile.
- Rootless Docker and user namespaces shifted volume permission problems from “rare edge case” to “routine surprise,” especially with bind mounts where UID/GID mapping matters.
- SELinux labeling has been a long-running migration gotcha. Moving bind-mounted data between hosts with different SELinux contexts can break workloads even if the bytes are correct.
- Kubernetes normalized the idea of explicit persistent volumes with drivers and claims. Docker named volumes look similar, but they are not the same migration story or abstraction level.
Migration survival matrix: bind mounts vs named volumes
What “survive migrations” really means
“Survive” is doing a lot of work here. A migration is successful when:
- Data arrives intact (checksums match, not just “it starts”).
- Permissions and ownership are correct for the container runtime model.
- Performance characteristics are not catastrophically different (IOPS, fsync latency, page cache behavior).
- Operational workflows still work (backups, restores, incident debugging).
Bind mounts: strengths and failure modes
Strengths:
- Directly migratable with standard tools.
- Transparent: you can see the data without Docker running.
- Compatible with host-level storage features: snapshots, replication, quotas, encryption, dedupe, compression.
- Better for compliance when auditors ask, “Where is the data?” and you want to answer without a Docker lecture.
Failure modes:
- Path coupling: your stack assumes /srv/app/db exists on every host. If you replace a host and forget that directory (or its mount), the container starts on an empty directory. Congratulations, you invented data loss.
- Permissions mismatch: the container user expects UID 999 but your restored files are owned by 1001. “Permission denied” is the polite version of this.
- SELinux/AppArmor friction: bytes are present, access is blocked. This is the most annoying correct behavior in Linux.
- Accidental host exposure: a bind mount of / or /var/run/docker.sock is a security event waiting to happen.
Named volumes: strengths and failure modes
Strengths:
- Portability of configuration: Compose files don’t hardcode host paths.
- Safer by default: fewer “oops I mounted the wrong directory” disasters.
- Isolation: the data is tucked under Docker’s management, reducing casual tampering.
- Driver flexibility: you can use plugins to point to NFS, block devices, or cloud storage (but now you’re operating a driver too).
Failure modes:
- Migration requires Docker-aware steps. If you treat volume data as an opaque blob under /var/lib/docker, you inherit all of Docker’s internal complexity.
- Hidden dependency on the engine setup: rootless, Desktop VM, storage driver, filesystem choice.
- Backup blind spots: infrastructure backup tools often exclude /var/lib/docker because it’s “rebuildable.” Your named volumes may be living in the excluded zone.
So what survives better?
In migrations, explicit beats implicit. Bind mounts are explicit: you can see and move them.
Named volumes can survive just fine if you operationalize exports and restores and you don’t rely on undocumented paths.
Most teams don’t rehearse that. They rehearse “re-deploy containers,” not “re-home state.”
Practical tasks (commands, outputs, and decisions)
This is the part where you stop guessing. Each task includes: a command, what the output means, and what decision you make.
Run them on the source host before migration, and again on the destination host after.
Task 1: List containers and their mounts (find what’s actually stateful)
cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Mounts}}'
NAMES IMAGE MOUNTS
pg01 postgres:16 pgdata
api01 myorg/api:2.4.1 /srv/api/config:/app/config
grafana01 grafana/grafana:11 grafana-storage
Meaning: mounts column mixes named volumes (e.g., pgdata) and bind mounts (host paths show as host:container).
Decision: anything that holds business state gets a migration plan. “It’s just a cache” is not a plan.
Task 2: Inspect a container’s mounts (truth, not vibes)
cr0x@server:~$ docker inspect pg01 --format '{{json .Mounts}}'
[{"Type":"volume","Name":"pgdata","Source":"/var/lib/docker/volumes/pgdata/_data","Destination":"/var/lib/postgresql/data","Driver":"local","Mode":"z","RW":true,"Propagation":""}]
Meaning: Type tells you bind vs volume. Source for a local named volume often points under Docker’s data dir.
Decision: if it’s a named volume, plan export/import. If it’s a bind mount, plan filesystem migration and permission checks.
Task 3: Inspect a named volume (driver, mountpoint, labels)
cr0x@server:~$ docker volume inspect pgdata
[
{
"CreatedAt": "2025-11-02T09:14:21Z",
"Driver": "local",
"Labels": {
"com.docker.compose.project": "billing",
"com.docker.compose.volume": "pgdata"
},
"Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
"Name": "pgdata",
"Options": null,
"Scope": "local"
}
]
Meaning: Scope: local means “this is not magically shared.” Mountpoint is where the bytes live on this host.
Decision: if you expected HA storage, you don’t have it. If you expected “auto-migrate,” you don’t have that either.
Task 4: Confirm the host path behind a bind mount exists and is mounted correctly
cr0x@server:~$ ls -ld /srv/api/config
drwxr-x--- 2 root api 4096 Jan 2 09:10 /srv/api/config
cr0x@server:~$ findmnt -T /srv/api/config
TARGET SOURCE FSTYPE OPTIONS
/srv /dev/nvme0n1p2 ext4 rw,relatime
Meaning: bind mount data lives on whatever filesystem backs that path; here it’s ext4 on a local NVMe partition.
Decision: if findmnt shows the wrong source (or nothing), fix mounts before migrating. Otherwise you’ll migrate the empty placeholder directory.
Task 5: Check free space and inodes (migrations fail because you ran out of “not space”)
cr0x@server:~$ df -h /srv /var/lib/docker
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p2 200G 120G 71G 63% /srv
/dev/nvme0n1p3 100G 92G 3.5G 97% /var/lib/docker
cr0x@server:~$ df -i /var/lib/docker
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p3 655360 621104 34256 95% /var/lib/docker
Meaning: Docker’s data partition is nearly full and inode-starved. That’s a classic “things get weird” setup.
Decision: don’t migrate a mess into a new host. Fix capacity first or change where Docker stores data.
Task 6: Identify the Docker data root (don’t assume /var/lib/docker)
cr0x@server:~$ docker info --format '{{.DockerRootDir}}'
/var/lib/docker
Meaning: this is where Docker stores images, layers, and local named volumes (unless a driver changes that).
Decision: for named volumes, you’ll export from here; for bind mounts, this is mostly irrelevant except for engine migration strategies.
Task 7: Find which volumes are used vs orphaned (reduce migration scope)
cr0x@server:~$ docker volume ls
DRIVER VOLUME NAME
local billing_pgdata
local grafana-storage
local old_tmpdata
cr0x@server:~$ docker ps -a --format '{{.Names}}' | wc -l
7
Meaning: volume list includes orphaned volumes. Containers count tells you how many workloads might reference them.
Decision: identify orphans before migrating; otherwise you’ll faithfully carry dead bytes like family heirlooms nobody wanted.
Task 8: Map a named volume back to the containers using it
cr0x@server:~$ docker ps -a --filter volume=billing_pgdata --format 'table {{.Names}}\t{{.Status}}\t{{.Image}}'
NAMES STATUS IMAGE
pg01 Up 3 hours postgres:16
Meaning: only pg01 uses this volume.
Decision: migration plan can be per-volume; you don’t need to export everything “just in case.”
Task 9: Export a named volume to a tarball (portable and boring)
cr0x@server:~$ docker run --rm -v billing_pgdata:/data -v /backup:/backup alpine:3.20 sh -c 'cd /data && tar -cpf /backup/billing_pgdata.tar .'
cr0x@server:~$ ls -lh /backup/billing_pgdata.tar
-rw-r--r-- 1 root root 2.1G Jan 2 10:01 /backup/billing_pgdata.tar
Meaning: you have a single artifact representing the volume contents.
Decision: this is the safe default for named volume migrations. If the tarball is huge, consider downtime vs incremental strategies.
Task 10: Import a named volume tarball on the new host (and verify it’s not empty)
cr0x@server:~$ docker volume create billing_pgdata
billing_pgdata
cr0x@server:~$ docker run --rm -v billing_pgdata:/data -v /backup:/backup alpine:3.20 sh -c 'cd /data && tar -xpf /backup/billing_pgdata.tar'
cr0x@server:~$ docker run --rm -v billing_pgdata:/data alpine:3.20 sh -c 'ls -la /data | head'
total 72
drwx------ 19 999 999 4096 Jan 2 10:05 .
drwxr-xr-x 1 root root 4096 Jan 2 10:05 ..
drwx------ 5 999 999 4096 Jan 2 10:05 base
-rw------- 1 999 999 88 Jan 2 10:05 postmaster.opts
Meaning: files exist and ownership resembles typical Postgres (UID 999 here).
Decision: if ownership is wrong, fix it before starting Postgres, not after it fails halfway through recovery.
Task 11: Migrate a bind mount with rsync (preserve ownership, xattrs, ACLs)
cr0x@server:~$ sudo rsync -aHAX --numeric-ids --delete /srv/api/config/ cr0x@newhost:/srv/api/config/
sending incremental file list
./
app.yaml
secrets.env
sent 24,198 bytes received 91 bytes 48,578.00 bytes/sec
total size is 18,422 speedup is 0.76
Meaning: -aHAX keeps metadata; --numeric-ids avoids user name mismatches across hosts.
Decision: if rsync reports massive deletions unexpectedly, stop and confirm you didn’t point at the wrong path.
Task 12: Verify checksums for a bind-mounted directory (integrity, not optimism)
cr0x@server:~$ cd /srv/api/config && sudo find . -maxdepth 2 -type f -print0 | sort -z | xargs -0 sudo sha256sum > /tmp/api-config.sha256
cr0x@server:~$ head /tmp/api-config.sha256
b2e58d0c5c2de0c8c61b49e0b8d8c0c95bdb9b5c5b718e3cc7c4f2c1f8f91ac4 ./app.yaml
c1aa9b3e0c6c4e99c7b2e5fd3d9b2a7a1bb6c32d08d5f9d7c7089a8d77c0e0a2 ./secrets.env
Meaning: a checksum manifest. Run the same on the destination and compare.
Decision: for critical config/state, checksum comparisons beat “it seems fine” every day of the week.
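On the destination, the same manifest does the comparison for you, assuming you copied /tmp/api-config.sha256 across along with the data:

cr0x@server:~$ cd /srv/api/config && sudo sha256sum -c /tmp/api-config.sha256
./app.yaml: OK
./secrets.env: OK

Any line that isn’t OK is a file you investigate before cutover, not after.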
Task 13: Detect UID/GID mismatches that will break containers after migration
cr0x@server:~$ stat -c '%n %u:%g %a' /srv/api/config
/srv/api/config 0:1002 750
cr0x@server:~$ getent group api
api:x:1002:
Meaning: directory is group-owned by GID 1002. If the new host assigns api a different GID, group access breaks.
Decision: standardize system UID/GID allocation (or use numeric IDs consistently) for bind-mounted paths.
Task 14: Check SELinux mode and context (bind mount migrations love to fail here)
cr0x@server:~$ getenforce
Enforcing
cr0x@server:~$ ls -Zd /srv/api/config
unconfined_u:object_r:default_t:s0 /srv/api/config
Meaning: SELinux is enforcing and the directory has default_t, which often blocks container access.
Decision: use correct labels (or mount options in your container definition). Migrating data without restoring proper contexts yields “permission denied” with extra steps.
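Two hedged ways to fix the labels on this path (test outside production first): let Docker relabel the mount with the :Z option, or relabel it yourself and make the context persistent.

cr0x@server:~$ docker run -d -v /srv/api/config:/app/config:Z myorg/api:2.4.1
cr0x@server:~$ sudo semanage fcontext -a -t container_file_t '/srv/api/config(/.*)?'
cr0x@server:~$ sudo restorecon -Rv /srv/api/config

Note that :Z applies a private per-container label; use :z if multiple containers share the path.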
Task 15: Observe live IO latency symptoms (is the “migration” actually a storage regression?)
cr0x@server:~$ iostat -x 1 3
Linux 6.8.0 (server) 01/02/2026 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
7.42 0.00 2.13 9.77 0.00 80.68
Device r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme0n1 21.0 180.0 1.2 14.8 177.4 2.10 11.6 2.1 12.7 0.45 90.4
Meaning: await ~11ms and %util ~90% suggests the disk is busy and latency is non-trivial.
Decision: if the destination host shows worse latency, your migration “worked” but your database will complain loudly. Fix storage class before blaming Docker.
Task 16: Confirm which mount type is used by a running container (sanity check after cutover)
cr0x@server:~$ docker inspect api01 --format '{{range .Mounts}}{{.Type}} {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'
bind /srv/api/config -> /app/config
Meaning: post-migration, you confirm the container uses the intended mount type and path.
Decision: if you expected a bind mount and see a volume, you’ve accidentally changed semantics. Stop and fix before you accumulate divergent state.
Fast diagnosis playbook
You’ve migrated. Something is slow, broken, or “works on one host but not the other.” Start here. This sequence is designed to find the bottleneck quickly,
not to satisfy your curiosity about every layer.
1) Confirm you’re mounting what you think you’re mounting
- Run docker inspect <container> and check .Mounts for Type, Source, and Destination. A surprising number of outages are “it mounted an empty directory.”
- For bind mounts, run findmnt -T /host/path to ensure it’s backed by the intended filesystem/device.
2) Check permissions and security labels
- Compare stat output on the source vs destination. UID/GID drift is common after rebuilds.
- If SELinux is enforcing, check contexts with ls -Z. If AppArmor is in play, check profile confinement (often visible in container logs).
3) Check storage health before application tuning
- Look at device latency (iostat -x) and filesystem saturation (df -h, df -i).
- If it’s a database and you’re seeing timeouts, assume fsync latency until proven otherwise.
4) Only then inspect Docker-specific layers
- Confirm Docker root dir (docker info), storage driver, and whether rootless mode is used.
- If you copied Docker’s data directory across hosts, stop and verify engine version and storage driver compatibility before going deeper.
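A minimal sketch that bundles steps 1-3 for one container (the script name is hypothetical; adjust to your layout):

#!/usr/bin/env bash
# mount-sanity.sh <container> — illustrative helper, not a standard tool
set -euo pipefail
c="$1"
echo "== mounts =="
docker inspect "$c" --format '{{range .Mounts}}{{.Type}} {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'
echo "== backing filesystems for bind mounts =="
docker inspect "$c" --format '{{range .Mounts}}{{if eq .Type "bind"}}{{.Source}}{{"\n"}}{{end}}{{end}}' |
  while read -r src; do [ -z "$src" ] || findmnt -T "$src"; done
echo "== capacity and inodes where Docker lives =="
root=$(docker info --format '{{.DockerRootDir}}')
df -h "$root"; df -i "$root"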
A paraphrased idea often attributed to Werner Vogels: “Everything fails, all the time; design and operate as if that’s true.”
Three corporate mini-stories (how people actually get hurt)
Mini-story 1: The incident caused by a wrong assumption
A mid-sized company ran a billing service with Postgres in Docker. The Compose file used a named volume: billing_pgdata.
The team’s mental model was: “Named volume equals persistent, persistent equals survives host replacement.”
Nobody wrote down where the bytes lived. Nobody needed to. Until they did.
A routine OS upgrade turned into “let’s rebuild the instance.” The infra team terminated the old VM after the new one passed health checks.
Containers came up immediately. Postgres came up immediately too—on a fresh, empty volume created on the new host. The application also came up and began
happily creating new rows in a brand-new database that looked perfectly healthy. Alerting stayed green because uptime was green.
The first human noticed when finance asked why yesterday’s invoices were missing. By then, the old host’s local volume data was gone.
“But it was a volume” turned out to be a statement about Docker’s API, not about the infrastructure lifecycle.
The fix wasn’t heroic. They rebuilt from backups (which existed, thankfully) and wrote a migration runbook that treated named volumes
as exportable artifacts. They also added a canary check: the app refuses to start if a schema fingerprint isn’t present.
The outage was a lesson in assumptions. The data didn’t disappear. Their understanding did.
Mini-story 2: The optimization that backfired
Another org wanted “cleaner Compose files” and fewer host-specific paths. They replaced bind mounts with named volumes across the board,
including for a write-heavy service that used SQLite (yes, in production; yes, it happens).
Their rationale: named volumes reduce path mistakes and “Docker manages it better.”
Performance tanked after the move—not because named volumes are inherently slow, but because of where they ended up.
Docker’s root directory lived on a smaller partition backed by networked storage optimized for throughput, not latency.
SQLite’s fsync pattern punished the storage. The app didn’t crash; it just got slower in ways that looked like CPU starvation and
“maybe we need more pods,” even though there were no pods involved.
The team chased ghosts: they tuned PRAGMAs, added caches, and considered changing the ORM. The real issue was dull:
they moved stateful IO from a fast local disk (where the bind mount lived) to a high-latency storage tier (where /var/lib/docker lived).
Same bytes. Different physics.
The rollback was equally dull: put the database back on a bind mount pointing to the fast disk, keep named volumes for less sensitive data,
and treat Docker root placement as a first-class capacity/performance decision. The optimization wasn’t wrong in concept.
It was wrong in context. Context is where outages live.
Mini-story 3: The boring but correct practice that saved the day
A global enterprise ran a bunch of internal services on Docker hosts that were frequently replaced by an automation pipeline.
Nothing fancy. The boring part: every host had a dedicated mount point layout for stateful data:
/srv/state/<service>, with one filesystem per major stateful component. Each filesystem had snapshots and backup policies.
Their containers used bind mounts exclusively for state. The Compose files weren’t “portable” in the sense that you couldn’t just run them
on your laptop without creating directories. But the production environment was consistent, and consistency is basically uptime in a trench coat.
When a host suffered filesystem corruption (it happens), the response was almost boring:
cordon the host, restore the affected filesystem from last snapshot, attach it to a replacement host, restart containers.
The incident report was short because the system design already assumed replacement.
Their secret weapon wasn’t a tool. It was a practice: every quarter, they rehearsed restoring one service’s state onto a new host.
Not because they expected to. Because they expected not to, and they didn’t trust that feeling.
Joke #2: The only thing more permanent than a Docker container is the temporary workaround you used to migrate its data.
Common mistakes: symptom → root cause → fix
1) “It started fine, but all the data is gone”
Symptom: service boots with empty database or default config after migration.
Root cause: bind mount path didn’t exist or wasn’t mounted; Docker created an empty directory and mounted it.
With named volumes, you created a new volume instead of importing the old data.
Fix: verify host mounts with findmnt; add startup checks (schema fingerprint, expected files).
Pin volume names in Compose; export/import volumes explicitly.
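A hedged sketch of such a startup check, as an entrypoint wrapper around the stock postgres image (the PG_VERSION marker is real Postgres behavior; the wrapper itself is illustrative):

#!/bin/sh
# guard-entrypoint.sh — refuse to start on a fresh, empty data directory
set -e
if [ ! -f /var/lib/postgresql/data/PG_VERSION ]; then
  echo "FATAL: no PG_VERSION in data dir; refusing to initialize fresh state" >&2
  exit 1
fi
exec docker-entrypoint.sh postgres "$@"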
2) “Permission denied” inside the container after moving data
Symptom: application logs show failures to read/write mounted paths; Postgres complains about data directory ownership.
Root cause: UID/GID mismatch (especially across distros), user namespaces/rootless mode differences, or missing ACLs/xattrs.
Fix: migrate with rsync -aHAX --numeric-ids; standardize UIDs/GIDs; consider running containers with explicit user:.
For SELinux, set correct contexts or use appropriate mount labeling options.
3) “Works on Ubuntu host, fails on RHEL host”
Symptom: bind-mounted paths unreadable; containers can’t access files that exist.
Root cause: SELinux enforcing with incorrect labels on the bind-mounted directory.
Fix: apply correct SELinux labels to the directory or use appropriate container mount options and policies.
Validate with ls -Z before declaring Docker “broken.”
4) “Migration succeeded, but performance is terrible”
Symptom: increased latency, timeouts, slow queries, high IOwait.
Root cause: volume data moved to slower storage tier (often because Docker root dir is on a different disk than before),
or filesystem options differ (barriers, journaling mode, atime).
Fix: confirm device and filesystem with findmnt, iostat -x, df.
Put stateful bind mounts on the intended disk. If using named volumes, relocate Docker root to a suitable filesystem or use an appropriate volume driver.
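Relocating the Docker root is a daemon.json setting, not a symlink trick. A sketch, assuming /srv/docker is your intended production filesystem: stop the daemon, move the data with metadata preserved (rsync -aHAX), then set

{
  "data-root": "/srv/docker"
}

in /etc/docker/daemon.json and start the daemon again.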
5) “We copied /var/lib/docker and now Docker won’t start / containers are corrupted”
Symptom: Docker daemon errors, missing layers, overlay filesystem issues.
Root cause: engine version mismatch, storage driver mismatch, filesystem incompatibility, or partial copy without xattrs.
Fix: avoid migrating the entire Docker data root as a strategy unless you’ve tested it for that exact environment.
Export/import named volumes and rebuild images from registries. For emergency recovery, match versions and copy with metadata preserved.
6) “Backups exist, but restore doesn’t work”
Symptom: backups restore files, but service won’t boot or data is inconsistent.
Root cause: you backed up a live database directory without application-consistent snapshots; or you restored without permissions/xattrs.
Fix: use database-native backups (or coordinated snapshots). For filesystem-level backups, quiesce the service or use snapshot features properly.
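For the Postgres examples used throughout, an application-consistent dump looks like this (container, user, and database names are assumptions carried over from earlier tasks):

cr0x@server:~$ docker exec pg01 pg_dump -U postgres -Fc billing > /backup/billing.dump
cr0x@server:~$ docker exec -i pg01 pg_restore --list < /backup/billing.dump | head -3

The second command verifies the archive is readable without actually restoring it.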
Checklists / step-by-step plan
Plan A: If you want migrations to be boring (recommended)
- Inventory mounts: run docker ps and docker inspect to list stateful paths and volumes.
- Classify data: database, user uploads, cache, build artifacts, config, secrets. Only some of this deserves pain.
- Choose storage ownership:
  - Critical state: bind mount to /srv/state/<service> on a dedicated filesystem.
  - Replaceable state: named volumes are acceptable.
- Standardize host layout: same mount points across hosts, ideally provisioned by automation.
- Standardize identity: ensure service accounts and numeric IDs won’t drift across hosts.
- Back up the right thing: bind mount filesystems via your normal backup system; named volumes via periodic export jobs (tar to backup storage).
- Rehearse restore: once per quarter, pick one service and restore it to a fresh host. Measure time-to-usable.
- Add guardrails: app refuses to start if expected data signature is missing; alert on “fresh init” events.
Plan B: If you already used named volumes everywhere
- Enumerate named volumes: docker volume ls and map them to containers.
- Export volumes: tar each named volume to a backup path with a consistent naming scheme (scripted sketch after this list).
- Store with integrity: compute checksums of tarballs and store alongside them.
- Import on destination: create volumes first, then extract tarballs into them.
- Verify ownership and content: list key directories inside the volume via a temporary container.
- Cut over: start services, run smoke tests, validate data presence (not just health endpoints).
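A scripted sketch of the export side (backup path and naming scheme are assumptions; Task 10 shows the import direction):

#!/usr/bin/env bash
# export-volumes.sh — tar every named volume plus a checksum manifest (illustrative)
set -euo pipefail
dest=/backup/volumes/$(date +%Y%m%d)
mkdir -p "$dest"
for v in $(docker volume ls -q); do
  docker run --rm -v "$v":/data -v "$dest":/backup alpine:3.20 \
    tar -cpf "/backup/${v}.tar" -C /data .
done
( cd "$dest" && sha256sum *.tar > SHA256SUMS )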
Plan C: If you need “fastest possible” host replacement
This is where you pay for prior decisions. Fast replacements happen when state lives on storage that can be detached/attached or replicated
independently of the compute host.
- Put state on a detachable unit: separate disk, dataset, or network volume.
- Bind mount that unit: containers mount /srv/state paths that point to that unit.
- Automate attachment: replacement host boots, attaches storage, mounts it, then starts containers (sketch below).
- Test failure: simulate host loss and measure time to recover. If you don’t measure it, you don’t have it.
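A hedged sketch of the attachment step (device, paths, and stack location are placeholders; real implementations usually live in cloud-init or a systemd unit):

#!/usr/bin/env bash
# attach-state.sh — run at boot on a replacement host (illustrative)
set -euo pipefail
# Attachment itself is cloud/SAN specific; assume the unit shows up as /dev/sdb
mount /dev/sdb /srv/state
# Refuse to start containers on an empty or wrong mount
[ -d /srv/state/db ] || { echo "state filesystem missing expected layout" >&2; exit 1; }
docker compose -f /etc/stacks/billing/compose.yaml up -d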
FAQ
1) Are named volumes “safer” than bind mounts?
They’re safer against certain human mistakes (wrong path, accidentally mounting /), but they’re not inherently safer for durability.
Durability comes from the storage underneath and your backup/restore discipline.
2) Can I just copy /var/lib/docker/volumes to migrate named volumes?
Sometimes. Enough people have done it to make it sound normal. But it’s coupled to engine version, storage driver, and filesystem details.
If you need a repeatable migration, export/import with tar (or use a driver that provides portability).
3) Why do bind mounts “survive migrations” better if they’re tied to host paths?
Because the path is your contract, and you can enforce it with automation. A bind mount is just normal data on a normal filesystem.
That plays nicely with mature backup/replication tooling. Named volumes are normal files too, but hidden behind Docker’s management and placement.
4) What about Docker Desktop on macOS/Windows?
The engine runs inside a VM. Named volumes live in that VM. Bind mounts cross the VM boundary and have their own performance and semantics quirks.
For migrations, treat Desktop volumes as “local to that machine” unless you deliberately export them.
5) Which is faster: bind mounts or named volumes?
On Linux with local storage, performance is usually similar because both end up as filesystem IO. The real differences are:
where the data physically resides, mount options, and security layers. Measure on your hardware; don’t rely on folklore.
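To measure rather than rely on folklore, run the same fsync-heavy fio job against both candidate locations (paths and sizes are illustrative):

cr0x@server:~$ fio --name=fsync-test --directory=/srv/state/test --size=256m --bs=4k --rw=randwrite --fdatasync=1 --runtime=30 --time_based

Compare the fsync/fdatasync latency percentiles fio reports; that is the number your database actually feels.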
6) Do named volumes help with permissions?
They can, because Docker creates and owns the directory under its root and often keeps it consistent. But permissions still matter inside the volume.
If your container runs as non-root, you still need correct ownership.
7) What’s the best practice for databases in Docker?
Put the database on storage you can back up and restore reliably. In many orgs that means a bind mount to a dedicated filesystem or managed block device,
plus application-consistent backup tooling. Named volumes are acceptable if your export/import and backups are equally disciplined.
8) How do I prevent “empty directory mounted” disasters with bind mounts?
Create the directory and mount it via automation. Add a pre-start check: if the directory doesn’t contain an expected marker file, fail fast.
And verify host mounts with findmnt during deployment.
9) If I’m using Docker Compose, should I prefer named volumes for portability?
Prefer portability only if it doesn’t sabotage recoverability. For dev: named volumes are great. For prod state: use bind mounts to stable host paths,
or accept named volumes but treat export/import as part of your deployment lifecycle.
10) What about volume drivers (NFS, cloud, etc.)?
They can solve migration by putting data on shared or detachable storage, but they add a new failure domain: the driver and the network/storage service.
Use them when you need shared state or rapid re-homing, and test their failure behavior under load.
Practical next steps
- Run an audit today: list all mounts and classify which ones are business state. If you can’t name it, you can’t protect it.
- Pick one migration path and standardize it: bind mounts on a consistent /srv/state layout, or named volumes with scripted exports and checksum verification.
- Move Docker root intentionally: if named volumes matter to you, the filesystem under DockerRootDir is production storage. Treat it accordingly.
- Write the “restore under pressure” runbook: include exact commands to export/import, fix ownership, validate data presence, and verify performance.
- Rehearse: do a full restore to a fresh host. Time it. Fix what hurts. Repeat until it’s boring.
Bind mounts aren’t magical. Named volumes aren’t bad. The difference is whether your migration plan depends on Docker internals or on filesystem fundamentals.
If you want migrations to be routine, build on fundamentals.