Docker: Clean up safely — reclaim space without deleting what you need

It always happens at the worst time: your node is healthy, your deploy is green, and then the disk hits 100%.
Suddenly Docker won’t pull images, your runtime can’t write logs, and everything looks “mysteriously flaky.”
The temptation is to run a heroic docker system prune -a --volumes and hope for the best.

Hope is not a cleanup strategy. This is the practical, production-safe way to reclaim Docker space while keeping the things you actually need: the right images, the right volumes, and your ability to explain what happened later.

A mental model of where Docker stores space (and why it surprises you)

Docker disk usage isn’t “one thing.” It’s several buckets that grow for different reasons, and “cleaning Docker”
means choosing which buckets you’re willing to empty.

The buckets that matter

  • Images: layers pulled from registries, plus your locally built images.
  • Containers: the writable layer (changes on top of the image), plus container metadata.
  • Volumes: durable data. Databases love them. So do accidents.
  • Build cache: intermediate layers and cache metadata. In CI this becomes a compost pile.
  • Logs: container logs (usually JSON files) and application logs you write inside containers.
  • “Other”: networks, plugins, swarm objects, and leftovers from older storage drivers.

Most outages come from confusing these buckets. People treat volumes like cache. They are not cache.
People treat build cache like “it will self-manage.” It will not. People ignore logs until they become a storage
denial-of-service.

Why overlay2 makes disk usage feel haunted

On Linux, the most common Docker storage driver is overlay2. It stores image layers and container
writable layers as directories under Docker’s data root (often /var/lib/docker).
That directory can become huge, and it’s not immediately obvious which container “owns” which bits of it.

OverlayFS is efficient, but it is also very literal: every change a container makes on its filesystem is a new
file in its writable layer. If your app writes 50GB of “temporary” data to /tmp inside the container,
that is not temporary. It’s 50GB on the host, and you will meet it again when the disk fills.

Rule of thumb: if you want data to survive container removal, put it in a volume. If you want data to be easy to
delete, put it somewhere you can delete deterministically—ideally a volume you can destroy intentionally, or a
path on the host with a retention policy.
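
A minimal sketch of that rule, with made-up names (myapp_data and /var/lib/myapp are illustrative, not from the tasks below): durable state goes to a named volume, disposable scratch space goes to tmpfs so it never lands in the writable layer.

cr0x@server:~$ docker volume create myapp_data
cr0x@server:~$ docker run -d --name myapp \
>   --mount type=volume,source=myapp_data,target=/var/lib/myapp \
>   --tmpfs /tmp:rw,size=1g \
>   myapp/api:prod

Removing that container later costs you nothing durable: the volume survives on its own, and the tmpfs contents live in memory rather than in the writable layer.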

One quote is worth keeping on your wall. The line associated with Apollo flight director Gene Kranz is famous in reliability circles: “Failure is not an option.”
It’s also a reminder that cleanup is part of operations, not a hobby you do after an outage.

Interesting facts and small history (because it explains today’s mess)

  1. Docker’s early storage driver defaults changed over time. Older systems used AUFS or Device Mapper; many modern distros settled on overlay2 for better performance and simplicity.
  2. “Dangling images” became a thing because layers are shared. Docker keeps layers that might still be referenced. It won’t delete them until it is sure they’re unused.
  3. BuildKit changed what “build cache” means. With BuildKit enabled, cache can live in different forms and can be exported/imported, which is great—until you never prune it.
  4. Container logs default to a JSON file logger on many installations. That is convenient, but it happily writes forever unless you configure rotation.
  5. Disk-full failures in container hosts are often secondary failures. The first failure is “some process wrote too much”; Docker just made it easier to hide.
  6. “Prune” commands were added because manual cleanup was too risky. Docker introduced high-level prune operations to avoid direct manipulation of /var/lib/docker (which is still a bad idea).
  7. Layer reuse is both the optimization and the trap. Rebuilding images with many unique layers makes pull/build fast sometimes, and disk growth fast always.
  8. CI runners popularized “pet hosts” again by accident. A “stateless” runner that never gets reimaged becomes a museum of cached layers.

Joke #1: Docker disk usage is like a junk drawer—everything in there was “temporary” at the time.

Fast diagnosis playbook: first/second/third checks

When the disk is screaming, you don’t have time for a philosophical debate about layers. You need a fast sequence
that tells you where the bytes are and what you can safely delete.

First: confirm the host is actually out of disk (and where)

  • Check filesystem usage (df) and inode usage (df -i).
  • Identify which mount is full. If /var is separate, Docker might be fine and your logs aren’t.

Second: measure Docker’s buckets before you delete anything

  • docker system df to get the high-level breakdown.
  • Look for the biggest contributor: images, containers, volumes, build cache.

Third: identify “unexpected writers”

  • Huge container logs (JSON files) and app logs inside container writable layers.
  • Volumes ballooning because a database or message queue retained data.
  • Build cache growth on CI runners.

If you’re in a true emergency (disk 100%), prioritize restoring write capacity: rotate/truncate runaway logs,
stop the top offenders, or move data off-host. Then do safe cleanup with auditability.
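
If you want that first/second/third pass as a single read-only sweep, a rough sketch (nothing here deletes anything):

cr0x@server:~$ df -h; df -i
cr0x@server:~$ docker system df
cr0x@server:~$ sudo du -xhd1 /var/lib/docker 2>/dev/null | sort -h
cr0x@server:~$ sudo find /var/lib/docker/containers -name '*-json.log' -size +1G -ls 2>/dev/null

The deleting happens in the tasks below, once you know which bucket is actually full.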

Practical tasks (commands + output meaning + decision)

These tasks assume you have shell access to the Docker host. Run them in order when you can.
If you’re in a “disk is full, everything is on fire” moment, jump to the tasks marked as emergency-friendly.

Task 1: Verify which filesystem is full

cr0x@server:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p2   80G   78G  1.2G  99% /
tmpfs            16G   92M   16G   1% /run
/dev/nvme1n1p1  200G   40G  160G  20% /data

What it means: Root filesystem is nearly full. That’s typically where /var/lib/docker lives.
Decision: Focus on Docker and system logs on /, not the roomy /data mount.

Task 2: Check inode exhaustion (sneaky, common)

cr0x@server:~$ df -i
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
/dev/nvme0n1p2  5242880 5110000 132880   98% /

What it means: You’re almost out of inodes. This can happen with millions of small files (build cache, node_modules, etc.).
Decision: “Free disk” might not fix it; you need to delete lots of small files, usually cache or build artifacts.

Task 3: Find Docker’s data root

cr0x@server:~$ docker info --format '{{ .DockerRootDir }}'
/var/lib/docker

What it means: This is the directory whose growth you’re managing.
Decision: If your root filesystem is small, plan a migration of Docker data root later. For now, diagnose usage inside it.
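
If the root filesystem really is too small, the longer-term fix is moving the data root to a bigger mount. A rough sketch, assuming /data/docker as the new location and a maintenance window (keep any existing daemon.json keys, such as log options, when you edit it):

cr0x@server:~$ sudo systemctl stop docker
cr0x@server:~$ sudo rsync -aHAX /var/lib/docker/ /data/docker/
cr0x@server:~$ # add "data-root": "/data/docker" to /etc/docker/daemon.json
cr0x@server:~$ sudo systemctl start docker
cr0x@server:~$ docker info --format '{{ .DockerRootDir }}'

Confirm containers, images, and volumes are all visible before you delete the old directory.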

Task 4: Get Docker’s own disk accounting

cr0x@server:~$ docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          52        12        38.6GB    21.4GB (55%)
Containers      19        7         2.4GB     1.1GB (45%)
Local Volumes   34        15        96.2GB    18.0GB (18%)
Build Cache     184       0         22.8GB    22.8GB (100%)

What it means: Your volumes are the biggest chunk (96GB), but build cache is fully reclaimable (22GB).
Decision: Start with build cache prune (safe-ish), then unused images/containers. Treat volumes carefully.

Task 5: List the biggest images (spot the culprits)

cr0x@server:~$ docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.ID}}\t{{.Size}}' | head
REPOSITORY            TAG        IMAGE ID       SIZE
myapp/api             prod       9c1a2d4e0b2f   1.21GB
myapp/api             staging    62f13c9d1a11   1.19GB
postgres              15         2b6d1f2aa0c1   413MB
node                  20         1c2d0a41d2aa   1.11GB

What it means: You have multiple large tags for the same repo. Common when deployments tag every commit.
Decision: Keep the ones currently running; remove old tags if unused. Later: implement retention in your pipeline.

Task 6: Identify containers still using those images

cr0x@server:~$ docker ps --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'
CONTAINER ID   IMAGE                 STATUS          NAMES
c8b2d9f2a4aa   myapp/api:prod        Up 6 days       api-prod-1
e12a9a0110bb   postgres:15           Up 12 days      pg-main

What it means: Only myapp/api:prod is actively used; staging might be dead weight.
Decision: Don’t remove images used by running containers. Remove unused ones after confirming no other hosts depend on them locally.

Task 7: Show stopped containers and their sizes

cr0x@server:~$ docker ps -a --size --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Size}}\t{{.Names}}' | head
CONTAINER ID   IMAGE              STATUS                      SIZE                NAMES
a1b2c3d4e5f6   myapp/api:staging  Exited (137) 3 days ago     1.2GB (virtual 2GB) api-staging-1
f1e2d3c4b5a6   node:20            Exited (0) 9 days ago       600MB (virtual 1GB) build-job-77

What it means: Exited containers can keep large writable layers.
Decision: Remove old exited containers if you don’t need their filesystem for forensics.

Task 8: Remove stopped containers (low risk)

cr0x@server:~$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
a1b2c3d4e5f6
f1e2d3c4b5a6

Total reclaimed space: 1.8GB

What it means: You reclaimed writable layers for containers that weren’t running.
Decision: Good first cleanup. If space is still tight, move to image/build cache pruning.

Task 9: Prune build cache (usually safe, sometimes expensive)

cr0x@server:~$ docker builder prune
WARNING! This will remove all dangling build cache.
Are you sure you want to continue? [y/N] y
Deleted build cache objects:
l8p4m2n7k0q1
u2v3w4x5y6z7

Total reclaimed space: 22.8GB

What it means: You deleted unused build cache. Builds may become slower until cache rebuilds.
Decision: On CI runners, this is often the right trade. On build servers, consider scheduled pruning with thresholds.

Task 10: Prune unused images older than a time threshold

cr0x@server:~$ docker image prune --all --force --filter 'until=240h'
Deleted Images:
untagged: myapp/api@sha256:1a2b...
deleted: sha256:9c1a2d4e0b2f...

Total reclaimed space: 12.3GB

What it means: Images not used by any container and older than 10 days (240h) were deleted.
Decision: Use time filters in production. “Delete unused and old” is fine; “delete everything not running right now” is how you ruin your next deploy.

Task 11: Inspect volume usage and identify risky candidates

cr0x@server:~$ docker volume ls
DRIVER    VOLUME NAME
local     pgdata_main
local     redisdata
local     myapp_cache
local     old_test_volume_42

What it means: Volumes exist independently of containers. Some are clearly production data.
Decision: Never prune volumes blindly. Identify which containers mount them, and what they store.

Task 12: Map volumes to containers (avoid deleting live data)

cr0x@server:~$ docker ps -a --format '{{.ID}} {{.Names}}' | while read id name; do
>   echo "== $name ($id) =="
>   docker inspect --format '{{range .Mounts}}{{.Name}} -> {{.Destination}} ({{.Type}}){{"\n"}}{{end}}' "$id"
> done
== api-prod-1 (c8b2d9f2a4aa) ==
myapp_cache -> /var/cache/myapp (volume)

== pg-main (e12a9a0110bb) ==
pgdata_main -> /var/lib/postgresql/data (volume)

What it means: You now know which volumes are mounted by which containers.
Decision: Only consider deleting volumes that are (a) unmounted and (b) confirmed disposable.
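
A quick way to build that candidate list (read-only): the dangling filter shows volumes no container currently references, and inspect shows when each one was created and what labels it carries.

cr0x@server:~$ docker volume ls --filter dangling=true
cr0x@server:~$ docker volume inspect --format '{{ .Name }}: created {{ .CreatedAt }}, labels {{ .Labels }}' old_test_volume_42

Treat the output as a shortlist to investigate, not a delete list: “unreferenced right now” is not the same as “disposable.”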

Task 13: Measure actual volume sizes on disk

cr0x@server:~$ sudo du -sh /var/lib/docker/volumes/*/_data 2>/dev/null | sort -h | tail
2.1G  /var/lib/docker/volumes/myapp_cache/_data
18G   /var/lib/docker/volumes/redisdata/_data
71G   /var/lib/docker/volumes/pgdata_main/_data

What it means: Postgres is the heavyweight. That may be expected, or it may be retention gone wild.
Decision: Don’t “clean Docker” to fix database growth. Fix the database retention and plan storage.

Task 14 (emergency-friendly): Find oversized container JSON logs

cr0x@server:~$ sudo find /var/lib/docker/containers -name '*-json.log' -printf '%s %p\n' 2>/dev/null | sort -n | tail
21474836480 /var/lib/docker/containers/c8b2d9f2a4aa.../c8b2d9f2a4aa...-json.log
5368709120  /var/lib/docker/containers/e12a9a0110bb.../e12a9a0110bb...-json.log

What it means: One container log is 20GB. That’s not “observability,” it’s a hostage situation.
Decision: Rotate logs properly. In a pinch, you can truncate the file (see next task), but treat that as a break-glass move.

Task 15 (break-glass): Truncate a runaway JSON log without restarting the container

cr0x@server:~$ sudo truncate -s 0 /var/lib/docker/containers/c8b2d9f2a4aa.../c8b2d9f2a4aa...-json.log

What it means: You reclaimed space immediately. Docker keeps writing to the same file handle.
Decision: Do this only when disk pressure is critical. Then fix log rotation so you never do it again.

Task 16: Configure Docker log rotation (the real fix)

cr0x@server:~$ sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
EOF
cr0x@server:~$ sudo systemctl restart docker

What it means: New containers get capped log growth. Existing containers keep their old log settings until they are recreated; a plain restart does not apply new log options.
Decision: Adopt this as baseline everywhere. If you want centralized logs, ship them—but still cap local growth.
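
If rolling out the daemon config fleet-wide will take a while, the same caps can be applied per container at creation time (the container name here is hypothetical; the values mirror the daemon.json above):

cr0x@server:~$ docker run -d --name api-prod-2 \
>   --log-driver json-file \
>   --log-opt max-size=50m --log-opt max-file=5 \
>   myapp/api:prod

Per-container options override the daemon defaults, which helps when one chatty service needs a different cap than the fleet baseline.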

Task 17: Identify growth inside container writable layers

cr0x@server:~$ docker exec c8b2d9f2a4aa sh -lc 'du -sh /tmp /var/tmp /var/log 2>/dev/null | sort -h'
120M  /var/log
5.6G  /tmp
32M   /var/tmp

What it means: The app is using container filesystem for bulky temp data.
Decision: Move that path to a volume or tmpfs, or fix the app cleanup. Otherwise it will grow again after every “cleanup.”

Task 18: Use a time-filtered system prune (careful, but practical)

cr0x@server:~$ docker system prune --force --filter 'until=168h'
Deleted Networks:
old_ci_net

Deleted Images:
deleted: sha256:62f13c9d1a11...

Total reclaimed space: 6.4GB

What it means: You cleaned unused objects older than 7 days. Volumes are not touched unless you add --volumes.
Decision: This is the “safe default” prune for many hosts. It reduces risk while still controlling growth.

Task 19: Verify space recovery and confirm the host is stable

cr0x@server:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p2   80G   52G   25G  68% /

What it means: You regained healthy headroom.
Decision: Don’t stop here. Put guardrails in place: log rotation, scheduled pruning, and build cache policy.

Three corporate mini-stories from the disk-full trenches

1) The incident caused by a wrong assumption: “Volumes are just cache, right?”

A mid-sized company ran a small fleet of Docker hosts for internal tools. Nothing fancy: a Postgres container,
a web app, a background worker. It wasn’t Kubernetes. It was “simple.” That word has a body count.

A host hit 95% disk. An engineer jumped in, saw docker system df, and noticed volumes were the biggest chunk.
They had recently cleaned build cache and images, and the volume number stayed large. So they did what the internet
always suggests when you’re tired: docker system prune -a --volumes.

The command succeeded quickly. The disk freed up dramatically. The dashboards looked relieved. Then the on-call phone
rang again, louder, because the database came back empty. The Postgres container restarted with a brand-new data
directory. The web app started creating “first user” admin accounts. This was not a new feature.

The root cause wasn’t Docker. The root cause was the assumption that volumes were disposable. In that environment,
volumes were the only durable storage. The fix was procedural and technical: label volumes by purpose, map them to
services, and protect production volumes from prune operations. They also implemented host backups because relying
on “we won’t run the bad command” is not a strategy.

The lasting lesson: the most dangerous cleanup command is the one you run when you’re stressed and think you know
what it does.

2) The optimization that backfired: “Let’s cache every build forever”

Another team ran a monorepo with heavy Docker builds. They enabled BuildKit and pushed hard on performance:
cache mounts, multi-stage builds, and aggressive layer reuse. Their CI got faster. The exec summary had happier
bars. Everyone congratulated the pipeline for being “optimized.”

Six weeks later, CI started failing in bursts. Not consistent failures—those are too kind. Random image pulls failed.
Builds failed mid-way. Sometimes apt couldn’t write temporary files. Sometimes Docker couldn’t create layers.
The failures moved around between runners. Engineers blamed the registry. Then the network. Then cosmic rays.

The real issue: the runners were “cattle,” but nobody ever reimaged them. Build cache grew without a ceiling.
Docker’s data root filled until the filesystem hit its reserved blocks and everything degraded. The “optimization”
wasn’t wrong; it was incomplete. Performance work that ignores capacity becomes reliability work, whether you like it or not.

The backfired part was cultural: the team believed build cache was purely beneficial and couldn’t hurt production.
They treated disk as infinite because it was invisible. When the disk filled, their “fast builds” stopped entirely.

The fix was boring and effective: set a pruning schedule with thresholds, add monitoring for /var/lib/docker,
and periodically reimage runners. They kept most performance wins, and the failures stopped being mysterious.

3) The boring but correct practice that saved the day: “Measure, then prune, then verify”

A payment-adjacent service ran on a handful of Docker hosts behind a load balancer. The team didn’t have a lot of
glamour, but they had discipline. Every host had a small runbook: check disk, check Docker buckets, check logs,
and only then delete. They also had log rotation configured by default.

One weekend, traffic spiked due to a partner issue. The service didn’t fall over from CPU; it fell over from logs.
The application started logging an error per request. That kind of bug is not subtle: it converts user traffic into
disk usage at scale.

The on-call engineer saw disk trending upward. Before it hit 100%, they capped the bleeding by deploying a config
change that reduced log verbosity. Because Docker logs rotated, the host didn’t die. They then used docker system df
and pruned build cache and old images in a controlled way, with time filters, across the fleet.

There was no heroic midnight command. No hand-edited files inside /var/lib/docker. Just a measured response
that kept headroom and restored normal operations. Boring, correct practice saved the day—again.

A safe cleanup strategy that works in production

The right cleanup plan depends on what kind of host you have. A CI runner is not the same as a database host.
A dev laptop is not the same as a production node. The trick is to match retention to the job.

Define what must never be deleted automatically

In most production environments, these are “protected classes”:

  • Named volumes holding data (databases, queues, uploads).
  • Images required for rollback (at least one previous version).
  • Evidence during an incident (stopped containers you plan to inspect).

If you can’t list these for your environment, you’re not ready for aggressive pruning. Start with a time-filtered prune that does not touch volumes.

Use time filters to reduce surprise

The single best trick for safer cleanup is --filter until=.... It creates a buffer: “don’t delete anything created recently.”
That buffer covers ongoing deploys, rollback windows, and the fact that humans forget what they launched last Tuesday.

Pick a cleanup posture per host role

CI runner posture

  • Prune build cache aggressively (daily or on threshold); see the cron sketch after this list.
  • Prune images older than a short window (a few days), because they’re reproducible.
  • Prefer reimaging runners periodically; it’s the cleanest garbage collector.
  • Keep volumes minimal; treat any volume as suspect.
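
For the CI-runner posture, the schedule can be as simple as a cron entry. A sketch, assuming a root-owned cron.d file and a 72-hour window (tune both to your build cadence):

# /etc/cron.d/docker-prune-ci
# Nightly at 03:15: drop build cache and unused images older than 3 days.
15 3 * * * root docker builder prune --force --filter until=72h && docker image prune --all --force --filter until=72h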

Production application host posture

  • Enable log rotation in Docker daemon config.
  • Use time-filtered system prune without volumes, on a schedule (see the timer sketch after this list).
  • Keep rollback images for a defined window.
  • Alert on disk usage and on Docker root growth.
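
For the production posture, a systemd timer keeps the schedule visible and auditable. A sketch with illustrative unit names and a weekly cadence (adjust the docker path and the window for your hosts):

# /etc/systemd/system/docker-prune.service
[Unit]
Description=Time-filtered Docker prune (no volumes)

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune --force --filter until=168h

# /etc/systemd/system/docker-prune.timer
[Unit]
Description=Weekly time-filtered Docker prune

[Timer]
OnCalendar=Sun 04:00
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now docker-prune.timer and review past runs with systemctl list-timers.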

Stateful host posture (databases on Docker)

  • Do not prune volumes automatically. Ever. If you must, use explicit allowlists (see the label sketch after this list).
  • Capacity plan the volume growth. “Docker cleanup” is not a database retention policy.
  • Use backups and tested restores. Cleanup mistakes here are career-limiting.
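
If you ever do build that allowlist, labels are the mechanism Docker’s prune filters understand. A sketch (the keep=true label and the volume name are illustrative conventions, not from the tasks above):

cr0x@server:~$ docker volume create --label keep=true important_data
cr0x@server:~$ docker volume prune --force --filter 'label!=keep'

The prune skips anything carrying the label. It narrows the blast radius of a bad day; it is still not a substitute for backups.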

Joke #2: If you run databases in Docker and your backup plan is “we’ll be careful,” you’re not careful—you’re optimistic.

Common mistakes: symptom → root cause → fix

1) Symptom: Disk is full, but docker system df doesn’t show enough usage

Root cause: The space is in non-Docker paths (system logs, core dumps), or Docker’s data root isn’t where you think, or the filesystem is out of inodes.

Fix: Check df -h and df -i. Find top directories with sudo du -xhd1 /var (and other mounts). Confirm Docker root with docker info.

2) Symptom: /var/lib/docker/containers/*-json.log files are huge

Root cause: No log rotation configured for Docker’s json-file logger, or app is logging too much.

Fix: Configure daemon.json log options; reduce app log verbosity; ship logs off-host if needed. Truncate only as break-glass.

3) Symptom: Cleanup doesn’t free space immediately

Root cause: Deleted files still held open by a process (common with logs), or filesystem reserved blocks, or thin-provisioning quirks on certain drivers.

Fix: Identify open deleted files using lsof (not shown in tasks, but it’s the move). Restart the process if safe. Verify actual freed space with df, not hope.
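
The lsof move, concretely (read-only: +L1 lists open files whose link count has dropped to zero, i.e. deleted but still held by a process):

cr0x@server:~$ sudo lsof +L1

The SIZE/OFF column shows how much space each deleted-but-open file still pins; COMMAND and PID tell you which process to restart, or signal to reopen its logs, once that is safe.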

4) Symptom: You pruned images and now a deploy fails to pull/build

Root cause: You removed local images relied upon for fast rollouts, or your registry/network path is slower/unreliable, or you pruned too aggressively during a deploy window.

Fix: Use time filters; coordinate prune schedules outside deploy windows; keep a rollback window of images; ensure registry access is robust.

5) Symptom: Disk grows steadily even though you prune weekly

Root cause: Growth is in volumes (state) or inside container writable layers (apps writing temp data), not in images/cache.

Fix: Measure volume sizes; fix app paths to use volumes/tmpfs; implement retention at the application/data layer.

6) Symptom: After docker system prune, a service loses data

Root cause: Volumes were pruned (--volumes) or a “named” volume was actually disposable but not backed up; no guardrails.

Fix: Never use --volumes in production without explicit mapping and backups. Use labels, inventories, and backups with restore tests.

7) Symptom: Host is fine, but Docker operations fail with “no space left on device”

Root cause: The Docker filesystem (or storage driver metadata) is constrained, or inodes are exhausted, or /var mount is full though / is not.

Fix: Confirm mounts; check inodes; ensure Docker root is on a sufficiently large filesystem; consider moving Docker data root.

Checklists / step-by-step plan

Emergency checklist (disk > 95%, production impact)

  1. Stop the bleeding: identify runaway writers (logs, temp files). If it’s logs, cap verbosity and rotate/truncate break-glass.
  2. Get headroom: prune stopped containers, then build cache, then unused images with a time filter.
  3. Verify: confirm with df -h that you have real free space again.
  4. Stabilize: add Docker log rotation if missing. Schedule follow-up work.

Production-safe routine cleanup (weekly or daily, depending on churn)

  1. Measure: docker system df and record it (even in a ticket). Trending beats guesswork. (A wrapper sketch covering these steps follows this list.)
  2. Prune: docker system prune --force --filter 'until=168h' (no volumes).
  3. Prune build cache separately if build-heavy: docker builder prune --force --filter 'until=168h'.
  4. Review: list top images and confirm retention policy matches reality.
  5. Verify: check df -h and alert thresholds.
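
The routine above fits in a short script if you want it in cron or a runbook repo. A sketch using the same 168h window as the steps above (capture the output somewhere if you want the trend):

#!/bin/sh
# docker-weekly-cleanup.sh: measure, prune with a time buffer, verify.
set -eu

echo "== before =="
docker system df
df -h /

# Deliberately no --volumes here.
docker system prune --force --filter 'until=168h'
docker builder prune --force --filter 'until=168h'

echo "== after =="
docker system df
df -h /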

“Don’t wake me up again” hardening plan

  1. Enable log rotation at Docker daemon level.
  2. Alert on disk (filesystem usage) and on Docker root directory growth; a minimal threshold sketch follows this list.
  3. Set CI runner policies: periodic reimage + aggressive build cache pruning.
  4. Label and inventory volumes: know which are data and which are disposable.
  5. Make rollback explicit: keep last N deploy images, prune the rest by time.
  6. Capacity plan stateful volumes: if the DB grows, that’s a product decision, not a Docker decision.
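
For item 2, if you don’t yet have real monitoring, even a crude threshold check in cron beats silence. A sketch (the 80% threshold is arbitrary; logger writes to syslog, swap in your alerting of choice):

#!/bin/sh
# disk-alarm.sh: warn when the filesystem holding Docker's data root passes a threshold.
THRESHOLD=80
USE=$(df --output=pcent /var/lib/docker | tail -1 | tr -dc '0-9')
if [ "$USE" -ge "$THRESHOLD" ]; then
  logger -p user.warning "Docker data root filesystem at ${USE}% on $(hostname)"
fi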

FAQ

1) Is docker system prune safe in production?

With guardrails: yes, often. Use --filter until=... and do not add --volumes unless you are very sure.
Default docker system prune removes stopped containers, unused networks, dangling images, and build cache.

2) What’s the difference between dangling and unused images?

“Dangling” images are typically untagged layers left behind by builds. “Unused” means not referenced by any container.
docker image prune targets dangling by default; docker image prune -a targets unused images too.
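
To see the difference on your own host (read-only), list the dangling ones:

cr0x@server:~$ docker images --filter dangling=true

These show up as <none>:<none> and are almost always safe to prune; the -a variant is the one that needs a time filter and some thought.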

3) Why does Docker keep images I haven’t used in weeks?

Because Docker doesn’t know your intent. Images might be used for quick rollbacks or future starts. If you want retention,
you must define retention: time filters, scheduled pruning, or CI policies.

4) Can I delete files directly from /var/lib/docker to save space?

Don’t. You’ll desynchronize Docker’s metadata from reality and create harder-to-diagnose failures. Use Docker commands.
The only exception people take in emergencies is log truncation, and even that should be followed by proper rotation.

5) Why did pruning not free space even though it reported reclaimed GB?

Commonly because deleted files are still held open, especially log files. The filesystem won’t reclaim space until the
last handle closes. Also check that you’re looking at the correct mount and that inodes aren’t the limiting factor.

6) Should I use docker volume prune?

Only when you understand exactly what “unused volume” means in your environment. Volumes not currently referenced by
any container are considered unused—even if they contain your only copy of important data.

7) What’s a good default log rotation for Docker?

For many services: max-size around 50–100MB and max-file 3–10. Tune based on your incident response needs.
If logs are important, ship them out. If logs are spam, fix the spam first.

8) Why does build cache get so big on CI?

CI produces lots of unique layers: different commits, different dependencies, cache mounts, and parallel builds.
Without pruning or reimaging, cache only grows. It does not age out automatically.

9) Is it better to prune regularly or to reimage hosts?

For CI runners: reimaging is excellent because it guarantees a clean baseline. For stable production hosts: regular,
conservative pruning plus monitoring is usually better than frequent reimaging. Do both when it makes sense.

10) How much free space should I keep on a Docker host?

Keep enough headroom for your largest expected image pull/build plus operational spikes (logs, deploy bursts).
Practically: aim for at least 15–25% free on the filesystem hosting Docker’s data root, more on build-heavy nodes.

Next steps you can actually do this week

If you want fewer disk-full surprises, do these in order:

  1. Turn on Docker log rotation in /etc/docker/daemon.json and restart Docker during a maintenance window.
  2. Put docker system df in your monitoring routine: track images, volumes, and build cache growth over time.
  3. Adopt time-filtered pruning on a schedule: start with until=168h and adjust based on deploy cadence.
  4. Make volumes explicit: know which ones are data, who owns them, and how they’re backed up.
  5. Fix the real writers: if your app writes huge temp data to container FS or logs nonstop, cleanup is just a recurring meeting with the same problem.

Disk is cheap. Outages are not. Clean up Docker like you run a production system: measure first, delete second, verify always.
