Docker: Keep Containers from Booting on Updates — Pin Images Responsibly

You wake up, check the dashboards, and your “boring” service is no longer boring. A container restarted after a routine host patch,
pulled a different image than yesterday, and now your login flow is auditioning for a tragedy.

This is the quiet terror of mutable container tags: everything looks pinned—until it isn’t. Let’s fix it properly: keep containers from
changing on updates, without turning your fleet into a museum of unpatchable images.

What you’re actually trying to control

“Keep containers from booting on updates” usually means one of three things, and you should name which one you mean because the fixes differ:

1) Don’t change the bits when something restarts

You accept that containers may restart (kernel updates, daemon restarts, node reboots, orchestrator reschedules), but you want the restarted
container to use the exact same image bytes as before. This is the image pinning problem.

2) Don’t restart containers during host updates

This is maintenance and orchestration: drain nodes, stagger reboots, configure systemd/docker upgrade behavior, and avoid “helpful” automation
that restarts everything on package upgrades. Pinning helps, but doesn’t stop restarts.

3) Don’t deploy new application versions without an explicit change

This is release engineering: decouple build from deploy, require a config change (Git commit / change request) to roll out a new digest, and
make rollbacks boring.

This article focuses on (1) and (3), with enough operational reality for (2) to not blindside you.

Facts and short history: why this keeps happening

Some context helps, because this problem is not user error; it’s a property of how images, tags, and pulling work.

  • Fact 1: Docker image tags are mutable pointers. The registry can move :latest (or :1.2) to a new digest at any time.
  • Fact 2: The only immutable identifier you can actually trust for “same bytes” is the content digest (e.g., @sha256:…).
  • Fact 3: Docker’s original UX made tags feel like versions, which trained a whole industry to treat mutable strings as immutable facts.
  • Fact 4: The v2 Docker Registry API (mid-2010s) formalized manifests and digests, which is why we can pin by digest today without hacks.
  • Fact 5: “Latest” is not a version; it’s a marketing term. Registries never promised semantics for it, and they kept that promise.
  • Fact 6: Multi-arch images complicated “same tag” further: the same tag can map to different manifests per architecture.
  • Fact 7: Pull behavior changed over time across Docker Engine, Compose versions, and orchestrators—so “it didn’t used to do that” can be true.
  • Fact 8: Modern supply-chain features (SBOMs, provenance attestations, signature verification) assume you can name an immutable artifact. Digests are that name.
  • Fact 9: “Reproducible builds” are still rare in container land. Even if your Dockerfile is unchanged, rebuilding often produces a different digest.

If you’re looking for a single villain, it’s not Docker. It’s the combination of mutable pointers plus automation that assumes pointers are stable.
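
To make Facts 1 and 2 concrete, here is the same hypothetical image referenced both ways (the digest is this article’s running example, not a real artifact):

cr0x@server:~$ docker pull myco/api:1.9
cr0x@server:~$ docker pull myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

The first reference can resolve to different bytes tomorrow; the second cannot.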

Failure modes: how containers “update themselves”

Pull-on-recreate: the most common footgun

A container doesn’t silently morph into a new image while running. What happens is more banal: something recreates it (Compose up, Swarm update,
Kubernetes rollout, node reboot reschedule), and the runtime pulls whatever the tag points to at that moment.

If you used image: vendor/app:1.4 and the vendor retagged 1.4 to include a security fix, your next recreate will pull new bytes.
That might be good. It might be catastrophic. It is definitely not controlled by your change management.

Host updates that restart the Docker daemon

Package updates can restart dockerd. Depending on your restart policies, containers come back. If your orchestrator recreates them, tags get re-resolved.
If you’ve pinned by digest and the node still has the image locally, you get the same bytes. If you rely on tags and allow pulls, you get roulette.

“Cleanup” jobs that remove images

Image pruning is healthy until it isn’t. If a node prunes unused images and later needs to recreate a container, it must pull again. That’s when tags mutate under you.

Registry-side retagging and force-push culture

Some orgs treat tags like branches and rewrite them. If your deployment references tags, you signed up for “whatever is on that branch right now,” whether you meant to or not.

Joke #1: Using :latest in production is like naming your only backup file final_final_really_final.zip. It’s a vibe, not a plan.

Orchestrator defaults that encourage pulling

Kubernetes with imagePullPolicy: Always will talk to the registry on every container start, even when the image is already present on the node. Compose has its own pull defaults depending on the command.
The knobs exist, but the defaults are optimized for convenience, not incident postmortems.

Pinning options: tags, digests, and signed references

Option A: Pin by digest (recommended for “don’t change bits”)

Use repo/name@sha256:…. That digest identifies a manifest (often a multi-arch index), which in turn points to image layers. It’s immutable by definition.
If you pull by digest, you get the same bytes tomorrow, next week, and during that 3 a.m. rollback.

The downside is human ergonomics. Digests are long, ugly, and not meaningful in a code review. That’s why responsible pinning uses both:
a human-friendly tag for intent, and a digest for immutability.
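
One way to keep both in one place, sketched on this article’s hypothetical service: the comment carries the human-readable intent, the digest is what the engine resolves.

services:
  api:
    # intent: myco/api 1.9
    image: myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

Code review sees the version you meant; the runtime sees only the immutable reference.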

Option B: Pin by “immutable tags” (acceptable if you enforce immutability)

If your registry and org policy guarantee that tags like 1.2.3 are never rewritten, then pinning by semver tag can be workable.
But understand what you’re trusting: a social contract, not cryptography.

Option C: Pin by digest and verify signatures (best when you can afford it)

Digests solve “same bytes.” They do not solve “bytes you should trust.” That’s where signing and provenance enter: you can require that
a digest is signed by your CI identity and attached to an SBOM/provenance statement.

You don’t need to boil the ocean. Start with digest pinning. Add signature enforcement once the basics stop waking you up at night.
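
When you do get to signature enforcement, here is a minimal sketch with cosign, assuming keyless signing from a CI identity; the identity and issuer values below are placeholders you would replace with your own:

cr0x@server:~$ cosign verify \
    --certificate-identity "https://github.com/myco/api/.github/workflows/release.yml@refs/heads/main" \
    --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
    myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

A non-zero exit means the digest was not signed by the identity you require; wiring that into the deploy gate is the enforcement.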

What “responsibly” means

Responsible pinning isn’t “never update.” It’s “updates happen only when we decide, and we can explain which bits are running.”
You want:

  • Immutable references in production (digests).
  • A promotion workflow that moves tested artifacts forward.
  • Rollback that reuses known-good digests.
  • Registry retention policies that don’t delete what prod uses.

Docker Compose and Swarm: stop surprise pulls

Compose: make recreation deterministic

Compose is frequently used like an orchestrator but behaves like a convenience tool. That’s fine—until you run it under automation on a schedule.
Then you get “why did it pull” surprises.

For Compose, the practical strategy is:

  • Reference images by digest in compose.yaml for production.
  • Use explicit docker compose pull during maintenance windows or CI-driven deploys.
  • Avoid “always pull” behavior unless you truly want it (a deploy sketch follows this list).
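
Put together, a CI-driven maintenance-window deploy might look like this sketch (same stack and flags used elsewhere in this article):

cr0x@server:~$ docker compose pull
cr0x@server:~$ docker compose up -d --pull never

The pull is the only step that talks to the registry, and it happens when you decided it should; the up step can only use what that pull fetched.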

Swarm: understand service updates and digests

Swarm services can track tags, but they also record resolved digests. You still need to be explicit about updates so you control when a new digest is adopted.
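
What “explicit” can look like in Swarm, sketched with this article’s hypothetical service and digest:

cr0x@server:~$ docker service update --image myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a api
cr0x@server:~$ docker service inspect api --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}'
myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

The second command shows the reference Swarm recorded; a new digest is adopted only when you run another explicit update.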

Kubernetes angle: same problem, different knobs

Kubernetes users love to act like Docker tag drama is for smaller shops. Then a Deployment references myapp:stable and a node drains,
and suddenly “stable” means “surprise.”

The controls in Kubernetes are:

  • Pin by digest in the Pod spec: image: repo/myapp@sha256:…
  • Set imagePullPolicy intentionally. For digest references, IfNotPresent is usually fine, because a cached image with that digest is by definition the same bytes.
  • Use admission policies to reject mutable tags in certain namespaces.

If you pin by digest, a reschedule does not change bytes. If you pin by tag, reschedules become deployments.
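
A minimal Pod template fragment putting those knobs together, using this article’s hypothetical image and digest:

spec:
  template:
    spec:
      containers:
        - name: api
          image: myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a
          # A cached image with this digest is by definition the same bytes.
          imagePullPolicy: IfNotPresent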

Registries, retention, and the digest trap

Pinning by digest introduces a new failure mode: your registry might garbage-collect untagged manifests. Many retention systems treat “untagged” as “safe to delete.”
But when you deploy by digest, you may stop referencing the tag, and the registry thinks the artifact is dead.

The responsible pattern is: keep a tag that tracks the promoted artifact (even if production uses the digest), or configure registry retention
to preserve manifests referenced by digests in use. This is less glamorous than signing, but more likely to prevent your next outage.
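
One low-effort way to keep a tag on the promoted artifact, sketched with buildx imagetools; the release tag name here is made up:

cr0x@server:~$ docker buildx imagetools create --tag myco/api:release-current \
    myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

This re-tags on the registry side without moving any layers, so retention policies that keep tagged manifests will keep your production digest alive.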

Also: if you use a multi-arch index digest, deleting one architecture’s manifest can break pulls on that architecture, even if “it works on amd64.”
The registry does not care about your feelings.

Practical tasks: commands, outputs, and decisions (12+)

These are real things you can do on a host today. Each task includes: a command, what the output means, and the decision you make from it.

Task 1: See what image a running container is actually using

cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.ID}}\t{{.Status}}'
NAMES          IMAGE                 ID           STATUS
api            myco/api:1.9          7c2d1a9f01b2   Up 3 days
worker         myco/worker:1.9       1a0b9c3f9e44   Up 3 days

Meaning: You’re seeing the configured reference (often a tag), not necessarily the digest it resolved to.
Decision: If this shows tags, don’t assume immutability. Move to digest inspection next.

Task 2: Inspect the image ID and repo digests for a container

cr0x@server:~$ docker inspect api --format '{{.Image}}'
sha256:5c0c0b9d2f2d8e0e5a11c8b3c9fb4f0d0a4c3cc0f55b9c8e0d1f0f2a3b4c5d6e
cr0x@server:~$ docker image inspect 5c0c0b9d2f2d --format '{{json .RepoDigests}}'
["myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a"]

Meaning: The container’s .Image field is the local image content ID; RepoDigests (read from docker image inspect) shows the registry digest mapping.
Decision: If RepoDigests is present, record it and pin to it in config. If absent, you may be running an image without a registry digest (built locally or pulled without digest metadata).

Task 3: Resolve a tag to a digest without pulling layers (manifest inspect)

cr0x@server:~$ docker buildx imagetools inspect myco/api:1.9
Name:      myco/api:1.9
MediaType: application/vnd.oci.image.index.v1+json
Digest:    sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

Manifests:
  Name:      myco/api:1.9@sha256:4f3c2b1a...
  Platform:  linux/amd64
  Name:      myco/api:1.9@sha256:7a6b5c4d...
  Platform:  linux/arm64

Meaning: The tag currently points to an index digest; per-arch manifests differ.
Decision: Pin to the index digest if you run mixed architectures; pin to the arch-specific digest if you require absolute determinism per platform.

Task 4: Confirm whether Compose will recreate containers (and thus possibly pull)

cr0x@server:~$ docker compose up -d --no-build
[+] Running 2/2
 ✔ Container stack-api-1     Started
 ✔ Container stack-worker-1  Started

Meaning: “Started” suggests existing containers were started; “Recreated” would mean new containers were made.
Decision: If you see “Recreated” unexpectedly, find out why (config drift, env var change, bind mount change). Recreate is where tag drift bites.

Task 5: Force Compose to use locally cached images only (detect tag drift safely)

cr0x@server:~$ docker compose up -d --pull never
[+] Running 2/2
 ✔ Container stack-api-1     Started
 ✔ Container stack-worker-1  Started

Meaning: Compose refused to pull. If it needed a missing image, it would error.
Decision: Use this in production automation to prevent “oh, it pulled something new because the cache was pruned.”

Task 6: Detect whether a tag changed since last time (compare digests)

cr0x@server:~$ docker buildx imagetools inspect myco/api:1.9 --format '{{.Manifest.Digest}}'
sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

Meaning: This is the current digest for the tag.
Decision: Store it in Git (or your deployment system). If it changes without a planned release, treat it as an upstream change event.

Task 7: Pin an image by digest in Compose

cr0x@server:~$ cat compose.yaml
services:
  api:
    image: myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a
    restart: unless-stopped

Meaning: The deployment now names an immutable artifact.
Decision: You’ve traded “convenient updates” for “controlled updates.” That’s the point. Build a promotion pipeline so you’re not stuck forever.

Task 8: Verify what digest is running after deployment

cr0x@server:~$ docker compose ps --format json | jq -r 'if type=="array" then .[] else . end | "\(.Name) \(.Image)"'
stack-api-1 myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

Meaning: Your runtime reference now includes the digest.
Decision: If you still see tags here, your deployment path is rewriting the reference. Fix that before you declare victory.

Task 9: Find recent image pulls and correlate with incidents

cr0x@server:~$ journalctl -u docker --since "24 hours ago" | grep -E "Pulling|Downloaded|Digest"
Jan 03 01:12:44 server dockerd[1123]: Pulling image "myco/api:1.9"
Jan 03 01:12:49 server dockerd[1123]: Downloaded newer image for myco/api:1.9
Jan 03 01:12:49 server dockerd[1123]: Digest: sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

Meaning: The daemon logs show tag pull activity and the resolved digest.
Decision: If an incident started right after a pull, assume image drift until proven otherwise. Capture the digest for postmortem and rollback.

Task 10: See what images are present and whether pruning might hurt you

cr0x@server:~$ docker images --digests --format 'table {{.Repository}}\t{{.Tag}}\t{{.Digest}}\t{{.ID}}\t{{.Size}}' | head
REPOSITORY   TAG   DIGEST                                                                    ID            SIZE
myco/api     1.9   sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a   5c0c0b9d2f2d  312MB
myco/worker  1.9   sha256:1a2b3c4d5e6f...                                                    9aa0bb11cc22  188MB

Meaning: Digests shown here are what your runtime can use if it doesn’t need to pull.
Decision: If your nodes routinely prune, treat that as “we will pull again,” and pin by digest to prevent tag drift.

Task 11: Dry-run a pull to see if the tag would change (without deploying)

cr0x@server:~$ docker pull myco/api:1.9
1.9: Pulling from myco/api
Digest: sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a
Status: Image is up to date for myco/api:1.9
docker.io/myco/api:1.9

Meaning: “Up to date” means the tag currently resolves to what you already have locally.
Decision: If the digest changes here, stop and treat it as a new release. Don’t let a maintenance reboot become an unreviewed deployment.

Task 12: Prove to yourself that tags are mutable (the uncomfortable demo)

cr0x@server:~$ docker image inspect myco/api:1.9 --format '{{index .RepoDigests 0}}'
myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

Meaning: This is what 1.9 maps to right now. It is not a guarantee about next week.
Decision: If you care about deterministic restarts, stop deploying tags to production. Full stop.

Task 13: Identify containers likely to change on next restart (using tags)

cr0x@server:~$ docker ps --format '{{.Names}} {{.Image}}' | grep -v '@sha256:' | head
api myco/api:1.9
nginx nginx:latest

Meaning: Anything without @sha256: is tag-based and therefore mutable.
Decision: Put these on a remediation list. Start with internet-facing services and auth paths. You know, the ones that ruin weekends.

Task 14: Check restart policies that will amplify daemon restarts

cr0x@server:~$ docker inspect api --format 'Name={{.Name}} RestartPolicy={{.HostConfig.RestartPolicy.Name}}'
Name=/api RestartPolicy=unless-stopped

Meaning: Restart policies determine what comes back when the daemon restarts.
Decision: Keep restart policies, but pair them with digest pinning. “Always restart” + “mutable tag” is how you get surprise redeploys.

Task 15: Validate registry retention risk (list local digests in use)

cr0x@server:~$ docker ps -q | xargs -n1 docker inspect --format '{{.Name}} {{.Image}}' | while read name img; do echo "$name $(docker image inspect --format '{{json .RepoDigests}}' "$img")"; done | head -n 3
/api ["myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a"]
/worker ["myco/worker@sha256:1a2b3c4d5e6f..."]
/metrics ["prom/prometheus@sha256:7e6d5c4b3a2f..."]

Meaning: This is the real set of artifacts production depends on.
Decision: Ensure your registry keeps these digests. If your retention policy deletes “untagged,” make sure these digests remain tagged or exempted.

Task 16: Catch “helpful” automation like Watchtower (or equivalents)

cr0x@server:~$ docker ps --format '{{.Names}} {{.Image}}' | grep -i watchtower
watchtower containrrr/watchtower:1.7.1

Meaning: An auto-updater is running on the host.
Decision: Either remove it from production, scope it tightly, or make it digest-aware with an approval gate. Uncontrolled updates are the opposite of SRE.

Fast diagnosis playbook

When you suspect “containers changed after updates,” don’t start by arguing about tags in a chat thread. Verify facts quickly and isolate the cause:
is this image drift, config drift, or resource pressure?

First: did the running digest change?

  • Check the running container’s image ID with docker inspect, then its repo digest with docker image inspect (one-liner below).
  • Compare with your last-known-good digest (from Git, change logs, or incident notes).
  • If digests differ, treat it as a deployment. Start rollback or forward-fix.
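
A quick way to make that comparison, assuming you keep last-known-good digests in a file or Git (the deploy/digests.txt path is hypothetical):

cr0x@server:~$ docker image inspect "$(docker inspect --format '{{.Image}}' api)" --format '{{index .RepoDigests 0}}'
myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a
cr0x@server:~$ grep '^myco/api@' deploy/digests.txt
myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a

If the two lines differ, you are not looking at a mystery; you are looking at an unplanned deployment.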

Second: what triggered the restart/recreate?

  • Check journalctl -u docker for daemon restarts and pulls.
  • Check Compose/Swarm/Kubernetes events for reschedules and recreates.
  • Look for package upgrade logs that restarted docker/containerd.

Third: if the digest didn’t change, what else changed?

  • Config drift: env vars, mounted config files, secrets, DNS, certificates.
  • Kernel/host changes: cgroup behavior, iptables/nft, MTU, storage driver changes.
  • Resource constraints: disk full, inode exhaustion, CPU throttling.

Fourth: decide whether to freeze or move forward

  • If you’re running tags: freeze by switching to digest, then investigate upstream tag movement safely.
  • If you’re already pinned: your incident is probably not “mystery update.” Spend time where it matters.

Common mistakes: symptom → root cause → fix

1) “Nothing changed” but behavior changed after a reboot

Symptom: After node reboot, containers behave differently; version banners don’t match expectations.

Root cause: Containers recreated and pulled a different digest because the tag moved, or the local image was pruned.

Fix: Pin by digest in production configs. Use docker compose up --pull never in automation. Stop pruning without understanding redeploy consequences.

2) Rollback didn’t roll back

Symptom: You “rolled back” to :stable but the bug remains, or changes shape.

Root cause: You rolled back to a tag that already advanced; tags aren’t snapshots.

Fix: Roll back to a recorded digest. Store digests per release in Git or your deploy tool.

3) Only ARM nodes are broken

Symptom: amd64 is fine; arm64 nodes crashloop after reschedule.

Root cause: Multi-arch index changed or one arch manifest got deleted by retention/GC.

Fix: Pin to the multi-arch index digest and protect it from GC; verify each architecture manifest exists before promotion.

4) “We pinned” but still got drift

Symptom: Compose file shows a digest, yet nodes pull something else.

Root cause: A wrapper script or templating system rewrote the image reference to a tag at runtime, or a different Compose file got used.

Fix: Print the effective config in CI/CD, and assert @sha256: presence. Treat “rendered config” as the deploy artifact.
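
A hedged sketch of that assertion for Compose, using docker compose config to render the effective file:

cr0x@server:~$ docker compose config | grep -E '^ *image:'
    image: myco/api@sha256:9d3a0c2e4d8b2b8a9f3f2c1d0e9b8a7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a
cr0x@server:~$ ! docker compose config | grep -E '^ *image:' | grep -v '@sha256:'

The second command exits non-zero if any rendered image reference lacks a digest, which is exactly what you want a CI step to fail on.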

5) Production can’t pull the pinned digest anymore

Symptom: New node fails with “manifest unknown” for a digest you deployed last month.

Root cause: Registry retention deleted the manifest because it was untagged, or a repo was cleaned aggressively.

Fix: Keep “release tags” pointing at promoted digests, or configure retention exemptions for production-used digests. Audit retention regularly.

6) Security team hates pinning

Symptom: “You pinned images, so you’ll never patch.”

Root cause: Pinning was adopted as a freeze, not as a controlled promotion workflow.

Fix: Pin in production, but update through CI-driven promotion. Add scheduled rebuilds, scanning gates, and explicit rollouts.

Three corporate mini-stories from the trenches

Mini-story 1: An incident caused by a wrong assumption

A mid-sized B2B SaaS company ran Docker Compose on a few beefy VMs. They used myco/api:2.3 everywhere and assumed “2.3 means 2.3.”
The vendor team that owned the image had a different assumption: they treated 2.3 like a minor line and regularly rebuilt it with patched base images.

One Tuesday, the infrastructure team rolled out host security updates. Docker restarted. Containers restarted. The API came back up—mostly.
The symptom was weird: a subset of clients got 401s, then retries worked, then failed again. It looked like rate limiting or a token cache issue.

The on-call did the usual: check app logs, check Redis, check the load balancer. The graphs suggested elevated latency, not outright failures.
Meanwhile, clients escalated because “intermittent auth issues” is the kind of bug that melts trust.

The breakthrough was embarrassingly simple. Someone compared the running digest on one host to another and found they were different.
Half the fleet had pulled a newly rebuilt 2.3 image; half still had the previous digest cached and hadn’t needed to pull.
The cluster was now a mismatched canary test nobody asked for.

Fixing it took less time than explaining it: they pinned production to a digest, rolled the whole fleet to one known artifact, and stabilized.
The longer-term fix was cultural: “tag equals version” became a forbidden assumption unless the registry enforced immutability.

Mini-story 2: An optimization that backfired

Another company had a sensible goal: reduce disk usage on nodes. Their containers produced lots of image churn, and the storage team complained
about bloated root volumes and slow backups. So they added aggressive nightly cleanup: docker system prune -af on every node.

At first, it worked. Disk alerts went away. Everyone high-fived and went back to ignoring capacity planning. Then a quiet dependency got upgraded:
a node rebooted after a kernel patch, and a few services recreated. Those services were tag-based, and now all images had to be pulled fresh.

The registry was slightly rate-limited, and the network path to it was not as fat as people believed. Rolling restarts became slow restarts.
More importantly, one upstream tag had moved to a digest that included a library change with different TLS defaults. The service didn’t crash; it just failed to talk to an upstream that used ancient ciphers.

The incident wasn’t “disk cleanup broke prod.” The incident was “disk cleanup turned every restart into a redeploy.”
The fix was to separate concerns: keep cleanup, but pin images by digest and keep a small cache of known production digests on each node.
They also stopped pretending that registries are infinitely available at infinite speed.

Joke #2: docker system prune -af in cron is the operational equivalent of shaving with a chainsaw. It’s fast—right up until it isn’t.

Mini-story 3: A boring but correct practice that saved the day

A regulated enterprise ran hundreds of containers, and their change control was… let’s call it “enthusiastic.” Engineers grumbled about the paperwork,
but the release process had one underrated feature: every deployment artifact included the resolved image digests, stored alongside the config.

One morning, a critical service started throwing segmentation faults after a node failure triggered rescheduling. Everyone suspected
“bad hardware” or “kernel regression.” The crash signature was in a native dependency, so the finger-pointing began.

The SRE on-call did something deeply unsexy: compared the digests in the deployment record to the digests running on the new node.
They didn’t match. The node had pulled a tag that had advanced overnight because a build pipeline re-published “stable” after tests.

Because the org had the digest recorded, rollback was immediate and precise: redeploy the previous digest. No guessing. No “try last week’s tag.”
Service recovered quickly, and the incident was downgraded from “infrastructure panic” to “release control bug.”

The postmortem outcome wasn’t glamorous either: enforce digest pinning, stop using “stable” for production, and require approvals for retagging.
Boring practices don’t get conference talks, but they do get you back to sleep.

Checklists / step-by-step plan

Step-by-step: move a Compose deployment from tags to digests (without chaos)

  1. Inventory what’s running. Gather container names, current images, and current repo digests.
  2. Resolve tags to digests. For each tag used in production, record the digest it currently maps to.
  3. Update Compose files. Replace repo:tag with repo@sha256:….
  4. Deploy with pulls disabled. Use docker compose up -d --pull never to avoid accidentally adopting a moved tag.
  5. Verify the running reference. Confirm @sha256: appears in docker compose ps and docker inspect.
  6. Record digests in Git/change logs. Make it easy to answer “what’s running?” without SSHing into prod.
  7. Fix your retention policy. Ensure promoted digests don’t get garbage collected.
  8. Build a promotion workflow. CI builds once, tags immutably, and “promotes” by moving a release tag or updating config with a digest (a sketch follows).
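
A minimal shape for step 8, assuming CI resolves the tested tag to a digest and commits the change; the file name is the one used in the Compose examples above:

cr0x@server:~$ DIGEST=$(docker buildx imagetools inspect myco/api:1.9 --format '{{.Manifest.Digest}}')
cr0x@server:~$ sed -i "s|image: myco/api@sha256:.*|image: myco/api@${DIGEST}|" compose.yaml
cr0x@server:~$ git diff --stat compose.yaml
 compose.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

The commit (and its review) is the deployment decision; the rollout itself just applies whatever digest the file names.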

Production checklist: before you patch hosts

  • Confirm production uses digests, not tags.
  • Ensure nodes have enough disk so you’re not forced into emergency prune.
  • Confirm registry reachability and credentials from the nodes.
  • Disable auto-updaters on production nodes (or restrict them to non-critical stacks).
  • Plan for daemon restart: know which containers will restart and in what order.

Release checklist: before you move a new image into production

  • Promote a specific digest that passed tests; don’t promote a tag that might move.
  • Verify the digest exists for required architectures.
  • Scan the image and record SBOM/provenance if your org requires it.
  • Keep an explicit rollback digest ready (and tested).

Operating principle worth printing out

A paraphrased idea from Gene Kim: making changes smaller and more controlled reduces risk and speeds recovery.
Pinning by digest makes restarts boring; controlled promotion makes updates deliberate.

FAQ

1) If containers don’t change while running, why did my app change “without deployment”?

Because something recreated the container (daemon restart, node reboot, orchestrator reschedule), and your config referenced a mutable tag.
That recreate implicitly performed a deployment.

2) Is pinning by digest enough to be secure?

It’s enough to be deterministic. Security is separate: you still need timely rebuilds, scanning, and a controlled promotion path.
Digests help security by making it clear exactly which artifact you scanned and approved.

3) Can I keep using tags in dev?

Yes. Dev is where convenience matters. Use tags like :latest if it speeds iteration. But make the promotion boundary strict:
production should take digests (or enforced-immutable tags) only.

4) What’s wrong with using :1.2.3 tags?

Nothing—if your registry and org policy enforce immutability, and you audit it. If not enforced, you’re trusting that nobody ever retags,
force-pushes, or “rebuilds 1.2.3 with a quick fix.”

5) Will pinning by digest break automated security patching of base images?

It changes it from “silent” to “explicit.” You should rebuild and promote new digests on a schedule. That’s healthier than waking up to a surprise.

6) How do I roll back safely?

Roll back by deploying the previous known-good digest, not by “rolling back” to a tag name. Store digests per release so rollback is a single config change.

7) Does Compose always pull on up?

Not always. Behavior depends on the command and flags. If you want to prevent pulling in production, use --pull never.
If you want to refresh deliberately, run docker compose pull as a separate step.

8) What about Kubernetes—should I always use digests there too?

For production, yes, unless you have strict tag immutability and admission control. Digest pinning prevents “reschedule equals redeploy.”
If you do use tags, treat every node event as a potential rollout.

9) Why did my pinned digest fail to pull on a new node months later?

Registry retention or garbage collection likely deleted the manifest (often because it was untagged). Fix retention policies and keep release tags
pointing at promoted digests.

10) Do I pin the index digest or the arch-specific digest?

If you run mixed architectures, pin the index digest so each node gets the correct platform manifest.
If you run a single architecture and want maximum determinism, pin the platform-specific digest.

Conclusion: next steps you can do this week

If you remember one thing: tags are convenient names, not immutable versions. If your production config references tags, your next host update can become a deployment.
That’s not “DevOps.” That’s gambling with better branding.

Practical next steps:

  • Inventory production containers that still run tag-based images (no @sha256:).
  • Pick one critical service and pin it by digest in its deployment config.
  • Update your automation to deploy with pulling disabled by default (--pull never) and to promote new digests explicitly.
  • Audit registry retention so pinned digests remain pullable for months, not days.
  • Record digests per release so rollback is a change, not a scavenger hunt.

You don’t need perfect supply-chain tooling to stop surprise updates. You need determinism, discipline, and a refusal to treat mutable pointers like contracts.
