Docker: Secrets without leaks — stop putting passwords in .env

The incident report always starts the same way: “Credentials may have been exposed.” It’s never “were” at first. We dance around it,
trying to buy certainty with time. Then someone finds the smoking gun: a .env file baked into an image layer, printed by a
debug statement, or uploaded to a support ticket because “it was easier.”

Containers didn’t create secret sprawl. They just made it faster. If you’re still using environment variables (or worse, .env)
as your primary secret distribution mechanism, you’re not “keeping it simple.” You’re pre-writing your own postmortem.

What goes wrong with .env and environment variables

Let’s be specific. Environment variables are not “inherently insecure.” They’re inherently promiscuous.
They spread. They get copied, logged, and shown in places you didn’t mean to show them. The difference matters.

1) The container runtime treats env as metadata, and metadata loves to travel

A container’s environment ends up in multiple places: in the orchestrator definition, in inspection output, in process listings,
in crash dumps, and in “helpful” debug pages. Some of those places have their own access controls. Many don’t.

2) env variables are easy to exfiltrate once you have any foothold

If an attacker gets code execution inside a container, reading files under a dedicated secret mount might still require effort,
but reading env is trivial: it’s already there, inherited by the process, usually readable by that process, and often printed
accidentally by standard tooling.
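
To make "trivial" concrete: anyone who can exec into the container, or any code already running inside it, can dump PID 1's environment in one line. The api-1 container and the values below mirror the inspection example later in this article; your names will differ.

cr0x@server:~$ docker exec api-1 sh -c 'tr "\0" "\n" < /proc/1/environ' | grep '^DB_'
DB_USER=app
DB_PASSWORD=summer2023
DB_HOST=db

No exploit, no privilege escalation, no tooling beyond what ships in a minimal image.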

3) .env files are a workflow trap

A .env file makes local development pleasant. It also makes “just commit it for the demo” one Slack message away.
Even if your repository is private, secrets don’t stay private. They get forked, mirrored, cached, and cloned onto laptops
that travel through airports and coffee shops.

One of the most common failure modes is not an attacker. It’s an overworked engineer adding printenv to debug,
then forgetting to remove it. That’s how your “private” database password ends up in centralized logs, searchable by anyone
with read access to the logging system.

Short joke #1: If your secret is in .env, it’s not a secret; it’s a mood.

4) Image layers and build caches will betray you

If you pass secrets as build arguments or copy .env into the build context, you can end up with credentials inside
image layers. Even if you delete them later, the layer history may still contain them. You don’t “rm -f” your way out of a
content-addressed registry.
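
The build-time fix is a BuildKit secret mount: the credential is available to a single RUN step and never written to a layer. A minimal sketch, assuming a private npm registry token in ./npm_token.txt and an .npmrc that references ${NPM_TOKEN}; the id, the path, and that wiring are placeholders, not defaults you get for free.

# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json .npmrc ./
# The token is mounted at /run/secrets/npm_token for this RUN step only; it never lands in a layer.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
COPY . .

cr0x@server:~$ docker build --secret id=npm_token,src=./npm_token.txt -t myorg/api:dev .

docker history of the resulting image shows the RUN step, but not the token.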

5) env variables blur the boundary between configuration and secret material

A port number is configuration. A database password is secret material. Treat them differently. The operational problem is that
a KEY=value line looks the same for both, so teams end up giving secrets the same lifecycle as config: checked into the repo, templated
in CI, copied across environments. That’s how dev secrets become prod secrets, and how prod secrets end up on dev laptops.
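
The boundary is easy to enforce once it’s visible in the service definition. A Compose sketch (names are illustrative): configuration stays in environment, secret material is delivered as a file.

services:
  api:
    image: myorg/api:1.8.2
    environment:
      DB_HOST: db          # configuration: fine as an env var
      DB_PORT: "5432"      # configuration: fine as an env var
    secrets:
      - db_password        # secret material: mounted as /run/secrets/db_password

secrets:
  db_password:
    external: true         # managed outside this file; see the options later in this article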

Facts and historical context (the stuff we keep re-learning)

Here are concrete facts and context points that matter when you’re deciding how to manage secrets, and why “but everyone uses env vars”
is not a serious argument.

  1. The “12-factor app” popularized env vars for config to keep builds immutable and deployments portable. It was not
    written as a blanket endorsement for long-lived production secrets.
  2. Docker Swarm introduced built-in secrets (as file mounts) to address the very real problem of env leakage across
    docker inspect and logs.
  3. Kubernetes Secrets were historically just base64-encoded objects by default; encryption at rest required explicit
    configuration. The industry learned (again) that “encoded” is not “encrypted.”
  4. Build systems got better because they had to: BuildKit added secret mounts specifically because passing secrets as
    build args was leaking them into image layers.
  5. Registries are forever: once a secret reaches a registry layer or a cache, deleting the tag doesn’t reliably remove
    the data from all mirrors and caches.
  6. Process environments are observable on many systems. On Linux, a process’s environment can be read via
    /proc/<pid>/environ by sufficiently privileged users. You don’t need to be “root” in the abstract; you need
    the right combination of capabilities and namespace access.
  7. Logging systems changed the blast radius: a single leaked password in container logs now becomes a searchable artifact
    replicated across clusters and retention tiers.
  8. Secret rotation is a reliability feature, not just security theater. Teams that never rotate learn about it during
    an incident, under the worst possible time pressure.

A practical threat model: how secrets leak in containerized systems

Threat modeling doesn’t need a whiteboard and a three-day workshop. You need to list the ways secrets escape, and decide which ones you
can prevent, which ones you can detect, and which ones you can only reduce.

Leak path A: source control and artifact sprawl

  • Where it happens: committed .env, copied sample env with real values, “temporary” branch, internal wiki paste.
  • Why it happens: convenience, confusion between config and secret, lack of scanning gates.
  • Fix type: prevent (gitignore + scanning + policy) and detect (secret scanners, repo monitoring).

Leak path B: CI logs and build metadata

  • Where it happens: pipeline step echoes env, tests print connection strings, build args stored in logs.
  • Why it happens: “debugging,” misconfigured verbosity, naive scripts.
  • Fix type: prevent (masking, no echo, file mounts) and detect (log scrubbing and alerting).

Leak path C: container inspection and orchestration APIs

  • Where it happens: docker inspect, Compose configs, Swarm service specs, Kubernetes pod specs.
  • Why it happens: access to metadata is broader than access to secrets should be.
  • Fix type: prevent (use secret objects) and reduce (tighten RBAC for inspect/describe).

Leak path D: runtime compromise and lateral movement

  • Where it happens: RCE in app, SSRF to metadata endpoints, exposed debug endpoints.
  • Why it happens: apps are apps; they break.
  • Fix type: reduce (least privilege, short-lived creds, separate identity per service) and detect (anomaly detection).

Leak path E: backups and snapshots

  • Where it happens: volumes with embedded secrets, database dumps containing credentials, filesystem snapshots.
  • Why it happens: secrets stored as regular files without lifecycle controls.
  • Fix type: prevent (don’t store secrets in app data paths), reduce (encrypt backups), detect (audit).

Here’s the operational truth: you’re not only protecting secrets from attackers. You’re protecting them from your own systems’ tendency
to copy, cache, and index everything.

One quote, because it applies to secrets as much as to outages: “Hope is not a strategy.” — Vince Lombardi

What to do instead: file-based secrets, Docker secrets, and sane defaults

The goal isn’t purity. The goal is lowering the probability and the blast radius of leaks. The best default in container platforms is:
mount secrets as files, keep them out of image layers, keep them out of logs, and make rotation a normal deploy.

Option 1: Docker Swarm secrets (still useful even if you don’t “use Swarm”)

Docker secrets in Swarm are a first-class mechanism: encrypted at rest in the Swarm raft log, delivered to tasks over mutual TLS,
and mounted into containers as in-memory files (typically under /run/secrets). They’re not visible in
docker inspect the way env vars are.

The big benefit: secrets are data with lifecycle, not strings sprinkled into YAML.
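
You can verify the lifecycle-not-string property directly: the API hands back metadata about a secret, never its value. Against the db_password secret created in the hands-on tasks below:

cr0x@server:~$ docker secret inspect --format '{{ .Spec.Name }}' db_password
db_password

There is no CLI call that prints the secret data back out; the only consumer is a service task that mounts it.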

Option 2: Docker Compose secrets (with caveats)

Compose supports secrets in the spec, but behavior depends on the backend. With the local Docker engine (non-Swarm),
Compose secrets often map to bind mounts, which is better than env vars but still means the secret exists on disk somewhere.
That might be acceptable for development and small deployments if you do it consciously.
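
For the local-engine case, a sketch of what that consciously accepted setup looks like; the host path and names are examples, and the plaintext really does live at that host path, so lock down its permissions and keep it out of the repo and backups.

services:
  api:
    image: myorg/api:1.8.2
    environment:
      DB_PASSWORD_FILE: /run/secrets/db_password   # a path, never the value
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # plaintext on the host: chmod 0400, exclude from backups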

Option 3: External secret managers (best for serious production)

If you have more than one cluster, more than one team, or compliance requirements, you want a dedicated secret manager
(cloud provider secret store, vault-style system, or HSM-backed service). The runtime then fetches short-lived credentials
using an identity, not a shared static password.

This article is Docker-focused, but the principle is universal: identity-based access beats shared static secrets.
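
What “fetch short-lived credentials using an identity” looks like varies by manager. Here is a minimal entrypoint sketch assuming a HashiCorp Vault-style CLI, an already-authenticated workload identity, and a made-up secret path; treat every name in it as a placeholder.

#!/bin/sh
# entrypoint.sh sketch: fetch the credential at startup, write it to a runtime path, keep it out of env.
set -eu
mkdir -p /run/secrets
# Assumes VAULT_ADDR is set and the container's identity is already logged in.
vault kv get -field=password secret/app/db > /run/secrets/db_password
chmod 0400 /run/secrets/db_password
exec "$@"

Because the credential is fetched per start and can be short-lived, rotation stops being a coordinated fleet-wide event.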

Option 4: If you must use env vars, contain the damage

  • Use env vars for non-secret configuration only.
  • If you have to pass a secret via env (legacy apps happen), keep it short-lived and rotate aggressively.
  • Never print the environment in logs.
  • Restrict who can inspect containers and read logs. “Read-only” often isn’t harmless.

Short joke #2: Putting passwords in .env is like labeling your house key “HOUSE KEY” and hiding it under the doormat.

How file-based secrets change your app design (in a good way)

Reading a secret from a file forces you to confront lifecycle. You can swap the file. You can rotate it. You can control permissions.
And your app can reload it without a full rebuild.

A practical pattern (a minimal entrypoint sketch follows the list):

  • Mount secret as /run/secrets/db_password
  • App reads it at startup, optionally re-reads on SIGHUP or on interval
  • Secret rotation becomes: update secret object → restart tasks or trigger reload
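
For apps that still insist on an env var, the *_FILE convention bridges the gap without putting the value in the service spec. A sketch; DB_PASSWORD_FILE is a convention you adopt, not something Docker sets for you, and reading the file directly in the app is still the better end state.

#!/bin/sh
# docker-entrypoint.sh sketch: resolve DB_PASSWORD from a mounted secret file at startup.
set -eu
if [ -n "${DB_PASSWORD_FILE:-}" ]; then
    [ -r "$DB_PASSWORD_FILE" ] || { echo "cannot read $DB_PASSWORD_FILE" >&2; exit 1; }
    DB_PASSWORD="$(cat "$DB_PASSWORD_FILE")"
    export DB_PASSWORD   # visible only to this process tree, not in the service definition or docker inspect
fi
exec "$@"

The value never appears in Compose files, service specs, or inspection output; it exists only in the running process’s environment, which is a smaller (though not zero) exposure surface.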

Hands-on tasks: commands, outputs, and decisions (12+)

This is the part people skip. Don’t. The difference between “we use secrets” and “we actually don’t leak them” is verification.
Each task below includes a command, sample output, what the output means, and the decision you make.

Task 1: Find .env files that shouldn’t exist

cr0x@server:~$ find . -maxdepth 4 -type f \( -name ".env" -o -name "*.env" \)
./.env
./services/api/.env

Meaning: You have environment files in the tree; at least one is at repo root, which is where accidents happen.

Decision: Keep only non-secret sample files (.env.example) in the repo; remove real ones from version control
and rotate anything that ever lived there.
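
If you go the sample-file route, the matching .gitignore entries are short; .env.example is just the common naming convention for the committed template.

# .gitignore
.env
*.env
!.env.example   # placeholder-only template stays tracked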

Task 2: Check whether secrets are tracked in git history

cr0x@server:~$ git log --name-only --pretty=format: | grep -E '(^|/)\.env$' | head
.env
services/api/.env

Meaning: Those files were committed at some point. Even if deleted now, they may be in history and clones.

Decision: Treat all credentials that ever appeared there as compromised; rotate. Then clean history only if you
understand the operational blast radius of rewriting git history.

Task 3: Search for common secret patterns in the repo

cr0x@server:~$ grep -RIn --exclude-dir=.git -E "(PASSWORD=|API_KEY=|SECRET=|TOKEN=|BEGIN PRIVATE KEY)" .
./services/api/.env:3:DB_PASSWORD=summer2023

Meaning: You have at least one literal secret in plaintext. If it’s in a working tree, it’s probably somewhere else too.

Decision: Remove it, rotate it, add scanning gates, and stop treating grep as your security program.
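
Until a real scanner is wired into CI, even a crude guard beats nothing; it catches the lazy cases and fails loudly. A shell sketch with deliberately narrow patterns that you will need to tune for your codebase:

#!/bin/sh
# ci-secret-guard.sh sketch: fail the pipeline if obvious secret material sits in the tree.
set -eu
if grep -RIn --exclude-dir=.git -E 'BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY|AKIA[0-9A-Z]{16}|_PASSWORD=[^$ ]' . ; then
    echo "possible secret committed; refusing to build" >&2
    exit 1
fi

It will throw false positives on fixtures and examples; add an allowlist rather than loosening the patterns.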

Task 4: Inspect a running container for secret leakage via env

cr0x@server:~$ docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"
NAMES          IMAGE                STATUS
api-1          myorg/api:1.8.2      Up 3 hours
db-1           postgres:16          Up 3 hours
cr0x@server:~$ docker inspect api-1 --format '{{json .Config.Env}}'
["NODE_ENV=production","DB_USER=app","DB_PASSWORD=summer2023","DB_HOST=db"]

Meaning: The password is visible to anyone with permission to inspect containers. That permission is often wider than you think.

Decision: Remove secret env vars; replace with secret file mounts; tighten access to the Docker API.

Task 5: Check if the secret is present inside the container as a file (the better pattern)

cr0x@server:~$ docker exec api-1 ls -l /run/secrets
total 4
-r--r----- 1 root root 16 Jan  3 10:12 db_password

Meaning: You have a secret file mounted with restrictive permissions. Good start.

Decision: Ensure your app runs as a user that can read it (group membership), not as root “because it works.”

Task 6: Verify the application is not logging environment variables

cr0x@server:~$ docker logs --tail 200 api-1 | grep -E "(DB_PASSWORD|API_KEY|SECRET|TOKEN)" || echo "no obvious secrets in tail"
no obvious secrets in tail

Meaning: No obvious secret strings appear in the last 200 lines. That’s not proof, but it’s a basic sanity check.

Decision: Keep going: check structured logs, error paths, and startup banners. Then add automated checks in CI.

Task 7: Detect secret exposure in Compose config rendering

cr0x@server:~$ docker compose config | sed -n '1,120p'
services:
  api:
    environment:
      DB_HOST: db
      DB_PASSWORD: summer2023

Meaning: Compose interpolates ${DB_PASSWORD} from your shell or env file, so the rendered config contains the actual value. Anyone who can run this command, or read a CI log where it ran, sees the secret.

Decision: Move secrets out of environment and into secrets: with file mounts.

Task 8: Create and use a Swarm secret (real secret lifecycle)

cr0x@server:~$ docker swarm init
Swarm initialized: current node (r8k3t2...) is now a manager.
cr0x@server:~$ printf "correct-horse-battery-staple\n" | docker secret create db_password -
z1p8kq3m9gq9u5a0l0xw2v3p1

Meaning: The secret now exists in the Swarm control plane. You didn’t write it to disk.

Decision: Use this for production secret distribution if Swarm fits your operating model; otherwise use an external secret manager.

Task 9: Confirm secrets are mounted where you expect (and not in env)

cr0x@server:~$ docker service create -d --name api --secret db_password --env DB_USER=app alpine:3.20 sh -c "env | grep -E 'DB_PASSWORD' || echo 'no DB_PASSWORD in env'; ls -l /run/secrets; sleep 3600"
uo4m1q2h9c8b7n6v5x4z3a2s1
cr0x@server:~$ docker service logs api
api.1.wz0q7lpd2c4k@server    | no DB_PASSWORD in env
api.1.wz0q7lpd2c4k@server    | total 4
api.1.wz0q7lpd2c4k@server    | -r--r----- 1 root root 29 Jan  3 10:20 db_password

Meaning: The secret is available as a file, not as an environment variable. That’s what you want.

Decision: Update the application to read the file and stop expecting an env var.

Task 10: Rotate a Swarm secret (the correct painful part)

cr0x@server:~$ docker secret ls
ID                          NAME          CREATED          UPDATED
z1p8kq3m9gq9u5a0l0xw2v3p1    db_password   5 minutes ago    5 minutes ago
cr0x@server:~$ printf "new-password-value\n" | docker secret create db_password_v2 -
m2c1v0b8n7x6a5s4d3f2g1h0j9
cr0x@server:~$ docker service update --secret-rm db_password --secret-add source=db_password_v2,target=db_password api
api
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

Meaning: You created a new secret and updated the service to use it. Swarm secrets are immutable; rotation is replace-and-redeploy.

Decision: Bake rotation into deploy runbooks. If your system can’t tolerate a task restart, fix that first.

Task 11: Check whether a secret accidentally got into an image layer

cr0x@server:~$ docker history --no-trunc myorg/api:1.8.2 | head -n 8
IMAGE                                                                     CREATED        CREATED BY                                      SIZE      COMMENT
sha256:3b1c...                                                             2 days ago     /bin/sh -c #(nop)  COPY . /app               14.2MB
sha256:a9f0...                                                             2 days ago     /bin/sh -c #(nop)  RUN npm ci                 48.1MB

Meaning: If your .env is in the build context and copied into the image, it might be embedded in that COPY layer.

Decision: Ensure .dockerignore excludes secret files; rebuild and redeploy; rotate anything that might have been copied.

Task 12: Validate .dockerignore blocks obvious secret files

cr0x@server:~$ cat .dockerignore
.env
*.env
**/.env
**/*.env
id_rsa
*.pem

Meaning: You’re explicitly excluding common secret carriers from the build context.

Decision: Keep it. Also validate that your CI doesn’t inject secrets into the build context anyway.

Task 13: Check who can talk to the Docker socket (aka “who can become root”)

cr0x@server:~$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan  3 08:01 /var/run/docker.sock
cr0x@server:~$ getent group docker
docker:x:998:deploy,ci-runner

Meaning: Members of the docker group effectively have root-equivalent control on the host.
If they can inspect containers, they can read env, mount filesystems, and extract secrets.

Decision: Treat Docker socket access as privileged; shrink membership, audit it, and isolate CI runners.

Task 14: Confirm a secret isn’t being passed via Compose --env-file

cr0x@server:~$ ps aux | grep -E "docker compose.*--env-file" | grep -v grep
deploy    21984  0.2  0.1  23844  9152 ?        Ss   10:02   0:00 docker compose --env-file /srv/app/.env up -d

Meaning: Someone is explicitly injecting an env file at runtime. That file likely lives on disk on the host, and may be backed up.

Decision: Replace with secrets mounted from a protected location, and remove host-stored plaintext env files from backup paths.

Task 15: Scan logs for high-risk patterns without dumping everything

cr0x@server:~$ docker logs api-1 2>&1 | grep -E "password=|Authorization: Bearer|BEGIN PRIVATE KEY" | head
Authorization: Bearer eyJhbGciOi...

Meaning: You have bearer tokens in logs. That’s a live credential in many systems, and now it’s a searchable artifact.

Decision: Treat as incident: rotate tokens, scrub/limit logs, add log filters, and fix the code path that logs headers.

Fast diagnosis playbook

When you suspect secret leakage, you don’t have time for philosophy. You need a tight loop: confirm, scope, contain, rotate, and prevent recurrence.
Here’s the order that wins most often.

First: confirm exposure paths (minutes)

  1. Check container metadata: docker inspect for env secrets. If present, assume exposure to anyone with Docker API access (a quick sweep sketch follows this list).
  2. Check logs for cleartext: search for tokens/password patterns. If found, treat logs as compromised data stores.
  3. Check CI output: the last successful pipeline for “helpful” debug steps or masked-secret failures.
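
For step 1, here is a read-only sweep you can paste during an incident. It touches only container metadata; the key patterns are examples to extend, and the sample output assumes the leaky api-1 from earlier.

cr0x@server:~$ for c in $(docker ps --format '{{.Names}}'); do docker inspect "$c" --format '{{range .Config.Env}}{{println .}}{{end}}' | grep -E -i 'password|secret|token|api_key' | sed "s/^/$c: /"; done
api-1: DB_PASSWORD=summer2023

Anything this prints goes straight onto the rotation list.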

Second: scope blast radius (tens of minutes)

  1. Where else did the secret go? image layers, artifact stores, support bundles, wiki pages, chat logs.
  2. Who has access? Docker socket users, orchestrator API readers, log readers, registry readers.
  3. Is it reusable? static passwords are worse than short-lived tokens. But short-lived tokens in logs are still bad.

Third: contain and rotate (hours)

  1. Rotate credentials starting with the most powerful (cloud keys, database admin creds, signing keys).
  2. Redeploy to remove env usage and switch to file-based secrets or external secret retrieval.
  3. Reduce observability surfaces: tighten RBAC, reduce log verbosity, block printenv patterns via linting.

Bottleneck reality: most teams spend the first hour arguing whether it’s “a real leak.” Stop that. Rotate first, debate later.

Common mistakes: symptoms → root cause → fix

Mistake 1: “We use .env but it’s not committed”

Symptoms: secret shows up in a PR diff, or a developer laptop compromise triggers a credential rotation fire drill.

Root cause: secrets stored as plaintext files in working directories get copied, zipped, attached, and backed up.

Fix: keep .env for non-secret local config only; move secrets to OS keychain, external manager, or Compose/Docker secret files.

Mistake 2: “It’s fine, only ops can run docker inspect”

Symptoms: auditors ask who has Docker access and you can’t answer quickly; contractors can read container env.

Root cause: Docker socket access is broader than intended (CI runners, “temporary” access, shared jump boxes).

Fix: treat Docker API as privileged; minimize docker group membership; isolate CI; use secrets not env.

Mistake 3: Passing secrets at build time

Symptoms: password appears in docker history or is recoverable from registry layers.

Root cause: using ARG or copying secret files into build context; deleting later doesn’t erase layers.

Fix: use BuildKit secret mounts for build-only needs; never bake runtime secrets into images; enforce .dockerignore.

Mistake 4: “We mounted a secret file, so we’re done”

Symptoms: secret still ends up in logs, or app crashes and dumps config including secret file contents.

Root cause: the app reads the secret and prints it (directly or indirectly) during error handling, or debug endpoints expose config.

Fix: redact sensitive fields; disable debug endpoints in prod; add tests that fail if secrets appear in logs.

Mistake 5: Long-lived shared passwords across environments

Symptoms: dev leak triggers prod rotation; teams fear rotation because it breaks everything.

Root cause: single credential reused across dev/stage/prod and across services; no identity boundaries.

Fix: unique credentials per environment and per service; prefer short-lived tokens; implement rotation as routine deploy.

Mistake 6: Storing secrets in volumes that get backed up

Symptoms: backup restores reveal old credentials; security asks why secrets exist in snapshots.

Root cause: secrets written into app data directories; backup system captures everything.

Fix: mount secrets in dedicated runtime paths like /run/secrets; exclude secret paths from backups; encrypt backups anyway.

Three corporate mini-stories from the trenches

Mini-story 1: An incident caused by a wrong assumption

A mid-sized SaaS company had a clean separation between “platform” and “application” teams. The platform team owned the Docker hosts and CI.
The application team owned the service code and Compose files. Everyone believed their boundary was safe.

The application team set DB_PASSWORD via environment variables in Compose, sourced from a protected CI variable store. They assumed:
“The CI store is secure, therefore the secret is secure.” They weren’t being reckless. They were being literal.

The platform team added a troubleshooting job for on-call: a script that ran docker inspect and archived output during incidents.
It helped diagnose memory limits, restart loops, and image tags. It also archived every environment variable for every container, including
database passwords and API tokens. The archive went to an internal object store used for incident artifacts.

Weeks later, a contractor was granted read access to incident artifacts so they could help with performance investigations. They didn’t do
anything malicious. They just had access to what was there. A security review found secrets inside those artifacts, and now the company had
a disclosure problem plus a rotation project under deadline.

The wrong assumption wasn’t “contractors are bad.” It was “secrets in env are only visible at runtime.” In practice, env secrets become
metadata, and metadata becomes artifacts. The fix was boring: switch to file-based secrets, redact diagnostic bundles, and treat
“inspection output” as sensitive.

Mini-story 2: An optimization that backfired

Another company ran a fleet of small services. Deployments were frequent, and they wanted faster rollouts. Someone proposed an “optimization”:
build once, deploy everywhere, and avoid restarts by making the app reload config dynamically from environment variables. It sounded clever.

They added a lightweight endpoint for support: /debug/config. It returned the effective configuration to help diagnose misroutes,
feature flags, and upstream endpoints. It was gated behind an internal network, and “only SREs” could reach it. You can already see where this goes.

A routing change accidentally exposed that endpoint through a shared internal proxy used by multiple teams. Not the public internet, but a wide
enough audience. Someone troubleshooting their own service hit the endpoint, saw a JSON blob with credentials, and reported it.

The optimization backfired because env-based “dynamic config” pushed secrets into the same channel as regular config. The debug endpoint wasn’t
supposed to expose secrets, but it didn’t have a clean separation. It just dumped config.

The remediation was painful but straightforward: remove secrets from env, mount them as files, redesign debug output to explicitly exclude
secret material, and introduce per-service identities so accidental exposure doesn’t become full-fleet compromise.

Mini-story 3: A boring but correct practice that saved the day

A financial-ish company (regulated enough to care, not regulated enough to have infinite headcount) adopted a policy early:
“All secrets are files, all files live under a single directory, and that directory is never included in diagnostic bundles.”
It was the kind of rule people complain about until it saves them.

They used Docker Swarm secrets for a subset of workloads and an external secret manager for the rest. Either way, the runtime path was consistent:
/run/secrets/<name>. Applications were required to support reading from that path. It wasn’t optional.

During a messy outage, a senior engineer asked for everything: container inspect output, logs, and filesystem snapshots of a misbehaving node.
The team collected them quickly. Security reviewed the bundle before sharing it with a vendor. The review found no credentials.
Not because people were careful in the moment, but because the system was designed to make “careful” the default.

The boring practice did two things: it reduced the number of places secrets could appear, and it made auditing easier. When you can say,
“secrets live here and only here,” you can scan that boundary, enforce permissions, and exclude it from backups and diagnostics.

Nobody wrote a blog post about that policy internally. It was too dull. It still prevented a secondary incident during a primary incident,
which is the kind of win that doesn’t show up on dashboards but keeps companies alive.

Checklists / step-by-step plan

Step-by-step plan: migrate from .env to file-based secrets (Docker-focused)

  1. Inventory secrets currently in env: list which containers/services have secret-like env variables
    (passwords, tokens, private keys).
  2. Define a standard secret mount path: pick /run/secrets and stick to it.
  3. Update applications: teach apps to read DB_PASSWORD_FILE=/run/secrets/db_password or directly read the file.
    Prefer the “_FILE” convention if you need to keep config env-based without embedding secret values.
  4. Implement secrets in your platform:

    • Swarm: docker secret create + service secret mounts.
    • Compose (non-Swarm): use secrets: if supported, otherwise bind mount from a protected host directory with strict permissions.
    • External manager: fetch at runtime using identity; write to tmpfs; mount into container.
  5. Rotate as you migrate: do not reuse values “just for the cutover.” Assume old env paths are compromised.
  6. Lock down observability surfaces: remove config dump endpoints; redact logs; restrict who can inspect containers.
  7. Add CI guardrails: fail builds if .env or secret patterns are detected in the build context or repo.
  8. Practice rotation: run a rotation drill quarterly. The first time you rotate should not be during an incident.

Checklist: what “good” looks like in production

  • Secrets are not present in docker inspect output.
  • Secrets are not in images (verified by docker history and build context controls).
  • Secrets are not in logs (spot-checked and automatically scanned).
  • Rotation is a routine deployment operation, with runbooks and tested rollback.
  • Access to Docker socket / orchestrator metadata is audited and minimized.
  • Secrets are unique per environment and ideally per service identity.

FAQ

1) Are environment variables ever acceptable for secrets?

Sometimes, for legacy apps, short-lived tokens, or a transitional phase. But treat it as technical debt with a deadline.
If it’s long-lived and high-impact (database admin password, signing key), don’t put it in env.

2) Isn’t a mounted secret file still readable by the app, so what’s the difference?

The difference is exposure surface. Env leaks into metadata, inspection output, and accidental logging patterns more often.
Files can be permissioned, excluded from diagnostics, and rotated by replacement without rewriting configs.

3) Do Docker secrets work without Swarm?

Not in the full Swarm-managed sense. Compose supports a secrets concept, but depending on mode it may become a bind mount.
That can still be better than env vars, but you must understand where the plaintext lives on disk.

4) What about DB_PASSWORD_FILE patterns—are those secure?

They’re a pragmatic compromise: env contains only a file path, not the secret value. The secret still needs a secure source and mount.
This pattern also keeps application config consistent across platforms.

5) How do I prevent secrets from being baked into images?

Exclude secret files with .dockerignore, avoid passing secrets via ARG, and use BuildKit secret mounts for build-time needs.
Then verify with docker history --no-trunc and registry scanning.

6) How should we rotate secrets with minimal downtime?

Use two-phase rotation where possible: introduce a new credential, deploy support for both, switch traffic, then remove the old one.
For Swarm secrets, that typically means creating _v2 and updating the service, then retiring _v1.

7) If someone saw the secret in logs once, do we really need to rotate?

Yes. Logs replicate, persist, and get accessed for reasons unrelated to security. If it appeared once, you can’t guarantee it’s fully gone.
Rotate and fix the logging path.

8) What permissions should secret files have inside containers?

Read-only for the process that needs them, ideally via group-readable with a dedicated group. Avoid world-readable.
Don’t run the whole app as root just to read a file.
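
With Swarm, ownership and mode can be set on the secret mount itself, so the file is readable only by the account the app actually runs as. A sketch using the long --secret syntax; uid/gid 1000 and the image name are examples.

cr0x@server:~$ docker service create --name api --user 1000:1000 --secret source=db_password,target=db_password,uid=1000,gid=1000,mode=0400 myorg/api:1.8.2

The Compose long syntax for service-level secrets accepts the same uid, gid, and mode fields.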

9) How do I convince a team that “private repo” isn’t a secret store?

Ask who has access today, who will have access next quarter, and where clones live. Repos are designed to replicate.
Secret stores are designed to control access and audit it.

10) Do secrets as files solve everything?

No. They reduce common leak paths. You still need least privilege, segmentation, short-lived creds where possible, and logging hygiene.
But it’s a strong default that prevents a lot of self-inflicted wounds.

Conclusion: next steps that actually reduce risk

Stop treating .env as a harmless convenience. In production, it’s a liability magnet: it leaks into repos, logs, artifacts,
and inspection output. And the worst part is how normal it feels—until it doesn’t.

Practical next steps:

  1. Audit running services for secret env vars using docker inspect; remove them.
  2. Switch secrets to file mounts (/run/secrets) via Swarm secrets, Compose secrets, or an external secret manager.
  3. Rotate anything that ever lived in .env, CI logs, or diagnostic bundles.
  4. Lock down who can access the Docker socket and who can read logs; audit both.
  5. Make rotation a routine deploy action, not an emergency ritual.

You don’t need perfect security. You need systems that don’t casually copy your crown jewels into every tool that happens to be nearby.
That’s what “secrets without leaks” means in the real world.
