If you run docker compose up and get told that your Compose version is obsolete, you’ve just been handed a classic ops problem: a warning that feels harmless until it isn’t. Teams ignore it for months, then a laptop upgrade, CI runner refresh, or a new production host turns that warning into broken builds, different networks, or containers that “worked yesterday” and now don’t.
This is the kind of warning that isn’t about syntax. It’s about expectations. You’re not just editing YAML; you’re negotiating behavior between your file, the Compose CLI, the engine, and the “Compose Specification” that replaced the old file-format versions. Let’s modernize without surprises.
What the warning really means (and what it doesn’t)
The message “version is obsolete” shows up most commonly when you’re using Docker Compose V2 (the docker compose subcommand) and your compose.yaml still carries a version: "2" or version: "3.7" header. Compose V2 implements the Compose Specification and no longer needs that header to select a parser/behavior mode. In fact, the header can mislead humans into thinking it pins behavior. It doesn’t.
Here’s the operational reality:
- Compose file “version” used to select a feature set (especially between 2.x and 3.x).
- Compose Spec moved to a capability model: the CLI decides what it supports; the file describes intent.
- Your Compose file might still behave differently across machines, but the difference usually isn't the version line. It's the Compose plugin version, the engine version, and the environment.
So: removing version is usually safe. The unsafe part is assuming everything else is fine. The migration is less “delete a line” and more “verify semantics didn’t drift.”
One dry rule that keeps you out of trouble: treat Compose as a deployment tool, not a programming language. YAML is not an SLA.
Compose V1 vs V2: why your muscle memory lies to you
Compose V1 was the old docker-compose Python-based tool. Compose V2 is a Go-based plugin integrated into the Docker CLI as docker compose. Many flags look the same. Some behaviors are close. The edge cases are where outages live.
Joke #1: YAML is where indentation goes to start religious wars, and Compose just wants you to pick a side consistently.
The one quote (and why it applies)
Paraphrased idea — John Allspaw: reliability comes from how systems behave under stress, not from how confident we feel reading the config.
This warning is a nudge to test behavior, not to win an argument with a linter.
A quick timeline: why Compose stopped caring about “version”
Some historical context makes the warning feel less arbitrary and more like a cleanup of legacy sharp edges. Here are concrete facts that explain the trajectory:
- Compose file 2.x and 3.x were not just “newer”; they targeted different worlds. 3.x aligned with Swarm-era design assumptions (notably around deploy keys).
- The deploy section was introduced to describe orchestration intent, but classic Compose (non-Swarm) ignored most of it, which confused people for years.
- Compose V1 (Python) and Compose V2 (Go plugin) have different implementations, which means different corner cases even when the YAML is identical.
- Docker moved Compose into the core CLI to reduce tool sprawl, but that also meant faster iteration and more consistent UX with other Docker commands.
- The Compose Specification became an open, vendor-neutral spec maintained as a community project, aiming to standardize behavior beyond “whatever the Docker tool does today.”
- Old version headers became a compatibility hack: they were used to gate features rather than describe intent. That’s backwards for a spec.
- Features like profiles arrived later and don’t fit neatly into the old 2/3 version taxonomy; the spec approach is more flexible.
- Many teams used Compose as “poor man’s Kubernetes,” which is fine until you mix environments and assume the same semantics everywhere.
The warning isn’t moral judgment. It’s Compose telling you: “Stop pretending a single string controls the behavior. It doesn’t.”
The safe modernization principles
1) Normalize the toolchain first, not last
Before you edit YAML, capture what you’re running today: Docker Engine version, Compose plugin version, and the exact command line used in CI and on production hosts. If those differ, your Compose file is already “multi-platform” whether you intended that or not.
2) Rendered config is the truth
Compose supports variable interpolation, extension fields, multiple files, profiles, and defaults. Humans read YAML. Compose runs the fully resolved model. Your migration should compare rendered config before and after changes.
3) “Works on my laptop” is often a volume permission story
Modernization triggers rebuilds and container recreation. That’s when file ownership, SELinux labels, and storage drivers remind you they exist. Compose modernization is as much about storage as it is about YAML.
4) Pin what must be pinned
Don’t pin everything (that becomes a museum), but do pin the things that change behavior: image tags (or digests), Compose plugin version in CI runners, and project naming conventions.
5) Make “up” boring
A correct Compose file makes docker compose up -d a low-drama operation. If you rely on implicit defaults (networks, container names, build contexts), your future self will pay interest.
Migration plan: from “version:” to Compose Spec
The migration is basically: remove the version key, ensure the file name and layout fit modern expectations, validate with docker compose config, and confirm behavior via a controlled recreation.
Step 1: Rename and standardize file naming
Modern Compose defaults to compose.yaml. It will also read docker-compose.yml, but standardizing reduces “why did it pick that file?” confusion.
Step 2: Remove the version line (but don’t stop there)
Delete:
version: "2", version: "3", version: "3.8" (etc.)
Keep everything else. Then validate rendered config. Your goal is to prove the resulting model is the same.
Step 3: Replace deprecated patterns with spec-friendly ones
Common examples:
- links: legacy; modern Compose uses default DNS-based service discovery on networks.
- depends_on conditions: older patterns used condition: service_healthy; V2 has partial support depending on versions. Prefer explicit healthchecks plus entrypoint waiting logic for critical dependencies.
- container_name everywhere: looks tidy, but breaks scaling and can cause collisions across projects. Use it sparingly.
- external_links: usually a smell; model networks explicitly instead.
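For the links case specifically, the modern replacement is just shared-network membership plus service-name DNS. A minimal sketch (service and network names here are illustrative, not from any particular stack):

```yaml
services:
  api:
    image: registry.local/app/api:1.9.2
    networks: [backend]
    environment:
      DB_HOST: db        # DNS name == service name on the shared network
  db:
    image: postgres:16
    networks: [backend]
networks:
  backend: {}
```

No links, no aliases: both services attach to the same network, and Compose's embedded DNS resolves the service name.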
Step 4: Audit storage definitions like you mean it
If you run databases in Compose (you probably do), treat volumes as production data structures. Explicitly name volumes, choose bind mounts intentionally, and understand where the bytes live on disk. A Compose modernization that recreates containers can accidentally orphan volumes or recreate them with new names if you change project naming.
Step 5: Prove it with a controlled recreate
Do not “just run it.” Use a dry-run mindset: render config, pull images, and recreate in a staging environment or a copy of the host. Then compare container metadata (networks, mounts, env vars, healthchecks).
Joke #2: “I only changed a warning” is the unofficial motto of incident retrospectives.
Practical tasks (commands, outputs, decisions)
Below are concrete, runnable tasks that you can do on a host or CI runner. Each includes (1) a command, (2) what the output means, and (3) the decision you make from it. This is the stuff that prevents “it should be the same” from becoming a ticket storm.
Task 1: Confirm you’re using Compose V2 (plugin) and not V1
cr0x@server:~$ docker compose version
Docker Compose version v2.27.1
What it means: You’re running the V2 plugin. The “version is obsolete” warning is expected if your file still has a version: key.
Decision: Migrate to Compose Spec style (remove version) and validate with V2 behavior.
Task 2: Spot-check for legacy V1 usage in scripts
cr0x@server:~$ command -v docker-compose || echo "docker-compose not found"
docker-compose not found
What it means: Scripts calling docker-compose will fail here; on other machines they might still succeed. That inconsistency is a migration hazard.
Decision: Standardize on docker compose in automation, or explicitly install/pin V1 where required (rarely justified now).
Task 3: Capture Docker Engine version (behavior varies)
cr0x@server:~$ docker version --format '{{.Server.Version}}'
26.1.4
What it means: Engine features (network driver options, build behavior, iptables integration) vary by version.
Decision: If prod and CI differ materially, either align versions or test on the oldest-supported version.
Task 4: Render the fully resolved Compose config (the baseline)
cr0x@server:~$ docker compose -f docker-compose.yml config
name: app
services:
  api:
    environment:
      NODE_ENV: production
    image: registry.local/app/api:1.9.2
    networks:
      default: null
    ports:
      - mode: ingress
        target: 8080
        published: "8080"
        protocol: tcp
networks:
  default:
    name: app_default
What it means: This is what Compose will actually apply. It includes defaults, interpolated values, and normalized fields.
Decision: Save this output as your “known good” reference before editing.
Task 5: Validate that your file parses without the version key
cr0x@server:~$ cp docker-compose.yml compose.yaml
cr0x@server:~$ sed -i '/^version:/d' compose.yaml
cr0x@server:~$ docker compose -f compose.yaml config >/dev/null && echo "config ok"
config ok
What it means: Syntax is valid under the Compose Spec model.
Decision: Proceed to semantic comparison (next task). Passing parse checks is necessary, not sufficient.
Task 6: Compare rendered configs before vs after (semantic drift detector)
cr0x@server:~$ docker compose -f docker-compose.yml config > /tmp/before.yaml
cr0x@server:~$ docker compose -f compose.yaml config > /tmp/after.yaml
cr0x@server:~$ diff -u /tmp/before.yaml /tmp/after.yaml | head
What it means: No output means no diff in rendered config (ideal). If there is a diff, read it like a change review: environment, mounts, networks, and labels are the big risk areas.
Decision: If diffs exist, either fix the file or document why the change is expected and safe.
Task 7: Check whether profiles are silently changing what starts
cr0x@server:~$ docker compose -f compose.yaml config --profiles
default
debug
What it means: Profiles exist. Running docker compose up without --profile might skip some services. That can look like “Compose broke” when it’s actually “you didn’t activate the profile.”
Decision: In production scripts, be explicit about profiles (or avoid them in prod files).
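If you do keep profiles, make membership explicit in the file so it is obvious which services need activation. A sketch (the debug service is purely illustrative):

```yaml
services:
  api:
    image: registry.local/app/api:1.9.2   # no profiles key: always starts
  debug-shell:
    image: busybox:1.36
    profiles: [debug]                     # starts only with --profile debug
```

A service without a profiles key always starts; a profiled one is skipped unless its profile is activated.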
Task 8: Verify the project name (volume/network naming stability)
cr0x@server:~$ docker compose -f compose.yaml ls
NAME STATUS CONFIG FILES
app running(3) /srv/app/compose.yaml
What it means: Compose uses a “project name” to namespace networks/volumes/containers. Changing directory name, file name, or using -p can change it.
Decision: Pin the project name in automation (use -p app or name: in the config) if stability matters.
Task 9: Inspect volumes to ensure data isn’t about to disappear
cr0x@server:~$ docker volume ls
DRIVER VOLUME NAME
local app_dbdata
local app_redisdata
What it means: These volumes are separate objects, not tied to container lifecycle. They survive container recreation, but they can be replaced if you accidentally rename them (often via project-name changes).
Decision: If the volume names are project-prefixed and you’re about to change project naming, explicitly set volume names to stable identifiers.
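A stable volume name survives project renames because it stops being derived from the project. A sketch:

```yaml
volumes:
  dbdata:
    name: app_dbdata   # fixed name, not prefixed with the project name
```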
Task 10: Confirm what’s mounted where (permission and SELinux tripwires)
cr0x@server:~$ docker inspect app-db-1 --format '{{json .Mounts}}'
[{"Type":"volume","Name":"app_dbdata","Source":"/var/lib/docker/volumes/app_dbdata/_data","Destination":"/var/lib/postgresql/data","Driver":"local","Mode":"z","RW":true,"Propagation":""}]
What it means: The database writes to a Docker-managed volume. Mode z suggests SELinux relabeling (common on Fedora/RHEL-derived hosts).
Decision: If you move to bind mounts, you must handle ownership and SELinux labels explicitly. If you stay with named volumes, keep the names stable.
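If you do switch to a bind mount on an SELinux host, declare the label intent in the mount flags instead of relying on defaults. A sketch (the host path and the UID, borrowed from the common Postgres image convention, are assumptions you must verify):

```yaml
services:
  db:
    image: postgres:16
    user: "999:999"      # must match ownership of the host path (assumption)
    volumes:
      - /srv/app/pgdata:/var/lib/postgresql/data:Z   # private SELinux label
```

Lowercase z shares the label across containers; uppercase Z makes it private to this container.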
Task 11: Detect container recreation risk before applying changes
cr0x@server:~$ docker compose -f compose.yaml up -d --dry-run
[+] Running 1/1
 ✔ DRY-RUN MODE -  Container app-api-1  Recreated
What it means: Compose predicts it will recreate containers. Recreation is where you discover you forgot to persist data or pinned the wrong image tag.
Decision: If critical services would recreate, schedule a window, confirm volume mounts, and ensure you can roll back.
Task 12: Confirm images are pinned in a sane way
cr0x@server:~$ docker compose -f compose.yaml images
CONTAINER REPOSITORY TAG IMAGE ID SIZE
app-api-1 registry.local/app/api 1.9.2 1a2b3c4d5e6f 312MB
What it means: You’re using a specific tag. If you use latest, modernization is a great time for surprise upgrades.
Decision: Pin tags, ideally pin digests for production if your registry workflow supports it.
Task 13: Validate healthchecks actually run and report what you expect
cr0x@server:~$ docker ps --format 'table {{.Names}}\t{{.Status}}' | head -n 5
NAMES STATUS
app-api-1 Up 2 minutes (healthy)
app-db-1 Up 2 minutes (healthy)
What it means: Healthchecks are active and reporting. If services are “Up” but never “healthy,” your dependency orchestration assumptions may be wrong.
Decision: If your startup sequencing depends on health, ensure healthchecks exist and behave consistently across environments.
Task 14: Confirm the effective network and DNS behavior
cr0x@server:~$ docker network inspect app_default --format '{{json .IPAM.Config}}'
[{"Subnet":"172.23.0.0/16","Gateway":"172.23.0.1"}]
What it means: Compose created a default bridge network. If you change project name or declare networks differently, the network name/subnet may change and break hardcoded allowlists.
Decision: If anything external depends on stable subnets/names, declare the network explicitly with a stable name (and avoid hardcoding subnets unless you must).
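Declaring the default network with an explicit name decouples it from project naming. A sketch:

```yaml
networks:
  default:
    name: app_default   # stable even if the project or directory name changes
```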
Task 15: Confirm env var interpolation and missing vars (quiet foot-gun)
cr0x@server:~$ docker compose -f compose.yaml config 2>&1 | grep -i warning | head
WARN[0000] The "API_KEY" variable is not set. Defaulting to a blank string.
What it means: Compose substituted an empty string. Your service might start and then fail in a way that looks unrelated.
Decision: Fail fast: require env vars in CI, source secrets from a secrets manager, and provide an .env with safe defaults only for dev.
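One cheap fail-fast pattern is a pre-flight shell check before ever calling Compose. This is a hypothetical helper, not part of Docker; the function name and variable names are assumptions:

```shell
# Hypothetical pre-flight check: refuse to run compose if required
# variables are unset or empty. POSIX sh compatible.
require_env() {
  missing=0
  for v in "$@"; do
    # Indirect expansion via eval; POSIX sh has no ${!v}.
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing required variable: $v" >&2
      missing=1
    fi
  done
  # Returns non-zero (and prints nothing) if anything was missing.
  [ "$missing" -eq 0 ] && echo "env ok"
}

# Example gate in a deploy script:
#   require_env POSTGRES_PASSWORD API_KEY && docker compose up -d
```

Compose's own `${VAR:?message}` interpolation (shown in the example file later) gives you the same guarantee inside the YAML itself.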
Task 16: Identify orphaned containers after refactors
cr0x@server:~$ docker compose -f compose.yaml up -d
[+] Running 3/3
✔ Container app-api-1 Started
✔ Container app-db-1 Started
✔ Container app-redis-1 Started
cr0x@server:~$ docker compose -f compose.yaml ps --all
NAME IMAGE COMMAND SERVICE STATUS
app-api-1 registry.local/app/api:1.9.2 "node server.js" api running
app-db-1 postgres:16 "docker-entrypoint..." db running
app-redis-1 redis:7 "docker-entrypoint..." redis running
What it means: Clean list. If Compose prints “orphan containers,” you likely renamed services or changed project name and left old containers behind.
Decision: If you see orphans, remove them deliberately (docker compose down --remove-orphans) after confirming they aren’t still receiving traffic.
Fast diagnosis playbook
When Compose modernization goes sideways, you need a short path to the bottleneck. Here’s the order that finds the culprit fastest in real systems.
First: identify which toolchain you’re actually running
- Check docker compose version and docker version.
- Confirm the Compose file(s) used (explicit -f in automation beats “whatever is in the directory”).
- Render config: docker compose config. If you can’t render, nothing else matters.
Second: detect naming drift (project, networks, volumes)
- Check project: docker compose ls.
- List volumes: docker volume ls.
- Inspect network names: docker network ls and docker network inspect.
If names changed, you may have “lost” data simply by creating new volumes under a new namespace.
Third: check storage and permissions before you chase app bugs
- Inspect mounts on the container.
- Check container logs for permission errors.
- On SELinux hosts, look for denied operations after switching mount types.
Fourth: check runtime ordering and health
- Check docker ps health status and restart counts.
- Does the DB become healthy before the API starts?
- If depends_on assumptions were in play, verify whether they still hold.
Fifth: only then do performance tuning
If the system is slow after modernization, measure before optimizing: check filesystem latency, overlay2 behavior, CPU throttling, and DNS resolution inside the network. Compose wasn’t the bottleneck until you proved it.
Common mistakes: symptom → root cause → fix
1) Symptom: “My database is empty after the migration”
Root cause: Project name changed, so the named volume became a different object (e.g., oldproj_dbdata vs newproj_dbdata). Or you switched from named volume to bind mount without migrating data.
Fix: Explicitly name the volume and keep it stable. Verify volume mapping with docker inspect. If you need to migrate, copy data from the old volume path to the new location during a controlled maintenance window.
2) Symptom: “Services can’t reach each other; DNS lookup fails”
Root cause: You removed links and assumed it created connectivity. In modern Compose, connectivity comes from shared networks; DNS names come from service names.
Fix: Put both services on the same user-defined network (or default) and use service-name DNS. If you need cross-project connectivity, declare an external network and attach both projects to it.
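For the cross-project case, pre-create the network once (docker network create edge) and mark it external in each project. A sketch (the network name “edge” is illustrative):

```yaml
services:
  api:
    image: registry.local/app/api:1.9.2
    networks: [edge]
networks:
  edge:
    external: true   # Compose attaches to it but never creates or removes it
```

Because the network is external, docker compose down in either project leaves it, and the other project's containers, untouched.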
3) Symptom: “The warning is gone, but now CI behaves differently than my laptop”
Root cause: CI uses a different Compose plugin version or Docker Engine version. The config is technically valid but semantics differ (build cache, pull behavior, healthcheck timing, DNS quirks).
Fix: Pin toolchain versions in CI runner images. Add docker compose version to build logs. Treat “toolchain drift” as a change requiring review.
4) Symptom: “Containers recreate every time even when nothing changed”
Root cause: Non-determinism in env vars, build args, or generated config. Or you changed labels and Compose thinks the service definition differs.
Fix: Normalize env var sources. Render config and diff it between runs. Avoid injecting timestamps or “random” values into environment.
5) Symptom: “We used depends_on to wait for the DB, now the app crashes at startup”
Root cause: depends_on only controls start order; it does not guarantee readiness. Older tooling gave teams a false sense of safety when combined with conditions or fragile timing.
Fix: Implement real healthchecks and application-level retry/backoff. Consider a small wait-for script only as a guardrail, not as architecture.
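As a guardrail, the wait-for script can be tiny. A minimal sketch in POSIX sh (the function name, defaults, and the pg_isready example are assumptions, not a prescribed tool):

```shell
# Hypothetical guardrail: retry a readiness probe until it succeeds
# or the attempt budget runs out. Not a substitute for app-level retries.
wait_for() {
  cmd="$1"; max="${2:-30}"; delay="${3:-1}"; i=1
  until sh -c "$cmd" >/dev/null 2>&1; do
    if [ "$i" -ge "$max" ]; then
      echo "gave up after $i attempts: $cmd" >&2
      return 1
    fi
    sleep "$delay"; i=$((i + 1))
  done
  echo "ready after $i attempt(s)"
}

# Example entrypoint usage (names illustrative):
#   wait_for "pg_isready -h db -U postgres" 60 && exec node server.js
```

Keep it as a belt-and-suspenders check; the application should still handle a database that disappears after startup.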
6) Symptom: “Ports changed / traffic started hitting the wrong container”
Root cause: Changing project name or service name changed container names and labels used by a reverse proxy or monitoring agent. Sometimes container_name removal exposes the dependency.
Fix: Stop keying routing/monitoring off container names. Use labels intentionally (e.g., for a proxy) and keep them stable across refactors.
7) Symptom: “Permission denied on bind mounts after modernization”
Root cause: You moved from named volumes (Docker-managed permissions) to bind mounts (host permissions) without matching UID/GID or SELinux labels.
Fix: Align container user IDs, set ownership on the host path, and use correct SELinux mount flags where applicable. Validate with container logs and mount inspection.
Three corporate mini-stories from the Compose trenches
Mini-story 1: An incident caused by a wrong assumption
The company had a “simple” Compose stack: API, Postgres, Redis, and a sidecar that shipped logs. The docker-compose.yml started with version: "3.7". The team saw the warning after upgrading Docker Desktop and decided to modernize by removing the version line and renaming the file to compose.yaml. It felt like housekeeping.
They deployed to a small production VM using a copied directory. They didn’t specify -p, and the directory name on the VM differed from staging. Compose happily created a new project name based on the directory, and with it a new set of networks and volumes. Postgres came up fast. It was also empty. The API connected, ran migrations against the fresh database, and started serving. Nobody noticed until a customer called about missing data, because the new environment looked perfectly healthy.
Monitoring didn’t alert. CPU was fine. Latency was fine. The system was “up.” It was just pointed at the wrong bytes. Eventually someone ran docker volume ls and saw two sets of volumes with similar names. The old data was still there, quietly attached to the old project.
The fix was boring: they pinned the project name, explicitly named volumes, and wrote a runbook step to verify volume attachments before any Compose file refactor. They also learned the hard way that “Compose will reuse my data” is not a guarantee; it’s a side effect of stable naming.
Mini-story 2: An optimization that backfired
A different org had a mandate: reduce deployment time. Their Compose stack built three images locally on the host with docker compose build and then started them. Someone proposed a speedup: “Let’s switch to bind mounts for everything so we don’t rebuild as often.” They also removed the version key while touching the file, because the warning was annoying in logs.
In development, it felt great. Edits were instant. In production, it was a slow-motion failure. The containers ran as non-root (good), but the host directories were owned by a different UID/GID (expected, but not handled). The services started, wrote some files, then crashed when they hit directories they couldn’t create. Restart loops followed. The team tried to patch it by running containers as root, which solved the immediate permission errors and created the next problem: now the host paths were littered with root-owned files, breaking future non-root deployments.
Then came the storage performance surprise. The host was on a filesystem tuned for durability, not small-file churn. The new bind-mounted patterns increased metadata operations; latency spiked during peak write periods. The “optimization” saved build time and spent it back tenfold in I/O wait and operational chaos.
They recovered by reverting production to named volumes for stateful paths, keeping bind mounts only for a small set of read-only config and static assets. They also formalized a rule: production changes that touch storage semantics require an explicit data-path review, not just a code review.
Mini-story 3: A boring but correct practice that saved the day
At a finance-adjacent company, a team decided to modernize their Compose files across dozens of repos. They did it like adults: a single standard migration checklist, plus a CI job that rendered Compose config and stored it as an artifact. Every pull request included a diff of docker compose config output before and after. It wasn’t glamorous, but it was visible.
During one migration, the diff showed a subtle change: a network name shifted because someone introduced a name: field at the top-level while also changing the directory name in the repo. The developer intended to improve consistency. The rendered config revealed that existing containers would be recreated onto a new network. The reverse proxy, which discovered targets by network membership, would stop routing to the service during rollout.
Because they caught it in CI, they planned the change: pre-created the target network, updated proxy config, and rolled in a maintenance window. The migration went out with a clean cutover and no surprise traffic blackhole. Nobody celebrated. That’s the point.
They kept the practice afterward: render-and-diff became part of the definition of done for any Compose change. It saved them again later when an environment variable defaulted to blank in one environment; the rendered config artifacts made the missing value obvious.
Checklists / step-by-step plan
Checklist A: Safe modernization in a single repo
- Inventory toolchain: record Engine + Compose plugin versions on dev, CI, and prod.
- Render current config and store as a baseline artifact (docker compose config).
- Remove version: and standardize on compose.yaml.
- Render again and diff outputs; resolve any unexpected changes.
- Pin project naming if volumes/networks must be stable (-p or name:).
- Audit volumes: ensure stateful services use explicitly named volumes or deliberate bind mounts.
- Confirm env vars: no warnings about missing required values; fail CI if there are.
- Dry-run recreate to see what would be replaced and plan downtime if needed.
- Roll to staging with production-like data paths; verify mounts, health, and connectivity.
- Roll to prod with a rollback plan: old file, old project name, and known volume attachments.
Checklist B: Standardization across many repos (the corporate reality)
- Pick a supported Compose plugin version range and standardize CI runners first.
- Create a migration CI job template that runs docker compose config and stores artifacts.
- Introduce policy checks: reject latest tags, detect missing env vars, flag container_name sprawl.
- Define storage rules: which services can use bind mounts, which must use named volumes, how backups work.
- Make project naming explicit where data persistence is involved.
- Train people once: a short internal guide beats tribal knowledge.
Minimal example: modern Compose Spec style (no version)
Not a full tutorial, just a pattern that avoids common traps: stable volume names, explicit networks, and predictable naming.
cr0x@server:~$ cat compose.yaml
name: app
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?set POSTGRES_PASSWORD}
    volumes:
      - dbdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 20
  api:
    image: registry.local/app/api:1.9.2
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@db:5432/app
    ports:
      - "8080:8080"
volumes:
  dbdata:
    name: app_dbdata
The important bits aren’t fashionable. They’re stable.
FAQ
1) Should I remove the version key?
Yes, if you’re using Compose V2. It’s obsolete in the sense that it no longer controls parsing/feature mode. Remove it, then validate via rendered config diffs.
2) Will removing version change behavior?
Often no. Sometimes yes, but usually indirectly: modernization triggers file renames, project name changes, or updates to the Compose plugin. That’s where behavior changes hide. Prove it with docker compose config before and after.
3) Why does Compose still accept version if it’s obsolete?
Backwards compatibility. Compose knows the world has a decade of files in repos. Accepting the key while warning is the compromise between strictness and practicality.
4) What’s the difference between Compose file versions 2 and 3 anyway?
Historically: 2.x focused on single-host Compose behavior; 3.x aligned with Swarm deployment fields. The deploy section in 3.x misled many teams because classic Compose ignored most of it. The Compose Spec tries to unify this under one model.
5) Do I need to rename docker-compose.yml to compose.yaml?
You don’t have to, but standardizing helps. The risk is not the filename; it’s accidental project name drift and humans guessing which file was used. If you rename, pin the project name and verify volumes.
6) How do I make sure volumes don’t get recreated?
Explicitly name critical volumes (e.g., name: app_dbdata) and keep project naming stable. Before deploying, list volumes and inspect mounts to confirm the container attaches to the expected volume.
7) Our stack uses depends_on. Is it reliable?
It’s reliable for start order, not readiness. If your app needs the DB to be ready, implement retries and healthchecks. Treat depends_on as convenience, not a correctness mechanism.
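If you do want Compose to gate startup on readiness, the spec's long-form depends_on can reference a healthcheck, though application-level retries remain the real safety net. A sketch:

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
  api:
    image: registry.local/app/api:1.9.2
    depends_on:
      db:
        condition: service_healthy   # only meaningful if db defines a healthcheck
```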
8) Why does CI warn about missing env vars but production doesn’t?
Because CI and prod load env differently: .env files, shell environment, secrets injection, or CI variables. Render config in both places and compare. Missing values defaulting to blank strings are a classic “it started but it’s broken” failure mode.
9) Can I mix multiple Compose files during migration?
Yes. Use -f layering (base + override). But remember: the rendered model is what matters. Always validate with docker compose -f base -f override config and store that output.
10) Should we pin Docker and Compose versions?
Pin in CI. In production, you can either pin or run a controlled upgrade cadence. What you must not do is “whatever version the VM image shipped with.” That’s not a strategy; it’s a surprise generator.
Next steps you can do today
The “version is obsolete” warning is a cheap alarm bell. Take the free signal, fix the file, and use the moment to reduce future drift.
- Run docker compose config on the current file and save the output.
- Remove version:, re-render config, and diff it. No diff is the goal; explain any diff you keep.
- Pin project naming if you have stateful volumes or external dependencies on network names.
- Audit volumes and mounts like it’s production data (because it is).
- Standardize CI on a known Compose plugin version and log it in every pipeline run.
Warnings are only annoying when you ignore them. If you treat them as a prompt to verify behavior, they’re one of the few gifts production systems give you for free.