The fastest way to ruin your afternoon is to type docker compose up -d while your shell is pointed at the wrong daemon.
You think you’re restarting a dev stack; production quietly agrees and does exactly what you asked.
Docker contexts exist to stop that. They give you named, inspectable endpoints with explicit switching, plus enough metadata to make
“where am I deploying?” a question you can answer before anything catches fire.
Why contexts exist (and why DOCKER_HOST is a foot-gun)
If you’ve been around Docker long enough, you’ve probably used DOCKER_HOST or sprinkled
docker -H tcp://... across scripts like seasoning. It works. It also creates state you can’t see, can’t audit,
and can’t easily version-control. Your terminal becomes a liability.
Docker contexts fix this by making “target daemon selection” a first-class concept: you name it, you can list it, you can inspect it,
you can export/import it, and you can force your tooling to use it. That last part matters in incident response:
you want fewer implicit globals, not more.
Contexts also unify more than one endpoint type. In practice you’ll see:
- Local (default) to your workstation’s Docker Desktop or Linux daemon socket.
- SSH to a remote host without opening Docker’s TCP API to the internet.
- TCP/TLS for legacy setups or controlled networks (be careful, more on that later).
- Orchestrator metadata (historically for Swarm/Kubernetes integrations; today, mostly for endpoint bookkeeping).
A context is not “just convenience.” It’s a safety rail. It’s also a tool for accountability:
you can build workflows where “prod” is a thing you must explicitly select, not an accident you stumble into.
Joke #1: The only thing more permanent than a temporary workaround is a Docker CLI pointed at production.
Interesting facts and quick history
- Docker’s remote API predates contexts: early remote control was mostly DOCKER_HOST and the TCP API, which made “oops” deployments common.
- Contexts became mainstream with Docker CLI v19.03-era workflows, when teams started treating multiple environments as routine, not exceptional.
- SSH contexts piggyback on the OpenSSH client: you get agent forwarding behavior, known_hosts checking, and your existing SSH config for free (and its sharp edges too).
- Docker Desktop uses a context-like abstraction internally to route commands to its VM-backed daemon; contexts make that concept explicit for you.
- Swarm was an early driver for “multiple endpoints, one CLI,” even if many shops moved to Kubernetes or managed container platforms later.
- The daemon is root-like power: a user who can talk to the Docker socket can typically become root on that host. Contexts don’t change that; they just help you point that power deliberately.
- Contexts are stored client-side (in your Docker config directory), which is why exporting/importing matters for laptops, ephemeral CI runners, and break-glass jump boxes.
- Compose respects contexts: docker compose talks to whatever context the CLI is configured for, so “wrong context” affects your whole workflow.
Mental model: what a context really is
Think of a Docker context as a named tuple:
(endpoint, auth, TLS/SSH plumbing, metadata).
It is not the engine. It’s not a cluster. It’s not an environment variable you forget about.
It is a file-backed configuration object that the CLI consults before it does anything.
What’s inside
When you inspect a context, you’ll typically see:
- Endpoints: usually docker, with a Host such as unix:///var/run/docker.sock or ssh://user@host.
- TLS material (if using TCP/TLS): certificates and verification flags.
- Orchestrator: often swarm or kubernetes fields, depending on era; in many modern setups it’s present but unused.
- Description: which you should actually fill in, because humans need guardrails too.
Precedence: who wins?
This matters because production outages love precedence bugs. Roughly:
- docker --context X ... wins for that invocation.
- DOCKER_CONTEXT can override the “current” context.
- docker context use X sets the current context for future commands.
- DOCKER_HOST can bypass contexts entirely for some tools or scripts, depending on how they’re written.
Opinionated take: ban DOCKER_HOST in interactive shells. Allow it only in tightly controlled automation,
and even there, prefer --context for clarity.
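One way to enforce that ban is a tiny guard sourced from your shell profile or called at the top of runbook scripts. A sketch; the function name docker_env_guard is illustrative, not a Docker feature:

```shell
# Sketch of a profile guard: refuse to proceed when DOCKER_HOST is set,
# since it silently bypasses context selection.
docker_env_guard() {
  if [ -n "${DOCKER_HOST:-}" ]; then
    echo "DOCKER_HOST='${DOCKER_HOST}' is set; it can override your context. Unset it." >&2
    return 1
  fi
}
```

Run it before anything destructive: `docker_env_guard && docker --context prod ps`.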
One reliable ops quote
“Hope is not a strategy.” — Gene Kranz (paraphrased idea)
Contexts are how you stop “hoping” you’re talking to the right daemon.
Core commands that matter in production
You don’t need to memorize 50 subcommands. You need a handful you can run under pressure,
and you need to understand what the output implies.
Golden rule
Before any destructive command (rm, down, system prune, volume rm, “let’s just redeploy”),
run a context check. Make it a tic.
Names you should adopt
- dev, staging, prod (boring, good)
- prod-eu-west, prod-us-east if you actually operate in multiple regions
- buildkit or builder if you use remote builders
Avoid clever names. Your future self in a pager rotation is not impressed.
Practical tasks: 12+ things you will actually do
Below are real operator tasks with commands, example output, what it means, and the decision you make from it.
Treat these like your muscle-memory drills.
Task 1: List contexts and identify the current one
cr0x@server:~$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
staging Staging host via SSH ssh://deploy@stg-01
prod Production host via SSH ssh://deploy@prod-01
What the output means: the asterisk marks the active context. The “DOCKER ENDPOINT” tells you how the CLI will reach the daemon.
Decision: if you’re about to touch production, you should see prod * and nothing else. If you don’t, stop.
Task 2: Inspect a context before using it
cr0x@server:~$ docker context inspect prod
[
{
"Name": "prod",
"Metadata": {},
"Endpoints": {
"docker": {
"Host": "ssh://deploy@prod-01",
"SkipTLSVerify": false
}
},
"TLSMaterial": {},
"Storage": {
"MetadataPath": "/home/cr0x/.docker/contexts/meta/7c1a3c2b6f",
"TLSPath": "/home/cr0x/.docker/contexts/tls/7c1a3c2b6f"
}
}
]
What the output means: this is an SSH context; TLS material is empty because SSH provides the transport security.
Decision: confirm the user/host pair is correct and matches your SSH config expectations.
If it says root@prod-01, you’re holding a loaded weapon.
Task 3: Switch contexts explicitly
cr0x@server:~$ docker context use staging
staging
cr0x@server:~$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default Current DOCKER_HOST based configuration unix:///var/run/docker.sock
staging * Staging host via SSH ssh://deploy@stg-01
prod Production host via SSH ssh://deploy@prod-01
What the output means: your shell is now pointed at staging for all future Docker commands.
Decision: in shared terminals (tmux, screen, jump hosts), announce your context in the prompt or status line. Silence is how mistakes breed.
Task 4: Use a one-off context without changing your session
cr0x@server:~$ docker --context prod ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a12b3c4d5e6f nginx:1.25 "/docker-entrypoint.…" 2 days ago Up 2 days 0.0.0.0:80->80/tcp web
What the output means: only this command targeted prod; your current context remains unchanged.
Decision: prefer this pattern in automation and in copy/paste runbooks.
It’s harder to “forget where you are.”
Task 5: Create a context via SSH (the sane default)
cr0x@server:~$ docker context create prod --docker "host=ssh://deploy@prod-01"
prod
cr0x@server:~$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
prod Production host via SSH ssh://deploy@prod-01
What the output means: you’ve defined a remote endpoint without exposing the Docker TCP API.
Decision: use SSH contexts unless you have a strong reason not to. “Because it worked in 2017” is not a strong reason.
Task 6: Validate you’re talking to the right engine (cheap identity check)
cr0x@server:~$ docker --context prod info --format 'Name={{.Name}} Server={{.ServerVersion}} RootDir={{.DockerRootDir}}'
Name=prod-01 Server=25.0.3 RootDir=/var/lib/docker
What the output means: you’ve pulled three identity signals: daemon name, version, and root directory.
Decision: bake this into your preflight checks. If the name is wrong, you’re on the wrong host. If the version is unexpected, plan for behavior changes.
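That preflight can be a small helper you paste into runbook scripts. A sketch, assuming you know the expected daemon name up front; the function name and calling convention are made up here:

```shell
# Sketch: refuse to continue unless the daemon behind a context reports
# the hostname we expect. Helper name and arguments are illustrative.
preflight() {
  ctx="$1"
  expected="$2"
  actual="$(docker --context "$ctx" info --format '{{.Name}}')" || return 1
  if [ "$actual" != "$expected" ]; then
    echo "refusing: context '$ctx' reports '$actual', expected '$expected'" >&2
    return 1
  fi
  echo "preflight ok: $ctx -> $actual"
}
```

Usage: `preflight prod prod-01 && docker --context prod compose up -d`.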
Task 7: Detect “context works but SSH is lying to you” (host key / config mismatch)
cr0x@server:~$ docker --context prod ps
error during connect: Get "http://docker.example/v1.45/containers/json": command [ssh -o ConnectTimeout=30 -T -l deploy prod-01 docker system dial-stdio] exit status 255
What the output means: the CLI tried to run docker system dial-stdio over SSH and failed. This is transport, not Docker.
Decision: run a plain SSH command next to isolate: if SSH fails, fix SSH (keys, bastion, host key).
If SSH works but Docker fails, fix daemon permissions.
Task 8: Prove basic SSH connectivity (without Docker in the middle)
cr0x@server:~$ ssh -o BatchMode=yes deploy@prod-01 'hostname; id; docker version --format "{{.Server.Version}}"'
prod-01
uid=1001(deploy) gid=1001(deploy) groups=1001(deploy),998(docker)
25.0.3
What the output means: the user can run docker and is in the docker group. That’s effectively root power on this box.
Decision: treat membership in docker group like sudo. Grant it deliberately, audit it, and remove it when not needed.
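Auditing that membership is cheap on most Linux hosts. A sketch using getent; the helper name is illustrative:

```shell
# Sketch: print who is in the docker group so the grant list can be reviewed.
# Assumes a glibc-style system where getent resolves groups via NSS.
audit_docker_group() {
  line="$(getent group docker)" || { echo "no docker group on this host"; return 0; }
  members="${line##*:}"   # strip everything up to the last colon (member field)
  echo "docker group members: ${members:-none}"
}
```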
Task 9: Export a context for a break-glass laptop or CI runner
cr0x@server:~$ docker context export prod -o prod.dockercontext
cr0x@server:~$ ls -l prod.dockercontext
-rw------- 1 cr0x cr0x 2840 Jan 3 10:12 prod.dockercontext
What the output means: you produced a portable context bundle.
Decision: store it like a secret. If it includes TLS material, it’s literally access. Even with SSH-only endpoints, it’s still operational metadata you don’t want floating around.
Task 10: Import a context on another machine
cr0x@server:~$ docker context import prod -i prod.dockercontext
prod
cr0x@server:~$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
prod Production host via SSH ssh://deploy@prod-01
What the output means: your CLI now knows how to reach production the same way as the source machine did.
Decision: for CI, prefer generating contexts at runtime (with ephemeral credentials) rather than importing long-lived ones.
But for incident response, importing a known-good context is gold.
Task 11: Remove a context you no longer trust
cr0x@server:~$ docker context rm staging
staging
cr0x@server:~$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
prod Production host via SSH ssh://deploy@prod-01
What the output means: context metadata is gone from the client.
Decision: remove contexts when hosts are decommissioned, moved, or rebuilt. Stale contexts are how you end up deploying into a recycled hostname.
Task 12: Compare two environments quickly (images, containers, disk)
cr0x@server:~$ docker --context staging system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 18 6 5.2GB 2.7GB (51%)
Containers 9 6 210MB 0B (0%)
Local Volumes 14 9 22.4GB 3.1GB (13%)
Build Cache 0 0 0B 0B
What the output means: you get a quick sense of storage pressure and whether cleanup is worth it.
Decision: if volumes dominate, don’t knee-jerk system prune. Your data is probably in volumes. Plan targeted cleanup.
Task 13: Run the same check on prod without switching contexts
cr0x@server:~$ docker --context prod system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 42 18 14.8GB 1.1GB (7%)
Containers 31 27 1.8GB 0B (0%)
Local Volumes 66 55 640.2GB 12.4GB (1%)
Build Cache 0 0 0B 0B
What the output means: production storage is volume-heavy and barely reclaimable. That’s normal for stateful services, and also a reminder that disk is a capacity plan, not a cleanup button.
Decision: if disk alerts are firing, pruning won’t save you. You need volume lifecycle discipline, retention policies, or larger disks.
Task 14: Verify a Compose deployment targets the intended host
cr0x@server:~$ docker --context prod compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
app-web-1 registry/app:web "/entrypoint.sh" web 3 hours ago Up 3 hours 0.0.0.0:443->443/tcp
app-db-1 postgres:16 "docker-entrypoint.s…" db 3 hours ago Up 3 hours 5432/tcp
What the output means: Compose is talking to prod. The running services are listed from that daemon.
Decision: when you’re unsure where Compose is pointing, don’t guess—ask it with compose ps using --context.
Task 15: Diagnose “wrong context” from the prompt with an explicit marker
cr0x@server:~$ docker context show
prod
What the output means: the CLI prints the active context name, nothing more.
Decision: wire this into your shell prompt (PS1) or your tmux status bar. You want the context visible like your current git branch.
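A minimal prompt integration, assuming bash; adapt the same idea for zsh or a tmux status-right segment:

```shell
# Sketch: surface the active Docker context in the bash prompt,
# the same way you would show a git branch. Prints nothing if the
# docker CLI is unavailable.
docker_ctx_ps1() {
  command -v docker >/dev/null 2>&1 || return 0
  printf '[%s]' "$(docker context show 2>/dev/null)"
}
PS1='\u@\h:\w $(docker_ctx_ps1)\$ '
```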
Task 16: Check event stream on the remote daemon (great during incidents)
cr0x@server:~$ docker --context prod events --since 10m
2026-01-03T10:02:21.223456789Z container die a12b3c4d5e6f (exitCode=137, image=registry/app:web, name=app-web-1)
2026-01-03T10:02:24.019283746Z container start 9f8e7d6c5b4a (image=registry/app:web, name=app-web-1)
What the output means: you’re seeing container lifecycle churn; exit 137 usually implies SIGKILL, commonly OOM-kill or manual kill.
Decision: if you see repeated dies/starts, stop redeploying. Look at memory limits, kernel OOM logs, or application crashes.
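If you want a count instead of eyeballing the stream, pipe events through a small filter. A sketch; it expects lines shaped like the output of `docker events --filter event=die --format '{{.Status}} {{.Actor.Attributes.exitCode}}'`, and the function name is made up:

```shell
# Sketch: count SIGKILL-style deaths (exit code 137) from an event stream.
# Reads pre-formatted "STATUS EXITCODE" lines on stdin.
count_oom_suspects() {
  grep -c '^die 137$' || true   # grep -c prints 0 on no match; keep exit 0
}
```

Example: `docker --context prod events --since 10m --filter event=die --format '{{.Status}} {{.Actor.Attributes.exitCode}}' | count_oom_suspects`.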
Joke #2: “It’s just a quick restart” is the ops equivalent of “I’ll just have one cookie.”
Security and access control: keep the daemon on a leash
Docker contexts don’t magically secure anything. They make it easier to use secure transports and harder to accidentally do dumb things.
The daemon remains a high-privilege control plane. Treat it like SSH+sudo combined.
SSH contexts: the default you want
With ssh:// contexts, Docker runs a helper command on the remote host to establish a tunneled connection.
This gives you:
- SSH key-based authentication and your usual enterprise guardrails (bastions, MFA wrappers, session recording).
- Host key verification (if you didn’t disable it like a monster).
- No need to expose Docker’s TCP API port.
TCP contexts: use only with TLS and only when justified
If you have a legacy environment that requires tcp://host:2376 with TLS certs, you can represent it as a context.
But remember: Docker’s unauthenticated TCP API is basically “root on the network.”
Don’t do tcp://0.0.0.0:2375 unless your threat model is “we enjoy chaos.”
Least privilege (as much as Docker allows)
On classic Docker Engine, anyone who can access the Docker API can usually mount the filesystem, run privileged containers,
and escalate. Your real controls are:
- Who can SSH to the host (and from where).
- Which user account is used for contexts (separate “deploy” identities are healthier than personal accounts).
- Auditability of changes (events, system logs, and deployment pipelines).
- Host hardening: AppArmor/SELinux, rootless mode where feasible, and carefully chosen daemon settings.
Operational hygiene that pays off
- Use separate contexts for separate environments. Do not reuse “prod” for multiple hosts.
- Put the environment name in the context description and in the remote host banner (motd) if you can.
- Make context switching explicit in runbooks and scripts: use --context.
- Log “what context was used” in CI/CD output. It’s banal, and it helps during blame-free archaeology.
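That CI logging can be two lines at the top of each job. A sketch; the function name is made up, and the inspect template assumes the standard docker endpoint key in the context metadata:

```shell
# Sketch: record which context and endpoint this job will talk to,
# so post-incident archaeology is reading logs, not guessing.
log_docker_target() {
  ctx="$(docker context show)" || return 1
  host="$(docker context inspect --format '{{(index .Endpoints "docker").Host}}' "$ctx")" || return 1
  echo "docker-context=$ctx endpoint=$host"
}
```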
Three corporate mini-stories from the trenches
Incident caused by a wrong assumption: “default” meant “dev”
A team had a jump host that “everyone used” for container operations. It was maintained by the platform group, and it had Docker installed.
The assumption—never stated, always implied—was that the jump host’s Docker was just a convenience for tooling. People used it to run linters, pull images, maybe test Compose files.
One engineer got paged for a staging issue. They SSH’d into the jump host, ran docker ps, saw a bunch of containers, and concluded staging was healthy.
Then they ran a cleanup: docker system prune -af. The command completed quickly. Too quickly.
The jump host’s Docker daemon wasn’t “local dev.” It was configured—via an old, exported DOCKER_HOST in /etc/profile.d—to point at a shared production engine.
The containers they saw were production containers. The prune deleted images and build cache on production, which triggered a cascade of image pulls and restarts during peak traffic.
The postmortem wasn’t about “who typed the command.” It was about invisible state and brittle assumptions.
The fix was simple and slightly humiliating: remove global DOCKER_HOST, define contexts, and update the jump host prompt to display docker context show.
They also put a small wrapper in place: destructive commands required --context explicitly.
It didn’t prevent all mistakes, but it stopped this entire class of “I thought I was on dev” failures.
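The wrapper idea from that fix can be sketched in a few lines of shell. This is illustrative, not the team’s actual code: the destructive-subcommand list is deliberately small, and extend it to taste.

```shell
# Sketch: refuse obviously destructive docker subcommands unless the
# caller passed --context explicitly. Pattern list is illustrative.
docker_safe() {
  case " $* " in
    *" prune "*|*" rm "*|*" rmi "*|*" down "*)
      case " $* " in
        *" --context "*) ;;  # explicit target given: allow
        *)
          echo "refusing destructive command without an explicit --context" >&2
          return 1
          ;;
      esac
      ;;
  esac
  docker "$@"
}
```

Usage: `docker_safe --context prod system prune -af` runs; `docker_safe system prune -af` is rejected.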
Optimization that backfired: remote TCP endpoint “for speed”
Another org wanted faster CI builds. Someone noticed that SSH contexts add overhead: establishing SSH sessions, some latency, occasional weirdness with bastions.
So they “optimized” by exposing the Docker daemon over TCP inside the internal network. It was behind a firewall, and they promised to add TLS later.
It worked. Builds were faster. The team celebrated by increasing parallel jobs, which increased Docker API load, which increased daemon CPU, which slowed everything again.
Meanwhile, “TLS later” quietly turned into “TLS never,” because the pipeline was already dependent on the insecure endpoint.
The real backfire came months later during a network segmentation project.
Firewall rules changed, and suddenly a subset of runners could still reach the Docker API while others couldn’t. Builds became flaky, and retries overloaded the daemon further.
Engineers spent days debugging “Docker is unreliable” when the issue was “you built a central remote daemon with a fragile network dependency.”
The eventual fix wasn’t glamorous: bring back SSH contexts for ad-hoc access, and move CI to per-runner builders or a properly managed build cluster.
Contexts were still used, but the endpoint strategy changed. The optimization had targeted the wrong bottleneck.
The lesson: shaving milliseconds off a control channel is rarely the real performance problem. You usually want locality, caching, and capacity—not a wide-open daemon port.
Boring but correct practice that saved the day: explicit contexts everywhere
A more mature team had a rule: every runbook command included --context. Every single one.
It was mildly annoying. People complained. Then they stopped noticing it, like seatbelts.
One night, they had a production incident involving a runaway log volume that filled disk.
The on-call engineer had staging open in a terminal tab and production in another. Under stress, that’s a recipe for cross-environment mistakes.
They followed the runbook. Every command was pinned: docker --context prod ....
They inspected volume usage, stopped the correct noisy container, and restarted it with a logging limit they’d already validated in staging.
The best part: during the post-incident review, they had clean terminal logs and CI logs that showed exactly what context was used for each action.
No forensic guessing, no “I’m pretty sure I was in prod.” The record was clear.
Boring practices don’t get conference talks. They also keep you employed.
Fast diagnosis playbook (find the bottleneck fast)
When “Docker is slow” or “the remote context is flaky,” don’t randomly change settings. Triage like an adult:
identify whether the bottleneck is transport, daemon health, registry/image pulls,
storage, or your own client machine.
First: confirm you’re debugging the right target
cr0x@server:~$ docker context show
prod
Decision: if the context isn’t the environment you think it is, stop. Fix that first. Everything else is noise.
Second: measure basic control-plane latency
cr0x@server:~$ time docker --context prod version
Client: Docker Engine - Community
Version: 25.0.3
API version: 1.45
...
real 0m0.612s
user 0m0.078s
sys 0m0.021s
What it means: sub-second is fine. Multiple seconds suggests SSH/bastion/DNS issues or a saturated daemon.
Decision: if this is slow, don’t start with “Docker storage tuning.” Start with network/SSH and daemon load.
Third: check daemon health and resource pressure
cr0x@server:~$ docker --context prod info | sed -n '1,35p'
Client:
Version: 25.0.3
Context: prod
Debug Mode: false
Server:
Containers: 31
Running: 27
Paused: 0
Stopped: 4
Images: 42
Server Version: 25.0.3
Storage Driver: overlay2
Logging Driver: json-file
Cgroup Driver: systemd
...
Decision: confirm storage driver, logging driver, and general container/image counts. If these differ across environments, behavior will differ too.
Fourth: identify whether storage is the real culprit
cr0x@server:~$ docker --context prod system df -v | sed -n '1,40p'
Images space usage:
REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS
registry/app web 2aa1b3c4d5e6 3 hours ago 412.5MB 0B 412.5MB 4
...
Decision: if image churn is high, focus on pull speed, registry performance, and caching.
If volumes dominate, focus on lifecycle/retention and filesystem performance.
Fifth: observe container churn and restarts
cr0x@server:~$ docker --context prod ps --format 'table {{.Names}}\t{{.Status}}\t{{.RunningFor}}'
NAMES STATUS RUNNING FOR
app-web-1 Up 3 hours (healthy) 3 hours
app-db-1 Up 3 hours 3 hours
worker-1 Restarting (1) 5 seconds ago 8 minutes
Decision: restart loops will make everything look “slow” because the daemon is constantly creating/destroying resources.
Fix the crashing container before you tune anything else.
Sixth: if SSH contexts fail intermittently, isolate SSH
cr0x@server:~$ ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=2 deploy@prod-01 'echo ok'
ok
Decision: if this drops, your problem is network path, bastion stability, or SSH timeouts—not Docker contexts.
Common mistakes: symptom → root cause → fix
Mistake 1: “I deployed to the wrong host”
Symptom: staging changes appear in prod, or prod changes appear nowhere you expect.
Root cause: implicit current context, or DOCKER_HOST overriding what you thought was active.
Fix: use docker --context ... in runbooks/scripts; remove global DOCKER_HOST; show context in prompt; enforce explicit selection for destructive actions.
Mistake 2: “docker context use prod” works, but commands hang
Symptom: CLI stalls on ps, info, pull, or compose up.
Root cause: SSH path issue (bastion, DNS, MTU), or remote daemon under CPU/memory pressure.
Fix: time docker --context prod version; test plain SSH; check remote host load; consider SSH keepalives.
Mistake 3: “It says permission denied to the Docker daemon”
Symptom: remote context errors like “permission denied while trying to connect to the Docker daemon socket.”
Root cause: the remote user can SSH but can’t access /var/run/docker.sock (not in docker group, rootless daemon mismatch).
Fix: grant correct group membership (with care), or use rootless Docker endpoints intentionally; validate with ssh host 'docker ps'.
Mistake 4: “My context points to an old host after rebuild”
Symptom: context connects, but you see a fresh empty daemon or unexpected engine name/version.
Root cause: hostname recycled, DNS updated, or a new instance took the name.
Fix: validate identity with docker info --format 'Name=...'; pin SSH config to stable host keys; update contexts after rebuilds.
Mistake 5: “Exported context works on one laptop but not another”
Symptom: imported context exists but fails to connect via SSH.
Root cause: SSH agent/keys differ, SSH config differs, or known_hosts mismatch.
Fix: ensure the importing machine has the right SSH identity and config; test with plain SSH; avoid relying on agent-forwarding in automation.
Mistake 6: “Compose behaves differently than docker”
Symptom: docker ps shows one environment, but docker compose affects another (or vice versa).
Root cause: mixed use of DOCKER_HOST, DOCKER_CONTEXT, and “current context,” or running different shells with different env vars.
Fix: standardize on --context; print env vars in debug shells; avoid exporting DOCKER_HOST in profiles.
Mistake 7: “The context list shows ERROR”
Symptom: docker context ls shows an error column populated for a context.
Root cause: endpoint unreachable or misconfigured; Docker CLI attempted validation.
Fix: inspect the context; test SSH; recreate the context if the endpoint string is wrong; remove stale contexts.
Mistake 8: “We used TCP because it’s internal; now security is angry”
Symptom: audit finding: Docker API exposed without TLS, broad firewall rules, mysterious daemon access.
Root cause: convenience-driven endpoint design; remote API treated like a harmless service.
Fix: move to SSH contexts or TLS-authenticated endpoints; restrict network paths; rotate credentials; treat daemon access as privileged.
Checklists / step-by-step plan
Step-by-step: adopt contexts without breaking everyone
- Inventory endpoints: list where Docker commands currently land (developer laptops, jump hosts, CI runners).
- Remove hidden globals: hunt for DOCKER_HOST in shell profiles and CI environment injection.
- Create named contexts: at least dev, staging, prod. Use SSH endpoints.
- Add descriptions: include environment, region, and ownership. Humans misread; metadata helps.
- Standardize runbooks: update every command to include --context.
- Make context visible: prompt/tmux integration so you always see where you are.
- Define break-glass procedure: export/import context flow, credential handling, and audit expectations.
- CI/CD discipline: use explicit contexts or explicit endpoints per job; avoid “current context” reliance.
- Rotate access: if you used TCP without TLS historically, consider that compromised and rotate accordingly.
- Train with drills: practice “switch context, verify identity, run command” until it’s automatic.
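The “create named contexts” and “add descriptions” steps above can be scripted in one pass. A sketch; the hostnames (dev-01, staging-01, prod-01), the deploy user, and the owner string are placeholders for your environment:

```shell
# Sketch: create the baseline contexts with SSH endpoints and descriptions.
# All hostnames and the deploy user below are placeholders.
create_baseline_contexts() {
  for env in dev staging prod; do
    docker context create "$env" \
      --description "$env environment (owner: platform team)" \
      --docker "host=ssh://deploy@${env}-01"
  done
}
```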
Operational checklist: before you run something risky
- Run docker context show.
- Run docker info --format 'Name={{.Name}}' against that context.
- Confirm you are in the right terminal tab/window (sounds dumb; saves outages).
- Prefer docker --context X ... for the risky command.
- For cleanup: inspect docker system df first; don’t prune blindly.
Hardening checklist: make it harder to do the wrong thing
- Separate accounts for deploy vs admin; don’t use personal SSH keys for automation.
- Use SSH contexts; avoid exposing Docker TCP API.
- Enforce host key checking; manage known_hosts centrally where appropriate.
- Limit who can reach Docker hosts (network ACLs, bastions, identity-aware proxies).
- Audit membership in the docker group like you audit sudoers.
FAQ
1) Is a Docker context the same as a Kubernetes context?
No. Kubernetes contexts live in kubeconfig and select clusters/namespaces/users for kubectl.
Docker contexts select Docker CLI endpoints (local socket, SSH, TCP/TLS). Similar idea, different universe.
2) Should I use docker context or DOCKER_HOST?
Use contexts for humans and most automation. Reserve DOCKER_HOST for legacy tooling you can’t change,
and keep it tightly scoped per process, not exported in profiles.
3) What’s the safest way to manage production access?
Use SSH contexts with a dedicated deploy account, locked-down network paths (bastion), and audited key management.
Make every prod command explicit via --context prod.
4) Why does Docker over SSH sometimes feel slower?
Because it’s doing more: SSH setup, authentication, and a tunneled stdio connection. If latency is painful,
the real fix is usually better network paths, persistent control masters, or moving build workloads closer to the daemon.
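Persistent control masters are a client-side OpenSSH feature, so they help every SSH context without touching Docker. A sketch for ~/.ssh/config, with an illustrative host alias:

```
# Reuse one SSH connection for repeated docker calls to the same host,
# and keep it warm for five minutes after the last command.
Host prod-01
  ControlMaster auto
  ControlPath ~/.ssh/cm-%r@%h-%p
  ControlPersist 5m
  ServerAliveInterval 5
  ServerAliveCountMax 2
```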
5) Can I store contexts in git?
Don’t commit exported contexts. They can include sensitive TLS material and operational endpoints.
Instead, store the instructions to create contexts, and generate them at runtime.
6) How do I prevent “current context” surprises in scripts?
Use docker --context NAME on every command. Scripts should be deterministic.
If a script depends on your interactive shell state, it’s not a script; it’s a suggestion.
7) Can Docker contexts help with multi-region rollouts?
Yes. Create contexts like prod-eu-west and prod-us-east, then run the same commands against each explicitly.
Just be disciplined about naming and identity checks.
8) Why does docker context ls show an error for one context?
The endpoint is unreachable or misconfigured, or Docker tried and failed a connection test.
Inspect the context, validate SSH manually, and recreate it if the host string is wrong.
9) Are contexts shared between users on the same machine?
Usually no. Contexts are stored in the user’s Docker config directory. That’s good: it avoids surprise cross-user coupling.
It also means you must manage them per account on shared jump boxes.
10) Does using contexts make Docker “safe” for multi-tenant hosts?
Contexts are a client-side selector. Multi-tenancy safety depends on daemon configuration, kernel isolation, and access control.
Contexts help you avoid operator mistakes; they don’t provide tenant isolation.
Next steps you should do this week
If you operate more than one Docker host, you’re already in “multiple targets” land. Act like it.
- Create contexts for each environment using SSH endpoints. Name them plainly.
- Make context visible in your shell prompt or tmux status bar.
- Update runbooks so every command pins --context.
- Remove global DOCKER_HOST from interactive environments and shared jump hosts.
- Add an identity preflight: docker --context X info --format 'Name={{.Name}} Server={{.ServerVersion}}'.
- Audit access: who can reach prod hosts, who is in the docker group, and how keys are managed.
Contexts won’t prevent every failure. They will prevent the dumb ones. Those are the ones that keep showing up at 2 a.m.