If you’ve ever typed docker ps on a server and felt like a responsible adult, here’s a fun twist:
that same power can be reachable from the open internet with one sloppy config change. No exploit chain. No fancy payload.
Just “hello, daemon” and your host becomes someone else’s compute.
The Docker Remote API is not “just another service port.” It’s a root-adjacent control plane that can start privileged containers,
mount the host filesystem, and quietly persist. Treat it like SSH with password auth and a public IP — because functionally it’s worse.
Why the Docker Remote API is basically root
Docker’s daemon (dockerd) is the authority that creates namespaces, sets cgroups, mounts filesystems,
manages container networking, and starts processes on your behalf. When you control the daemon, you control the host.
The Remote API (over a Unix socket or TCP) isn’t a “management interface” like a read-only dashboard. It’s the steering wheel.
Here’s the core problem: the daemon performs privileged actions, and it trusts requests coming from its API endpoint.
If that endpoint is reachable and not strongly authenticated, anyone who can talk to it can:
- Start a container with --privileged
- Bind-mount / from the host into a container
- Write SSH keys into /root/.ssh/authorized_keys
- Install persistence via systemd units, cron, or drop-in services
- Exfiltrate secrets from environment variables, volumes, and image layers
- Pivot into your network using the host’s routing and credentials
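To make that concrete, here is a minimal sketch of the classic takeover primitive, assuming the attacker can already talk to the daemon (locally via the socket, or remotely via -H against an exposed endpoint); the image choice is arbitrary:

docker run --rm -it --privileged -v /:/host alpine chroot /host /bin/sh   # a root shell on the host's own filesystem

Everything the daemon can do to the host, that shell can now do directly.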
This is why “Docker API exposed” incidents often skip the exploit phase and go straight to monetization: cryptomining, botnets,
credential theft, or lateral movement. They don’t need to break in. You opened the door and wrote “admin inside.”
Joke #1: Exposing 2375 to the internet is like putting your house key under the doormat — then live-streaming the doormat.
Facts & historical context you should know
These are short, concrete bits of context that matter because they explain why this problem keeps recurring.
Some are history, some are “how we got here,” all of them show up in postmortems.
- The default Docker daemon listens on a Unix socket (/var/run/docker.sock), not TCP, specifically to avoid remote exposure by default.
- Port 2375 is conventionally “Docker over TCP without TLS”; port 2376 is “Docker over TCP with TLS.” Many scanners look for 2375 first.
- Early Docker tooling normalized remote daemons (especially in CI and “Docker-in-Docker” workflows), and habits stuck even as threat models changed.
- Docker Machine popularized remote Docker endpoints for provisioning hosts; a lot of old blog posts still show insecure patterns copied into modern fleets.
- The Remote API is HTTP-based. If you can reach it, you can talk to it with curl. That’s convenient for automation and catastrophic for exposed networks.
- Docker’s “docker” group is root-equivalent on most systems because it grants access to the daemon socket. This is not subtle; it’s just frequently ignored.
- Attackers industrialized scanning for exposed Docker years ago. This isn’t a niche threat; it’s automated background radiation like SSH brute force.
- Cloud security groups and firewall defaults changed over time. A “temporary” inbound rule sometimes becomes permanent because nobody owns the cleanup.
Threat model: how attackers use an exposed daemon
The common kill chain is embarrassingly simple
When Docker listens on tcp://0.0.0.0:2375 (or a public interface) without TLS client auth,
the attacker workflow is basically:
- Find an IP with 2375 open.
- Call the API to list containers/images.
- Run a container with a bind mount of the host root filesystem.
- Modify host files to persist access.
- Optionally, run a miner in a container and walk away.
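On the wire, steps 2 through 4 are plain HTTP against the standard Engine API. A rough sketch against a placeholder TARGET, with a deliberately harmless sleep standing in for whatever the attacker actually runs:

# Step 2: enumerate containers
curl -s http://TARGET:2375/containers/json?all=1
# Step 3: pull an image, create a privileged container with the host root bind-mounted, and start it
curl -s -X POST "http://TARGET:2375/images/create?fromImage=alpine&tag=latest"
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"Image":"alpine:latest","Cmd":["sleep","infinity"],"HostConfig":{"Privileged":true,"Binds":["/:/host"]}}' \
  "http://TARGET:2375/containers/create?name=not-a-miner"
curl -s -X POST http://TARGET:2375/containers/not-a-miner/start

Step 4 is then just writing to /host from inside that container.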
Why containers don’t save you from daemon control
Containers are isolation primitives, not magical safety. The isolation boundary is enforced by the kernel, but the entity configuring
that boundary is the daemon. If you can tell the daemon “mount the host root inside this container,” the kernel will comply because
the daemon is allowed to ask.
What about “rootless Docker”?
Rootless Docker reduces blast radius, but it doesn’t make an exposed API harmless. It can still give an attacker the ability to run arbitrary
workloads as that user, access that user’s secrets, and pivot. Rootless helps; it’s not a get-out-of-jail-free card.
A single quote worth remembering
“Hope is not a strategy.” (paraphrased idea, commonly attributed to engineers and military planners)
Treat “we don’t think it’s exposed” as hope.
Fast diagnosis playbook
You’re on-call. Someone says, “Why is this host melting?” Or worse: “Why do we have outbound traffic to weird places?”
Don’t start by arguing about container philosophy. Start by locating control-plane exposure and the actual workload.
First: is the daemon exposed on TCP?
- Check listening ports (ss/lsof) and Docker service flags.
- Check firewall/security group state, not just local config.
- If you see 0.0.0.0:2375 or :::2375, treat it as a live incident until proven otherwise.
Second: is there evidence of unauthorized containers or images?
- List running containers; look for miners, random image names, containers with --privileged, host mounts, or odd network settings.
- Inspect recent container creation timestamps.
- Check for new users/ssh keys/systemd units created by containerized processes writing to the host.
Third: contain, then investigate
- Block inbound access to the daemon at the network edge immediately.
- Snapshot evidence (process lists, container metadata, logs) before you wipe anything.
- Rotate credentials that could have been accessed: cloud instance roles, registry creds, app secrets on disk.
Hands-on tasks: detect, confirm, and decide (commands included)
These are practical tasks you can run on a host. Each one includes what the output means and what decision you make from it.
No magic, no “just check your firewall.” You’re the firewall now.
Task 1: Check whether Docker is listening on TCP
cr0x@server:~$ sudo ss -lntp | grep -E ':(2375|2376)\s'
LISTEN 0 4096 0.0.0.0:2375 0.0.0.0:* users:(("dockerd",pid=1024,fd=6))
What it means: dockerd is listening on all IPv4 interfaces on 2375, plaintext.
Decision: treat as critical exposure. Proceed to containment: block at firewall and reconfigure daemon.
Task 2: Confirm Docker daemon configuration via systemd
cr0x@server:~$ systemctl cat docker | sed -n '1,120p'
# /lib/systemd/system/docker.service
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375
What it means: The daemon was explicitly started with a TCP listener. This is not accidental kernel behavior; it’s config.
Decision: remove the TCP host unless you are implementing mutual TLS and strict network controls.
Task 3: Check Docker info for configured hosts and security options
cr0x@server:~$ docker info | sed -n '1,80p'
Client:
Context: default
Debug Mode: false
Server:
Containers: 12
Running: 3
Paused: 0
Stopped: 9
Server Version: 26.1.0
Storage Driver: overlay2
Security Options:
apparmor
seccomp
Profile: builtin
What it means: This doesn’t directly tell you the daemon bind addresses, but it shows you the security profile baseline.
Decision: if you later find unauthorized containers, you’ll want to know whether AppArmor/seccomp was in effect (and whether --privileged bypassed it).
Task 4: Identify how the daemon is bound (daemon.json)
cr0x@server:~$ sudo cat /etc/docker/daemon.json
{
"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
What it means: Remote API over plaintext TCP is configured persistently.
Decision: delete the TCP host entry or migrate to TLS on 2376 with client certificate auth.
Task 5: Check firewall state on the host (UFW example)
cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
To Action From
22/tcp ALLOW IN 203.0.113.0/24
2375/tcp ALLOW IN Anywhere
What it means: You “default deny,” but then allowed the one port that can hand over the keys.
Decision: remove this rule immediately unless it’s strictly limited to a management network with TLS.
Task 6: Validate exposure from outside (using a second box)
cr0x@server:~$ curl -s http://198.51.100.10:2375/version
{"Platform":{"Name":"Docker Engine - Community"},"Components":[{"Name":"Engine","Version":"26.1.0","Details":{"ApiVersion":"1.45"}}],"ApiVersion":"1.45","MinAPIVersion":"1.24","GitCommit":"...","GoVersion":"go1.22.2","Os":"linux","Arch":"amd64","KernelVersion":"6.8.0-41-generic","BuildTime":"..."}
What it means: If you can fetch /version unauthenticated, anyone can.
Decision: this is an incident, not a “ticket.” Contain now; investigate next.
Task 7: List containers via the remote API (this should not work anonymously)
cr0x@server:~$ curl -s http://198.51.100.10:2375/containers/json?all=1 | head
[{"Id":"b1c...","Names":["/web-1"],"Image":"nginx:alpine","State":"running","Status":"Up 3 days"},
{"Id":"f9a...","Names":["/xmrig"],"Image":"unknown:latest","State":"running","Status":"Up 2 hours"}]
What it means: Remote unauthenticated enumeration. The presence of a suspicious container name/image is a red flag.
Decision: start evidence collection and containment. Do not “just delete it” until you’ve captured metadata and logs.
Task 8: Inspect a suspicious container for host mounts and privilege
cr0x@server:~$ docker inspect xmrig --format '{{.HostConfig.Privileged}} {{json .Mounts}}'
true [{"Type":"bind","Source":"/","Destination":"/host","Mode":"rw","RW":true,"Propagation":"rprivate"}]
What it means: This container is privileged and has the host root mounted read-write at /host.
Decision: assume host compromise. Plan for rebuild/restore, credential rotation, and full audit.
Task 9: Check for recently created containers (timeline clue)
cr0x@server:~$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Names}}\t{{.CreatedAt}}\t{{.Status}}' | head -n 10
CONTAINER ID IMAGE NAMES CREATED AT STATUS
f9a2d1c3ab11 unknown:latest xmrig 2026-01-03 01:12:44 +0000 Up 2 hours
b1c9aa88d220 nginx:alpine web-1 2025-12-30 09:01:02 +0000 Up 3 days
What it means: A new container appeared recently. If nobody can explain it, it’s unauthorized until proven otherwise.
Decision: correlate with logs (dockerd, auditd, cloud flow logs) and contain.
Task 10: Review Docker daemon logs for remote API access
cr0x@server:~$ sudo journalctl -u docker --since "6 hours ago" | tail -n 20
time="2026-01-03T01:12:43.991Z" level=info msg="API listen on [::]:2375"
time="2026-01-03T01:12:44.120Z" level=info msg="POST /v1.45/containers/create"
time="2026-01-03T01:12:44.348Z" level=info msg="POST /v1.45/containers/f9a2d1c3ab11/start"
What it means: You have a timestamped record of container creation via the API.
Decision: preserve logs. If you have centralized logging, ensure retention and export for incident review.
Task 11: Check who has access to the Docker socket locally
cr0x@server:~$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 3 00:01 /var/run/docker.sock
What it means: Members of the docker group can control the daemon.
Decision: keep docker group membership minimal and audited; treat it like sudo access.
Task 12: Enumerate docker group membership
cr0x@server:~$ getent group docker
docker:x:999:ci-runner,alice
What it means: Two users effectively have root-equivalent control via Docker.
Decision: if you don’t have a strong reason, remove humans from this group and force privileged ops through controlled automation.
Task 13: Immediate containment — block TCP 2375 locally (iptables)
cr0x@server:~$ sudo iptables -I INPUT -p tcp --dport 2375 -j DROP
cr0x@server:~$ sudo iptables -L INPUT -n --line-numbers | head
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:2375
What it means: You’ve stopped the bleeding at the host level (not a substitute for edge firewall fixes).
Decision: keep this rule until you’ve corrected daemon config and verified exposure is gone from outside.
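If the host runs nftables rather than legacy iptables, a roughly equivalent containment rule is below; the table name "emergency" is arbitrary, and the negative priority just makes it run ahead of the usual filter chains:

sudo nft add table inet emergency
sudo nft add chain inet emergency input '{ type filter hook input priority -10 ; policy accept ; }'
sudo nft add rule inet emergency input tcp dport 2375 drop
sudo nft list table inet emergency   # verify the rule is in place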
Task 14: Correct the daemon to remove the TCP listener (systemd override)
cr0x@server:~$ sudo systemctl edit docker
# Creates /etc/systemd/system/docker.service.d/override.conf
cr0x@server:~$ sudo cat /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
What it means: You’ve overridden the unit to drop the TCP host argument.
Decision: reload systemd and restart Docker; then re-check listening sockets and external reachability.
Task 15: Restart Docker safely and confirm sockets
cr0x@server:~$ sudo systemctl daemon-reload
cr0x@server:~$ sudo systemctl restart docker
cr0x@server:~$ sudo ss -lntp | grep -E ':(2375|2376)\s' || echo "no docker tcp listener"
no docker tcp listener
What it means: The daemon is no longer listening on TCP.
Decision: now verify from an external network. Don’t trust local checks alone.
Task 16: If you truly need remote access, enforce mutual TLS on 2376
If your answer is “but our automation needs it,” fine. You still don’t get to use plaintext.
With Docker, “TLS enabled” is meaningless if you don’t require client certificate authentication.
cr0x@server:~$ sudo cat /etc/docker/daemon.json
{
"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
"tlsverify": true,
"tlscacert": "/etc/docker/pki/ca.pem",
"tlscert": "/etc/docker/pki/server-cert.pem",
"tlskey": "/etc/docker/pki/server-key.pem"
}
What it means: The daemon will require a client cert signed by your CA. One gotcha: dockerd refuses to start if hosts are configured both via -H flags in the systemd unit and in daemon.json, so define them in exactly one place.
Decision: only allow 2376 from a management network, and distribute client keys like you distribute SSH keys: minimally, rotated, and logged.
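If you don’t already have a PKI for this, a bare-bones sketch of generating the CA, server, and client material with openssl follows. Filenames match the daemon.json above, the IP in subjectAltName is the placeholder address used throughout, and the subject names and validity periods are examples, not recommendations:

# CA
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 365 -sha256 -key ca-key.pem -subj "/CN=docker-ca" -out ca.pem
# Server certificate: the SAN must match how clients address the daemon
openssl genrsa -out server-key.pem 4096
openssl req -new -key server-key.pem -subj "/CN=docker-host" -out server.csr
printf "subjectAltName = IP:198.51.100.10\nextendedKeyUsage = serverAuth\n" > server-ext.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -extfile server-ext.cnf -out server-cert.pem
# Client certificate: this is the credential you distribute "like SSH keys"
openssl genrsa -out client-key.pem 4096
openssl req -new -key client-key.pem -subj "/CN=ci-runner" -out client.csr
printf "extendedKeyUsage = clientAuth\n" > client-ext.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -extfile client-ext.cnf -out client-cert.pem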
Task 17: Test TLS from a client with a cert (should succeed)
cr0x@server:~$ docker --host tcp://198.51.100.10:2376 \
--tlsverify \
--tlscacert ./ca.pem \
--tlscert ./client-cert.pem \
--tlskey ./client-key.pem \
version
Client: Docker Engine - Community
Version: 26.1.0
API version: 1.45
Server: Docker Engine - Community
Engine:
Version: 26.1.0
API version: 1.45 (minimum version 1.24)
What it means: Your client can authenticate and talk to the daemon.
Decision: proceed only if you can also prove unauthenticated access fails.
Task 18: Test unauthenticated access to 2376 (should fail)
cr0x@server:~$ curl -s http://198.51.100.10:2376/version | head
Client sent an HTTP request to an HTTPS server.
What it means: Plain HTTP is rejected. Now try HTTPS without client cert.
Decision: if HTTPS without client cert works, you’re still exposed.
cr0x@server:~$ curl -sSk https://198.51.100.10:2376/version
curl: (56) OpenSSL SSL_read: ... alert bad certificate, errno 0
What it means: The connection is rejected because no valid client certificate was presented (exact error text varies with curl and TLS library versions; the daemon logs a matching “bad certificate” TLS handshake error).
Decision: this is the minimum bar for “remote Docker API” not being an open root portal.
Task 19: Look for suspicious persistence on the host (systemd)
cr0x@server:~$ systemctl list-unit-files --type=service | grep -E 'docker|container|update|agent' | tail
docker.service enabled
containerd.service enabled
system-update.service disabled
What it means: This is a light-touch check. It won’t catch everything, but it often catches lazy persistence.
Decision: if you suspect compromise, follow with file integrity checks and a rebuild plan.
Hardening patterns that actually hold up in production
1) The correct default: no TCP socket, Unix socket only
The cleanest control is not to have a remote control plane at all. Use SSH to reach the host, then talk to the Unix socket locally.
Yes, it’s less “cloud-native.” It’s also less “criminal-friendly.”
If you need remote orchestration, consider using tooling that doesn’t require exposing the daemon broadly. Many teams use SSH tunnels
for the rare cases where remote API access is required temporarily.
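Two such patterns, sketched with placeholder hostnames. The first needs a reasonably recent Docker CLI; the second needs OpenSSH new enough to forward Unix sockets, and in both cases the remote user still needs socket access on the far end:

# Pattern A: let the Docker CLI tunnel over SSH instead of exposing TCP
docker -H ssh://admin@build-host-01 ps

# Pattern B: forward the remote Unix socket to a local one for the session, then tear it down
ssh -nNT -L /tmp/remote-docker.sock:/var/run/docker.sock admin@build-host-01 &
DOCKER_HOST=unix:///tmp/remote-docker.sock docker ps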
2) If you must expose Docker: mutual TLS, tight network scoping, and short-lived credentials
Mutual TLS is non-negotiable. Not “TLS with a server cert.” Not “basic auth behind a proxy.” Client certificates, signed by a CA you control,
rotated like any other credential.
Then scope access. “Only our office IPs” is not scoping; it’s wishful thinking with a side of VPN fragility. You want:
- Inbound allowed only from a dedicated management subnet
- Explicit deny from everywhere else
- Security group rules reviewed like code (because they are production behavior)
3) Don’t use a generic reverse proxy in front of the Docker API unless you really know what you’re doing
Putting Docker’s HTTP API behind a reverse proxy sounds tidy until someone adds a permissive route, disables client auth “just for testing,”
or logs sensitive headers. Also: the Docker API is not designed as a public web app. It’s a control plane.
If you insist on a proxy, it needs mutual TLS end-to-end, strict allowlists of endpoints, and logging that’s useful for incident response
without spilling secrets. Most proxies in real life end up being a complicated way to be insecure.
4) Treat docker group membership as privileged access
This is where corporate reality bites. People add CI runners, developers, and “temporary contractors” to the docker group because it’s faster than
figuring out proper privilege boundaries. That’s how you get an internal lateral movement path that bypasses sudo auditing.
Your goal: keep daemon access behind automation that has approvals and logs, or behind root shells with MFA.
Everything else is just distributed root with extra steps.
5) Bake detection into your baseline
You can prevent exposure and still want detection because reality is messy. Baseline checks that catch the worst mistakes:
- Alert if dockerd binds to 0.0.0.0:2375 or :::2375
- Alert on firewall rules allowing inbound 2375/2376 from non-management CIDRs
- Alert on new privileged containers or containers mounting /
- Track container create/start events centrally
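A deliberately boring sketch of the first check, suitable for cron or a config-management health check; the logger tag and severity are placeholders for whatever actually pages you:

#!/bin/sh
# Run as root so ss can attribute sockets to processes.
# Alert if dockerd has any TCP listener on the Docker API ports.
if ss -lntp 2>/dev/null | grep dockerd | grep -qE ':(2375|2376)[[:space:]]'; then
  echo "dockerd TCP listener detected on $(hostname)" | logger -t docker-exposure -p auth.crit
  exit 1
fi
exit 0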
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
A mid-sized SaaS company had a staging environment that looked “internal” because it ran in a cloud VPC. The team assumed VPC meant private.
It didn’t. One of the nodes had a public IP for convenience, and a security group rule that allowed inbound 2375 from “Anywhere”
because a contractor needed to run a one-off migration.
Nobody removed the rule. The contractor left. The ticket got buried under a quarterly reorg. A month later the node started running hot,
then got throttled by the cloud provider. The initial hypothesis was “bad deploy” because it usually is.
They rolled back. The CPU stayed pinned.
The on-call engineer finally ran docker ps and saw a container with a nonsense name and a recent creation timestamp.
They killed it. It came back. They killed it again. It came back again. That was the day they learned the daemon was reachable
from outside and the attacker was just re-posting “create container” requests.
Containment was quick: block 2375 at the edge, stop Docker, snapshot the disk for analysis. The painful part was credential rotation.
The staging node had access to shared container registry credentials and a few “temporary” API keys that were also used in dev.
The compromise didn’t jump environments, but the blast radius assessment burned a week.
The wrong assumption wasn’t “Docker is insecure.” The wrong assumption was “internal network == safe by default.”
In cloud networks, internal is a policy you continuously enforce, not a vibe.
Mini-story 2: The optimization that backfired
A large enterprise ran a fleet of build servers. Builds were slow, so an engineer tried to speed them up by letting build jobs talk
directly to a remote Docker daemon over TCP. The theory: avoid nested virtualization and reduce local disk churn.
It worked. Build times improved noticeably.
Then came the “minor simplification.” Instead of dealing with TLS certificates in the CI system, they temporarily switched to 2375
with a plan to “lock it down later,” protected only by IP allowlists. That allowlist lived in a firewall config managed by a separate team.
Change requests took days. So they opened it a bit more. And a bit more. Eventually, “just for a week,” it was reachable from a broad corporate range.
What went wrong wasn’t that a random attacker found it immediately (though that happens plenty). What went wrong was internal lateral movement.
A compromised developer laptop connected to the corporate VPN. From there, it could reach the Docker daemon.
The attacker didn’t need domain admin. They needed one reachable daemon and a way to run containers that could mount secrets from build caches.
The backfire was brutal because the pipeline was now a credential aggregation point: registry tokens, signing keys, dependency proxies.
Even without a full host breakout, reading workspace volumes was enough to do real damage.
The post-incident fix wasn’t “faster builds.” It was “build isolation that doesn’t involve a remotely accessible root API.”
Joke #2: “We’ll add TLS later” is the security equivalent of “I’ll start backing up tomorrow” — a beautiful plan that never survives reality.
Mini-story 3: The boring but correct practice that saved the day
Another company had a strict baseline: Docker daemons were not allowed to listen on TCP, period.
If a team needed remote control, they used SSH to a bastion and a short-lived tunnel to the Unix socket, with session recording.
Everyone hated it a little. Which is how you know it was probably doing something useful.
One afternoon, a monitoring alert fired: outbound connections spiking from a production node to unfamiliar IP ranges.
The immediate suspicion was “exposed Docker API” because the security team had seen that movie before.
The on-call checked ss -lntp and saw nothing on 2375/2376. That ruled out one major class of failure quickly.
They pivoted. Turns out it was a compromised application container calling out, not a compromised host.
Since the daemon wasn’t remotely accessible, the attacker couldn’t easily create privileged containers or mount the host filesystem.
The blast radius stayed inside the app’s permissions and secrets.
The response was still serious: rotate app secrets, patch the vulnerable dependency, redeploy.
But they avoided the “rebuild the node fleet and rotate everything” nightmare. The boring baseline didn’t prevent every problem.
It prevented the problem from becoming a catastrophe.
Common mistakes: symptoms → root cause → fix
1) Symptom: CPU pinned, strange containers, unexplained network egress
Root cause: Docker daemon exposed on 2375; attacker running miner containers and recreating them after deletion.
Fix: block inbound 2375/2376 immediately, remove TCP listener from dockerd, rebuild host if privileged containers mounted the host.
2) Symptom: “We use TLS” but anyone can still connect
Root cause: server-side TLS only; tlsverify not enabled; client cert authentication not enforced.
Fix: set "tlsverify": true and require client certs signed by your CA; confirm unauthorized curl -sk fails.
3) Symptom: Docker is “not listening,” but remote access still works
Root cause: a sidecar proxy or another process is forwarding to the Unix socket; or a different daemon instance is bound via systemd drop-in.
Fix: check systemctl cat docker for overrides, search for proxy configs, and verify actual sockets with ss -lntp.
4) Symptom: Developers can “just run Docker commands” without sudo
Root cause: users are in the docker group, which is effectively root on that host.
Fix: remove unnecessary users from the group; enforce privileged container operations via controlled automation or sudo with auditing.
5) Symptom: Repeated “container came back” after removal
Root cause: an external actor is calling the Remote API to recreate containers; or you have a compromised orchestrator/CI doing it.
Fix: cut network access to the daemon first, then investigate orchestration credentials and logs.
6) Symptom: “We only opened it to the VPC” and still got hit
Root cause: the VPC isn’t as private as you think: public IP attached, peering, VPN, misrouted security group, or an internal host got compromised.
Fix: assume internal networks are hostile; restrict to management subnets, require mutual TLS, and segment build systems from general access.
Checklists / step-by-step plan
Step-by-step: lock down a host today
- Find listeners: run ss -lntp and confirm nothing binds to 2375/2376 on public interfaces.
- Confirm service config: check systemctl cat docker and /etc/docker/daemon.json for tcp:// hosts.
- Block at the edge: remove inbound security group / firewall rules for 2375 and 2376 unless you have a management network and mutual TLS.
- Block locally for defense-in-depth: drop inbound 2375 with iptables/nftables (temporary containment and safety net).
- Reduce privilege spread: audit the docker group; remove users who don’t need it.
- Enable monitoring: alert on new privileged containers, containers mounting /, and any new TCP listener for dockerd.
- Document an exception process: if a team truly needs remote Docker, require mutual TLS + CIDR restrictions + expiration date on firewall rules.
Step-by-step: if you suspect exposure already happened
- Contain: block inbound access to Docker API immediately (edge + local). Don’t debate it.
- Preserve evidence: export journalctl -u docker, docker ps -a, and docker inspect output for suspicious containers, plus system logs.
- Assess host compromise: if you find privileged containers with host mounts, assume host-level tampering.
- Rotate credentials: registry tokens, cloud instance role credentials (if applicable), app secrets on disk, CI credentials used on that node.
- Rebuild cleanly: prefer rebuilding the node from a known-good image over “cleaning up.” Cleanup is how you miss persistence.
- Close the loop: add detection so the same exposure doesn’t quietly return during the next “temporary” change.
A policy stance that works
- Ban plaintext Docker TCP (2375) entirely. No exceptions.
- Allow TLS Docker TCP (2376) only with mutual TLS and strict network scoping.
- Treat Docker daemon access as production root. Because it is.
FAQ
1) Is exposing port 2375 ever acceptable?
No. Not “rarely,” not “behind a firewall,” not “just for a week.” Plaintext unauthenticated Docker API is remote root by design.
If you need remote control, use mutual TLS on 2376 and tight network restrictions—or better, don’t expose it at all.
2) If I bind Docker to 127.0.0.1, am I safe?
Safer, yes. Safe, not automatically. Local-only binds reduce remote exposure, but any local compromise (including SSRF from a web app that can reach localhost)
can still abuse it. Treat it as sensitive even on loopback.
3) Why is the Docker API considered root-equivalent?
Because it can start privileged containers, mount host paths, and manipulate networking. Those are host-admin actions.
If the daemon can do it, and the API can instruct the daemon, then the API is host-admin.
4) Does using Docker rootless mode fix this?
It reduces the blast radius of daemon compromise, but it doesn’t make an exposed API acceptable.
An attacker can still run workloads, steal that user’s secrets, and potentially pivot through credentials and network access.
5) We run Kubernetes; does this still apply?
Yes. Many Kubernetes nodes run container runtimes that are not Docker, but you may still have Docker installed for legacy workloads,
debugging, or CI. Any exposed daemon on a node is an attacker’s foothold into the cluster environment.
6) Can I put basic auth in front of the Docker API with a reverse proxy?
You can, but you shouldn’t call it “secure” unless you also have strong transport security, strict endpoint allowlisting, and a credible credential rotation story.
Mutual TLS is the standard because it’s harder to accidentally weaken.
7) What’s the fastest way to know if we’re exposed right now?
From outside your network, try fetching /version on 2375. If it responds, you’re exposed.
Internally, check ss -lntp and your firewall/security group rules for 2375/2376.
8) If we were exposed, do we really need to rebuild the host?
If attackers ran privileged containers with host mounts or you can’t prove they didn’t, rebuilding is the sane choice.
“We deleted the container” is not a guarantee. Persistence is cheap.
9) How do I keep automation working without exposing Docker?
Run automation on the host (or via SSH), use a bastion with audited access, or move build workflows to isolated builders where the Docker socket never faces untrusted networks.
If remote control is mandatory, use mutual TLS and management subnets.
Conclusion: what to do next
The Docker Remote API is a power tool. Leave it on a public bench and somebody will build a shed you didn’t order.
Your job is to make “accidentally exposed root” structurally difficult.
Practical next steps:
- Scan your fleet for listeners on 2375/2376 and for dockerd arguments that include tcp:// (a quick sweep is sketched below).
- Kill plaintext 2375 everywhere. If you find it, treat it as a security incident until verified otherwise.
- If remote Docker is truly needed, require mutual TLS, restrict by management CIDR, and rotate certs on a schedule you can defend.
- Audit docker group membership like it’s sudo, because it effectively is.
- Write the runbook now, before the miner shows up.
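For that first item, a quick sweep from a management host is often enough to find the embarrassing cases; hosts.txt is a placeholder inventory file, and nc flags may differ slightly between netcat variants:

while read -r h; do
  nc -z -w 2 "$h" 2375 2>/dev/null && echo "$h: 2375 open (plaintext Docker API: treat as an incident)"
  nc -z -w 2 "$h" 2376 2>/dev/null && echo "$h: 2376 open (verify mutual TLS is actually enforced)"
done < hosts.txt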