Nothing says “enjoy your Friday” like a deploy pipeline that worked for months and suddenly trips over: Text file busy. Same code, same hosts, same CI job. Now the release fails halfway through a file copy and leaves your service in that special state where it’s both “up” and “not exactly the version you wanted.”
This is one of those Linux errors that feels petty until you understand what the kernel is protecting you from. Once you do, you’ll stop trying to “force overwrite the binary” (please don’t) and you’ll switch to deploy patterns that don’t fight the OS.
What “Text file busy” actually means (ETXTBSY)
On Linux, Text file busy is typically the user-space face of ETXTBSY. The kernel returns it when a process tries to modify an executable file that is currently being executed (or, in some cases, opened in a way that indicates it’s being used as program text).
“Text” is historical Unix vocabulary for the executable code segment. “Busy” means “someone is executing it, don’t scribble over it.”
There’s an important nuance: Linux normally lets you unlink (delete) or rename executables while they are running. The running process keeps an open file reference; the directory entry can go away. That’s the classic Unix trick. But Linux is far less enthusiastic about you writing new bytes into the same inode of a running executable. That’s where ETXTBSY comes in. It’s the OS saying: “you can replace; you can’t mutate in place.”
If your deploy strategy is “copy the new binary on top of the old path,” you are doing in-place mutation. Sometimes it works; sometimes it doesn’t; sometimes it only fails when you least want it to. And Debian 13, with modern kernels and common filesystems, will happily enforce the boundary.
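If you want to watch the boundary yourself, here is a minimal reproduction sketch for a scratch host; the ./demo name and the use of sleep as a stand-in long-running binary are illustrative:

cp /bin/sleep ./demo
./demo 600 &                                         # run the copy so its inode becomes program text
cp /bin/sleep ./demo                                 # in-place overwrite of the same inode: "Text file busy"
cp /bin/sleep ./demo.new && mv -f ./demo.new ./demo  # new inode plus rename: succeeds
kill %1                                              # clean up the background process

Same binary, same path; the only difference is whether you mutate the inode or swap the directory entry.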
One quote worth keeping on a sticky note near your deploy scripts, attributed in ops circles to Gene Kranz and usually wielded as a warning against wish-based engineering: "Hope is not a strategy." If your release relies on "hopefully the file isn't being executed when I overwrite it," you are running on hope.
What it looks like in real life
You’ll see it in a few common forms:
- cp: cannot create regular file '...': Text file busy
- mv: cannot move 'new' to 'old': Text file busy (less common, but shows up with certain FS semantics)
- bash: ./mybin: Text file busy (when a script or automation tries to execute something being replaced)
- Package upgrades failing when a maintainer script tries to replace an in-use binary in a non-atomic way
Joke #1: ETXTBSY is Linux politely telling you “I see what you’re trying to do, and I’m choosing violence against your deploy script.”
Why deploys fail on Debian 13: the real mechanisms
This error isn’t “a Debian thing.” It’s a Unix and Linux thing that becomes visible when your deploy method is incompatible with how the kernel, filesystem, and runtime loader behave.
Mechanism 1: in-place overwrite of a running executable
The canonical failure mode is painfully simple:
- The service is running /opt/myapp/myapp.
- The deploy does cp myapp /opt/myapp/myapp or downloads into the same file path.
- The kernel refuses the write/replace because the file is "busy" as executable text.
Some tools are sneakier than they look. A “safe” copy utility might open the destination for writing and truncate it before copying. That’s exactly the kind of in-place mutation ETXTBSY is designed to stop.
Mechanism 2: bind mounts and container volume semantics
If you bind-mount a host directory into a container, you often deploy by writing into that directory from the host (or from another container). The app process inside the container has the file open/executing, so the host-side replacement can hit ETXTBSY. The container boundary doesn’t change kernel semantics; it just changes who gets blamed.
Mechanism 3: shared libraries, loaders, and “it’s not just the binary”
Sometimes the thing you’re overwriting isn’t the main executable. It’s a plugin, a shared library, or a helper tool invoked during deploy.
Linux is generally fine with updating shared libraries via atomic replace (new file, rename into place). But if your deploy tool writes directly into .so files in place, you can trigger the same busy behavior. Also, a common self-own is replacing /usr/bin/python (or a runtime) while a maintainer script is still using it. That’s not theoretical; it’s how you get upgrade scripts exploding mid-flight.
Mechanism 4: filesystem edge cases (NFS, overlayfs, “helpful” network storage)
Local filesystems (ext4, xfs) behave predictably: rename is atomic; unlink semantics are stable; ETXTBSY enforcement matches Linux expectations. Network filesystems and overlay layers can introduce extra checks or slightly different semantics. NFS in particular has a history of “silly rename” behavior and cache coherency complexity; it can turn a clean replace into a weird busy condition or delayed visibility.
This matters because a lot of “Debian 13 deploys” are really “Debian 13 on top of a new storage layer we introduced quietly.” When you see ETXTBSY, always ask: did the storage or runtime environment change?
Mechanism 5: your deploy pipeline accidentally executes the file while replacing it
Here’s a more embarrassing one: the deploy script checks the version by running myapp --version from the same path it is about to overwrite. If the deploy script runs that binary while another step starts overwriting it, congratulations: you raced yourself.
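One cheap mitigation is to make every deploy step take the same lock, so a step that execs the binary can never overlap with a step that replaces it. A sketch using flock; the lock path and binary path are illustrative:

exec 9>/var/lock/myapp-deploy.lock
flock -n 9 || { echo "another deploy is already running" >&2; exit 1; }
/opt/myapp/current/myapp --version                   # read-only step, safe under the lock
# ... stage the new release, swap the pointer, restart -- all under the same lock ...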
Joke #2: The fastest way to reproduce ETXTBSY is to schedule your deploy for the exact moment you promised Sales it would be “a quick change.”
Fast diagnosis playbook
If you’re on-call and the pipeline is red, do this in order. The goal is to identify which process is holding the executable (or library) open, and whether your deploy method is doing in-place writes.
1) Identify the exact path that triggered ETXTBSY
- From CI logs: capture the file path in the error line.
- From system logs: identify the failing command and target.
2) Find who is using that file (process + PID)
- Run lsof or fuser on the path.
- Confirm whether it's an executable mapping (txt) or just an open file descriptor.
3) Determine the deploy pattern: atomic replace vs in-place overwrite
- If you see cp straight into the final path, you're overwriting in place.
- If you see rsync into a live directory, you might be rewriting in place depending on flags.
- If you see "write temp then rename" and still get ETXTBSY, suspect filesystem/container edge cases.
4) Decide the least risky immediate fix
- If the service can restart: stop/restart service, then redeploy using atomic swap.
- If restart is risky: deploy to a new release directory, switch via symlink, then restart gracefully (or use socket activation / handoff).
- If it's a package upgrade: schedule a restart or use a needrestart workflow; don't brute-force file writes.
5) Apply the permanent fix
- Change deploy to immutable release directories + atomic pointer swap.
- Or move the executable to a versioned path and keep a stable symlink.
- Or ensure the file being replaced is never modified in place (download to temp + rename).
Practical tasks: commands, output, and the next decision
These are field-tested tasks. Each includes a command, example output, what it means, and the decision you make next. Run them as a user with enough privileges (root or via sudo) on the affected host.
Task 1: Confirm the failing syscall is ETXTBSY (strace the failing step)
cr0x@server:~$ strace -f -o /tmp/deploy.strace cp -f ./myapp /opt/myapp/myapp
cp: cannot create regular file '/opt/myapp/myapp': Text file busy
cr0x@server:~$ tail -n 5 /tmp/deploy.strace
openat(AT_FDCWD, "/opt/myapp/myapp", O_WRONLY|O_CREAT|O_TRUNC, 0666) = -1 ETXTBSY (Text file busy)
write(2, "cp: cannot create regular file"..., 72) = 72
exit_group(1) = ?
+++ exited with 1 +++
Meaning: Your deploy tool is trying to open the destination with O_TRUNC (in-place overwrite). Kernel blocks it due to execution mapping.
Decision: Stop doing in-place overwrite. Switch to “write temp then rename” or release-dir swap.
Task 2: Find which process is executing or mapping the file (lsof)
cr0x@server:~$ sudo lsof /opt/myapp/myapp | head
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
myapp 14231 myapp txt REG 259,2 18239440 1310723 /opt/myapp/myapp
Meaning: PID 14231 is executing /opt/myapp/myapp (FD type txt).
Decision: You can’t overwrite that inode. Either restart/stop that process, or deploy a new inode and swap pointers.
Task 3: Confirm all processes referencing the inode (fuser with verbose)
cr0x@server:~$ sudo fuser -v /opt/myapp/myapp
USER PID ACCESS COMMAND
/opt/myapp/myapp: myapp 14231 ...e. myapp
Meaning: ...e. indicates execute access.
Decision: If multiple PIDs show up, you’re dealing with multi-worker or a supervisor (systemd, gunicorn, etc.). Plan restart accordingly.
Task 4: Check whether the binary path is a symlink (and what it points to)
cr0x@server:~$ readlink -f /opt/myapp/myapp
/opt/myapp/releases/2025-12-30_121500/myapp
Meaning: The stable path is a symlink into a release directory. This is good news: you can swap the symlink atomically.
Decision: Deploy the new release into a new directory, then switch the symlink. Do not touch the existing release directory in place.
Task 5: Inspect the service unit for ExecStart and WorkingDirectory (systemd)
cr0x@server:~$ systemctl cat myapp.service
# /etc/systemd/system/myapp.service
[Unit]
Description=MyApp API
[Service]
User=myapp
WorkingDirectory=/opt/myapp/current
ExecStart=/opt/myapp/current/myapp --config /etc/myapp/config.yaml
Restart=on-failure
[Install]
WantedBy=multi-user.target
Meaning: The service runs from /opt/myapp/current. If your deploy modifies that directory in place, you risk ETXTBSY (and partial updates).
Decision: Make /opt/myapp/current a symlink to an immutable release dir, and only change the symlink.
Task 6: Determine whether the running process is using a deleted file (classic “unlinked but still running”)
cr0x@server:~$ sudo ls -l /proc/14231/exe
lrwxrwxrwx 1 root root 0 Dec 30 12:18 /proc/14231/exe -> /opt/myapp/releases/2025-12-30_121500/myapp (deleted)
Meaning: You replaced/unlinked the file already, but the process still runs the old inode (now deleted). This is normal Unix behavior.
Decision: Plan a restart if you need the new version active. Do not try to “fix” by overwriting; it won’t affect the running process.
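To sweep a whole host for this condition (processes still running deleted executables), the /proc symlinks are enough:

sudo ls -l /proc/[0-9]*/exe 2>/dev/null | grep '(deleted)'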
Task 7: Verify the inode number and see whether deploy attempts to rewrite the same inode
cr0x@server:~$ stat -c 'inode=%i path=%n' /opt/myapp/myapp
inode=1310723 path=/opt/myapp/myapp
cr0x@server:~$ sudo ls -li /opt/myapp/myapp
1310723 -rwxr-xr-x 1 myapp myapp 18239440 Dec 30 12:15 /opt/myapp/myapp
Meaning: You have a specific inode. If your deploy does in-place writes, it targets this inode and triggers ETXTBSY.
Decision: Ensure deploy creates a new inode (write to a different filename, then rename/symlink swap).
Task 8: Check how your deploy tool behaves (rsync flags matter)
cr0x@server:~$ rsync --version | head -n 2
rsync version 3.2.7 protocol version 32
cr0x@server:~$ rsync -av --inplace ./myapp /opt/myapp/myapp
rsync: [receiver] open "/opt/myapp/myapp" failed: Text file busy (26)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(772) [Receiver=3.2.7]
Meaning: --inplace is the smoking gun. It forces in-place modification.
Decision: Remove --inplace. Prefer --delay-updates and deploy to a staging path, or use release directories.
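A sketch of the safer shape, assuming your build artifacts sit in ./build and the host already uses release directories (paths are illustrative):

REL="/opt/myapp/releases/$(date +%Y-%m-%d_%H%M%S)"
mkdir -p "$REL"
rsync -av ./build/ "$REL"/                           # writes new inodes; nothing live is touched
ln -sfn "$REL" /opt/myapp/current                    # atomic pointer swap (see Task 12)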
Task 9: Determine whether the filesystem is overlayfs / container layer (mount)
cr0x@server:~$ mount | grep -E '/opt/myapp|overlay'
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/...,upperdir=/var/lib/docker/overlay2/.../diff,workdir=/var/lib/docker/overlay2/.../work)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=328284k,mode=755)
Meaning: You’re in an overlayfs environment (common in containers). Some operations behave differently, and “replace in place” is still a bad idea.
Decision: Don’t mutate executables inside a running container filesystem. Build a new image and redeploy, or use a release-dir pattern inside a writable volume with atomic pointer swap.
Task 10: If it’s system packages, see what dpkg tried to do (dpkg logs)
cr0x@server:~$ sudo tail -n 8 /var/log/dpkg.log
2025-12-30 12:04:41 upgrade nginx:amd64 1.26.0-1 1.26.2-1
2025-12-30 12:04:41 status half-configured nginx:amd64 1.26.2-1
2025-12-30 12:04:41 configure nginx:amd64 1.26.2-1
2025-12-30 12:04:42 status installed nginx:amd64 1.26.2-1
Meaning: dpkg upgrade occurred; if you saw ETXTBSY around this time, it may have been during maintainer scripts or a postinst restart attempt.
Decision: If package upgrades touch in-use components, coordinate restarts and avoid running long-lived deploy jobs during apt upgrades.
Task 11: Check for pending restarts (needrestart) and interpret what it tells you
cr0x@server:~$ sudo needrestart -r l
NEEDRESTART-VER: 3.6
Processes using old versions of upgraded files:
14231 /opt/myapp/current/myapp
Service restarts suggested:
systemctl restart myapp.service
Meaning: A process is still using an old mapped file. This is the “you replaced it, but it’s still running” variant.
Decision: Plan a controlled restart. If zero-downtime matters, roll instances one by one behind a load balancer.
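If zero-downtime matters, the restart can be a small serialized loop rather than a fleet-wide blast. A hedged sketch, assuming hosts are listed one per line in ./hosts.txt, passwordless sudo, and a systemd-managed service:

while read -r host; do
  ssh "$host" 'sudo systemctl restart myapp.service'
  ssh "$host" 'systemctl is-active --quiet myapp.service' || { echo "restart failed on $host" >&2; exit 1; }
  sleep 10                                           # let load balancer health checks catch up
done < ./hosts.txt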
Task 12: Validate atomic symlink swap behavior (ln -sfn + readlink)
cr0x@server:~$ ls -l /opt/myapp
lrwxrwxrwx 1 root root 38 Dec 30 12:15 current -> /opt/myapp/releases/2025-12-30_121500
drwxr-xr-x 4 root root 4096 Dec 30 12:15 releases
cr0x@server:~$ sudo ln -sfn /opt/myapp/releases/2025-12-30_123000 /opt/myapp/current
cr0x@server:~$ readlink -f /opt/myapp/current
/opt/myapp/releases/2025-12-30_123000
Meaning: The current pointer moved instantly. Existing processes keep running the old inode; new starts use the new target.
Decision: This is the deployment primitive you want. Combine with a restart or graceful reload.
Task 13: Prove the running binary version vs on-disk version (checksum via /proc)
cr0x@server:~$ sha256sum /opt/myapp/current/myapp
b1c5e3e2f4cbd9b5e6f3d5b2b5f5a9d6b9c8a7d5a3b2c1d0e9f8a7b6c5d4e3f2 /opt/myapp/current/myapp
cr0x@server:~$ sudo sha256sum /proc/14231/exe
9a8b7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a0b9c8d7e6f5a4b3c2d1e0f9a8b /proc/14231/exe
Meaning: The running executable differs from what’s currently pointed to on disk. Your rollout hasn’t taken effect for that PID.
Decision: Restart or roll the process pool. Don’t keep redeploying; it won’t change running memory mappings.
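This comparison is worth automating as a post-deploy check. A sketch, assuming a systemd-managed service (the unit name and path are illustrative):

PID=$(systemctl show -p MainPID --value myapp.service)
RUNNING=$(sudo sha256sum "/proc/$PID/exe" | awk '{print $1}')
ONDISK=$(sha256sum /opt/myapp/current/myapp | awk '{print $1}')
[ "$RUNNING" = "$ONDISK" ] && echo "running the deployed artifact" || echo "restart still needed"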
Task 14: If you suspect a race, catch who launches the binary during deploy (audit via ps and timestamps)
cr0x@server:~$ ps -eo pid,lstart,cmd | grep -E '/opt/myapp/current/myapp' | grep -v grep
14231 Tue Dec 30 12:18:02 2025 /opt/myapp/current/myapp --config /etc/myapp/config.yaml
Meaning: The process start time aligns with deploy time. This often means your deploy script or supervisor restarted it mid-copy.
Decision: Make deploy steps idempotent and serialized: stage release, switch pointer, then restart (or reload) once, not repeatedly.
Safe fixes that don’t turn deploys into roulette
Fixing ETXTBSY in production is less about “killing the process that holds the file” and more about adopting a deployment primitive the kernel likes. You want to stop treating a running executable like a mutable blob.
Fix 1: Use immutable release directories + atomic pointer (symlink) swap
This is the pattern I recommend most often because it’s boring, and boring is the highest compliment you can pay a deploy system.
Layout:
- /opt/myapp/releases/<release-id>/myapp (immutable)
- /opt/myapp/current -> /opt/myapp/releases/<release-id> (symlink pointer)
- The systemd service runs /opt/myapp/current/myapp
Deploy steps:
- Upload to a new release directory (not used yet).
- Health-check the release directory binary directly.
- Switch the current symlink atomically.
- Restart or reload the service (ideally graceful).
Why it works: the running process keeps using its old inode. The new release is a different inode in a different directory. No in-place writes. No ETXTBSY. Rollback is also a single symlink change.
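Put together, the whole deploy fits in a few lines. A minimal sketch, assuming the artifact is ./myapp, the service is systemd-managed, and the binary answers --version as a smoke test (all illustrative):

set -euo pipefail
REL="/opt/myapp/releases/$(date +%Y-%m-%d_%H%M%S)"
mkdir -p "$REL"
install -m 0755 ./myapp "$REL/myapp"                 # new inode in a directory nothing executes from yet
"$REL/myapp" --version                               # health-check the staged binary directly
ln -sfn "$REL" /opt/myapp/current                    # atomic pointer swap
systemctl restart myapp.service                      # or a graceful reload / rolling restart

Rollback is the last two lines pointed at the previous release directory.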
Fix 2: Write to a temp file, fsync, then rename (atomic replace)
If you absolutely must keep the same path (e.g., a third-party tool expects it), do a safe replace:
- Download/write to myapp.new in the same directory.
- chmod it and verify checksums.
- mv -f myapp.new myapp (rename is atomic on the same filesystem).
Important: rename is atomic, but it’s not magic. If you’re replacing a running executable, rename is typically allowed and safe, because you’re swapping directory entries, not mutating the inode. The running process still uses the old inode.
But do not combine this with tools that do in-place updates (cp to the final path, rsync --inplace).
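A sketch of this pattern, assuming the new artifact arrives as ./myapp.download and you verify it against a checksum you already trust (filenames are illustrative):

install -m 0755 ./myapp.download /opt/myapp/myapp.new   # new inode, written in the destination directory
sha256sum /opt/myapp/myapp.new                           # compare against your expected checksum here
sync /opt/myapp/myapp.new                                # coreutils sync accepts file arguments; flush before the swap
mv -f /opt/myapp/myapp.new /opt/myapp/myapp              # same-filesystem rename: atomic, no ETXTBSY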
Fix 3: Stop deploying into live, shared directories
A common anti-pattern is deploying into /usr/local/bin or a shared app directory where multiple services pick up helpers, plugins, or runtimes. You “just update the helper,” and suddenly the API deploy fails with ETXTBSY because the helper is executed during deploy.
Keep application artifacts private per service, versioned, and replace via pointer swap. Shared directories should be reserved for truly system-managed packages, updated under controlled maintenance windows.
Fix 4: systemd patterns that play well with rollouts
systemd isn’t the cause, but it can amplify races if your deploy triggers restarts at awkward times.
- Use ExecStart pointing at the stable symlink path.
- Use ExecReload for a graceful reload if your app supports it.
- Consider Restart=on-failure (not always) to reduce flapping during deploy errors.
- If you have multiple workers, consider socket activation or a front proxy to decouple restarts from client impact.
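If the unit lacks a reload path, a drop-in keeps your changes out of the packaged unit file. A sketch via sudo systemctl edit myapp.service; the SIGHUP-based reload is an assumption about what your app supports, not a given:

[Service]
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=2s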
Fix 5: Container world: rebuild images, don’t patch executables in place
If you’re running containers and you’re “deploying” by copying a new binary into a live container, you are manually reinventing the worst parts of configuration drift.
Preferred:
- Build a new image with the new binary.
- Roll out the new image (rolling update).
- Use immutable tags or digests in production, not “latest.”
If you must use a mounted volume for hot-swapping artifacts, use the release-dir + symlink swap pattern inside the volume. Do not do in-place overwrite from host or sidecar.
Fix 6: Storage considerations (because “busy” can hide a storage story)
As a storage engineer, I’ll say the quiet part: ETXTBSY often shows you that your deployment expects local filesystem semantics, but you gave it something else.
- On NFS, ensure your deploy artifact directory is on a local disk if at all possible.
- If you must use NFS: avoid in-place updates, prefer versioned directories, and ensure consistent mount options across fleet.
- On overlayfs: treat the container filesystem as immutable; use volumes for mutable data.
Three corporate-world mini-stories (how this bites teams)
Mini-story #1: The incident caused by a wrong assumption
They ran a small fleet of Debian servers behind a load balancer. The deploy process was “simple”: copy the new binary into /opt/app/app, then send a signal for a reload. It worked for ages, which is how bad assumptions get promoted to “design decisions.”
One quarter they introduced a background job runner on the same hosts. It used the same binary, invoked with a different flag. The runner was supervised by systemd and restarted aggressively on failure. During deploy, the pipeline copied the binary while the web service was running. That sometimes failed, but not always. Then it got worse: the runner restarted mid-deploy and attempted to exec the binary at the same time the deploy was truncating it.
Result: ETXTBSY in the deploy logs, plus occasional crashes when a process managed to exec a partially updated binary (because some steps ran as different users and not all hosts were consistent). They blamed Debian. Debian was innocent; it was doing its job.
The fix was not “retry until it works.” They moved to immutable release directories. The web service and runner both executed via /opt/app/current/app symlink. Deploy created a fresh release dir, flipped the symlink, and restarted services in a controlled order. The wrong assumption was “overwriting the file is equivalent to replacing the running program.” It’s not.
Mini-story #2: The optimization that backfired
A platform team wanted faster deploys and less disk usage. Someone noticed that copying full release directories consumed space and time. They replaced the release-dir approach with an “optimized rsync” into a single shared directory, using --inplace to avoid temporary files and reduce write amplification.
It benchmarked beautifully on a quiet test VM. In production, deploys started failing with ETXTBSY. Worse, they got subtle corruption-like behavior during high load: some hosts briefly had a mixture of old and new assets because rsync updated files in an order that didn’t match the app’s runtime expectations.
The team responded with retries and longer timeouts. Deploy times got longer. Failures became rarer but more mysterious. The load balancer was healthy, but users saw intermittent errors because different nodes served different versions during the partial sync window.
They rolled back the “optimization” and went back to versioned releases. Disk usage went up, predictably. Reliability also went up, which was the only number that mattered during incident review. The lesson: if your “optimization” removes atomicity, it’s not an optimization. It’s debt with better PR.
Mini-story #3: The boring but correct practice that saved the day
A different org ran Debian 13 with strict change management. Their deploy pipeline always staged artifacts into a new directory named with a release ID. It then ran a canary check that executed the binary from the staged path, not from the live symlink. Only after passing did it flip the symlink.
One day, a routine OS upgrade pulled in a new runtime library. A few services now needed a restart to pick up the new library mappings. The team didn’t notice immediately, because everything kept running. But later, during a deploy, their sanity checks compared the running process checksum (via /proc/<pid>/exe) with the staged artifact. It didn’t match, which was expected. The important bit: the deploy still succeeded because nothing tried to overwrite the live binary in place.
During the maintenance window, they restarted services in a rolling manner. No ETXTBSY, no broken upgrades, no “why did the package manager explode.” The practice that saved them was painfully unsexy: never mutate live executables; always swap pointers; always be able to roll back by changing one thing.
They didn’t “fix” ETXTBSY because they rarely triggered it. That’s the dream state: prevent the class of failure instead of getting good at fighting it.
Common mistakes: symptom → root cause → fix
1) Symptom: cp fails with “Text file busy” when copying a binary
Root cause: You’re overwriting the inode of a running executable (copy opens destination with truncate/write).
Fix: Copy to a new filename and rename, or deploy into a new release directory and flip a symlink.
2) Symptom: rsync error code 26 or 3 with “Text file busy”
Root cause: rsync is configured to update in place (--inplace) or is targeting live paths that include executables.
Fix: Remove --inplace. Use --delay-updates to stage updates and then atomically move temp files, or better: release directories.
3) Symptom: deploy “succeeds,” but the running service is still the old version
Root cause: You replaced the on-disk file, but the process is still using the old inode (possibly now deleted). Classic Unix behavior.
Fix: Restart the service (rolling restart). Validate via /proc/<pid>/exe checksum or needrestart.
4) Symptom: error happens only in containers, not on bare metal
Root cause: You’re patching a live container filesystem or a bind-mounted volume while the binary is executing inside the container.
Fix: Build and deploy a new image. If using volumes for artifacts, use immutable releases and pointer swaps.
5) Symptom: ETXTBSY appears during package upgrades or unattended-upgrades
Root cause: A service is running while packages attempt to update executables or related helpers; maintainer scripts may execute tools mid-upgrade.
Fix: Coordinate upgrades with service restarts, avoid overlapping app deploys with apt runs, and use needrestart to manage restarts.
6) Symptom: Only some hosts fail, usually those under load
Root cause: Race window is load-dependent: slower IO means your overwrite window overlaps more with exec/restart events. Or the deploy scripts behave differently due to timing.
Fix: Eliminate the race: no in-place writes; only atomic pointer changes; serialize deploy actions; reduce “restart storms.”
7) Symptom: “mv: Text file busy” even though rename should be atomic
Root cause: Often indicates you’re not actually renaming within the same filesystem, or you’re on a filesystem with special semantics (overlayfs/NFS edge cases).
Fix: Ensure temp file is in the same directory (same mount). Check stat -f or mount and adjust deploy location.
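A quick pre-check sketch to confirm the temp file and the destination really share a filesystem before you trust rename (the .deploy-tmp name is hypothetical):

stat -c 'dev=%d  %n' /opt/myapp/.deploy-tmp /opt/myapp   # matching device numbers mean same filesystem, so rename is atomic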
Checklists / step-by-step plan
Immediate containment (during an incident)
- Freeze further deploy attempts to the same hosts (stop the flapping).
- Identify the path that triggered ETXTBSY and the command that did it.
- Use lsof or fuser to identify PIDs using the file.
- Decide whether you can restart safely:
- If yes: do a controlled restart (one host at a time if needed).
- If no: deploy new release directory and plan a graceful cutover.
- Verify the running version via /proc/<pid>/exe checksum or version output.
Permanent remediation (make it stop happening)
- Adopt one of these as policy:
- release directories + symlink swap (recommended)
- temp file + fsync + atomic rename (acceptable)
- image-based deploys for containers (best for containerized workloads)
- Audit deploy tooling for in-place writes:
- remove --inplace from rsync
- avoid cp straight into the final executable path
- avoid "download directly to final file" patterns
- Make restarts explicit:
- systemd restart in pipeline after pointer swap
- rolling restarts behind LB
- graceful reload where supported
- Add a deploy guardrail:
- refuse to deploy if lsof shows you're about to overwrite a running executable (see the pre-flight sketch after this checklist)
- refuse if the target path is on NFS/overlay unless you're using release directories
- Operationalize rollback:
- keep N previous releases
- symlink back + restart
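The guardrail mentioned above can be a few lines at the top of the deploy script. A sketch; TARGET is illustrative:

TARGET=/opt/myapp/myapp
if lsof -t "$TARGET" >/dev/null 2>&1; then
  echo "refusing in-place overwrite: $TARGET is mapped by a running process" >&2
  exit 1
fi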
Verification checklist (prove the fix works)
- Deploy a new release while the service is running. Confirm no ETXTBSY.
- Confirm the symlink swapped and points to the new directory.
- Restart the service and confirm the running checksum matches the intended artifact.
- Roll back by swapping symlink to previous release; restart; confirm checksum rollback.
- Run two deploys back-to-back and ensure no partial state exists between them.
Interesting facts and historical context
- The error name ETXTBSY predates Linux: it comes from early Unix, where “text segment” was the formal term for executable code.
- Unix lets you delete running executables: a running process holds an open file reference, so the directory entry can disappear while execution continues.
- Rename is atomic (locally): on local POSIX filesystems, swapping directory entries with rename() is atomic within the same filesystem, which is why pointer swaps are so effective.
- In-place updates are historically tempting: admins did them to save space and avoid "duplicate binaries," especially when disks were small and expensive.
- Network filesystems complicate invariants: NFS cache coherency and client-side behavior historically made “atomic and immediate” a more conditional promise than people assume.
- Containers didn’t change the kernel: namespaces isolate processes, but file execution semantics still follow the same kernel rules; ETXTBSY is not “a container bug.”
- Package managers learned hard lessons: dpkg and friends rely heavily on rename-based replaces and careful staging because overwriting live system binaries is a great way to brick upgrades.
- Running code doesn’t update itself: replacing the on-disk file does not patch the already-mapped pages in memory; you still need restart/reload semantics.
FAQ
1) Is “Text file busy” a Debian 13 bug?
No. It’s kernel behavior surfaced by your deployment method. Debian 13 just happens to be where your timing, workload, or storage layer made it visible.
2) Why does rm sometimes work but cp fails?
rm unlinks the directory entry; the running process keeps the inode open. cp overwrites/truncates the same inode, which the kernel blocks when it’s being executed.
3) Can I fix it by adding retries to the deploy?
You can reduce pager noise, sure. You won’t fix the underlying race, and you’ll eventually land on a host where the timing never lines up. Replace the deploy primitive instead.
4) If rename is safe, why did I see mv: Text file busy?
Usually because it wasn’t a true same-filesystem rename (cross-device move), or you’re on a filesystem layer with special semantics (overlayfs, NFS edge cases). Ensure the temp file is created in the same directory and mount.
5) Does this affect scripts too, or only binaries?
Mostly binaries. ETXTBSY applies to files being executed as program text, so a script run through an interpreter usually won't trigger it (the interpreter is the program text; the script is just an open file). But rewriting a script in place while it's running has its own failure mode: shells and some interpreters read the file incrementally and can end up executing a half-old, half-new script. Don't mutate any "live" entrypoints in place.
6) How do I prove which process is blocking the deploy?
Use lsof <path> and look for txt mappings or open descriptors. fuser -v is also handy for a quick PID list.
7) Will stopping the service always fix it?
It will typically eliminate ETXTBSY for that file, yes. But stopping services as a deploy mechanism is a downtime strategy. Prefer atomic swaps plus controlled restarts for predictability.
8) What’s the safest “no surprises” deploy pattern on Debian?
Immutable release directories + atomic symlink swap + systemd-controlled restart (rolling across nodes). It avoids ETXTBSY and prevents partial deployments.
9) Do I need to restart after updating a binary if the path stays the same?
Yes, if you want the running process to use the new code. A running process doesn’t automatically re-map its executable pages just because the on-disk file changed.
10) What if I can’t restart (hard real-time or long-lived sessions)?
Then the fix is architectural: run multiple instances, drain connections, or use a supervisor/proxy that supports graceful handoff. ETXTBSY is a symptom; “no restarts allowed” is the actual constraint.
Conclusion: next steps you can do today
ETXTBSY isn’t Linux being difficult. It’s Linux enforcing a boundary your deploy process shouldn’t cross. Overwriting a live executable in place is like changing a tire while the car is doing highway speed: technically you can attempt it, but you won’t like the outcome.
Practical next steps:
- Find the exact path that triggers "Text file busy" and identify the PID holding it with lsof.
- Audit your pipeline for in-place writes (cp to the final path, rsync --inplace, direct downloads into live locations).
- Pick one safe deployment primitive and standardize it:
- release directories + symlink swap (best all-around)
- temp + fsync + rename (if you must keep path)
- new container images (if you’re containerized)
- Make restarts deliberate and rolling, not accidental and racing your copy step.
- Add one guardrail: refuse to deploy if the target executable is currently mapped (lsof shows txt) and you're about to overwrite it.
Do those five things, and “Text file busy” goes back to being an error you read about in other people’s postmortems. Which is where it belongs.