Debian/Ubuntu Web Root Permissions: Stop 403s Without 777 (Case #69)

Some outages don’t scream. They whisper. A quiet “403 Forbidden” in your browser, a support ticket with a screenshot, and a developer swearing they “didn’t touch anything.” Meanwhile the web server is standing at the door with a clipboard, refusing entry because you forgot a single execute bit on a parent directory.

If your fix muscle memory is chmod -R 777, take your hands off the keyboard. We can solve 403s on Debian/Ubuntu cleanly, predictably, and without turning your web root into a community garden.

Fast diagnosis playbook

When you have a 403, you’re not “debugging permissions.” You’re locating the first place the web server user cannot traverse or read. Do it in this order and you’ll stop flailing.

1) Confirm what the web server process user actually is

Don’t assume www-data. Debian/Ubuntu defaults often do, but containers, hardening, or systemd overrides can change it.

2) Identify the exact path being served

Is it the expected DocumentRoot? A symlink? An alias? A per-vhost root? A “helpful” redirect into someone’s home directory?

3) Read the error log line that corresponds to your request

403 has multiple flavors: filesystem permissions, directory listing disabled, authentication required, or policy modules (AppArmor/SELinux). The log tells you which one you bought.

4) Test access as the web server user, from the filesystem

If the process can’t traverse the parent directories, it doesn’t matter that the file itself is readable.

5) Only then change permissions—and change the minimum needed

Prefer group-based access or ACLs. Use the execute bit correctly. Avoid world-writable anything in a web root unless you enjoy incident retrospectives.

The permission model that actually causes 403s

403 is not a single bug; it’s a category

In Apache and Nginx, “403 Forbidden” is the server telling the client: “I understood the request, but I won’t serve it.” That can be because:

  • Filesystem access denied (most common).
  • Directory indexing is disabled and there is no index file.
  • Access rules deny the request (Apache Require, Nginx deny).
  • Authentication/authorization failure (sometimes 401, sometimes 403 depending on config).
  • Mandatory access control blocked it (AppArmor or SELinux).

This article is about the permissions angle: classic Unix mode bits, ownership, groups, umask, and ACLs, plus how they intersect with Debian/Ubuntu packaging choices.

The big gotcha: directory “execute” is traverse

On directories:

  • read (r) lets you list names.
  • write (w) lets you create/delete/rename entries.
  • execute (x) lets you traverse the directory and access inodes inside it, if you already know the name.

The classic outage looks like this: the file has 644 and seems readable, but one parent directory is 750 owned by a different group, and the web server can’t traverse it. The browser sees 403. The human sees “but the file is 644.” Both are technically correct, and that’s why this keeps happening.
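
You can reproduce the whole failure class in thirty seconds with a scratch directory; the nobody account stands in for the web server user here, and the path is throwaway:

cr0x@server:~$ mkdir -p /tmp/traverse-demo/inner && echo hello > /tmp/traverse-demo/inner/file.txt
cr0x@server:~$ chmod 750 /tmp/traverse-demo && chmod 644 /tmp/traverse-demo/inner/file.txt
cr0x@server:~$ sudo -u nobody cat /tmp/traverse-demo/inner/file.txt   # parent lacks execute for "other"
cat: /tmp/traverse-demo/inner/file.txt: Permission denied
cr0x@server:~$ chmod o+x /tmp/traverse-demo
cr0x@server:~$ sudo -u nobody cat /tmp/traverse-demo/inner/file.txt   # one traverse bit later
hello

The file never changed. Only the parent directory's execute bit did, and that is the entire story of most web-root 403s.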

Symlinks are not magic portals

Symlinks are just pointers. The server needs permissions on the target path, including all parent directories. Also, Apache can be configured to refuse following symlinks unless explicitly allowed.
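
When a symlink is in the serving path, resolve it first and check the real path end to end; the release-style layout here is only an illustration:

cr0x@server:~$ readlink -f /srv/www/example/current                                    # resolve the link to its real target
cr0x@server:~$ namei -l "$(readlink -f /srv/www/example/current)/public/index.html"    # then verify traverse/read on every parent of the target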

Why 777 “works” and why it’s a trap

777 gives everyone read/write/execute. For directories, that means any local user (or compromised service) can drop or replace files in your web root. If your web server executes scripts (PHP, CGI, etc.), you’ve effectively installed a “run arbitrary code here” sign.

Obligatory short joke: chmod 777 is like leaving your house keys under the doormat, except the doormat is in the town square.

One reliable operations maxim

Paraphrased idea from W. Edwards Deming: most problems come from the system, not from individual effort. Permissions outages usually aren’t because someone is “careless.” They happen because the deployment path, ownership model, and defaults were never deliberately designed.

Interesting facts and context (why this keeps happening)

  1. Unix permissions predate the web by decades. The model was built for multi-user time-sharing systems, not for serving static files to the planet.
  2. The execute bit on directories is older than most of our careers. It’s “search” or “traverse,” and forgetting it is a rite of passage nobody asked for.
  3. Debian popularized service users like www-data as a packaging convention. The idea was to keep daemons non-privileged and consistent across installs.
  4. Apache’s early security story influenced today’s defaults. Features like Options FollowSymLinks and Require all granted evolved because “serve anything readable” was a bad plan.
  5. POSIX ACLs weren’t always common. They became mainstream in Linux distributions as teams needed finer-grained sharing than a single group could provide.
  6. Umask is a quiet policy engine. It’s not a permission itself; it’s the default subtraction applied at file creation time. It can undo your careful plan while you sleep.
  7. Many “permissions” incidents are actually policy module denials. AppArmor (Ubuntu) and SELinux (some Debian setups, more common elsewhere) can block access even when mode bits look correct.
  8. Web stacks adopted “least privilege” slowly. Running servers as root used to be normal; now it’s a red flag. The permission model in your deployment should match that reality.

Practical tasks: commands, outputs, and decisions

Below are hands-on checks you can run on Debian/Ubuntu. Each task includes: command, what the output means, and the decision you make from it. These are the moves that stop 403s without opening a security crater.

Task 1: Confirm which web server is running and its process user

cr0x@server:~$ ps -eo user,args | egrep '[a]pache2|[n]ginx' | head
root     nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data nginx: worker process
root     /usr/sbin/apache2 -k start
www-data /usr/sbin/apache2 -k start

What it means: The master typically runs as root; the workers should be unprivileged (often www-data). That unprivileged user is the one that must read and traverse the web root.

Decision: Use the worker user (www-data here) for all access tests and ownership/group design.

Task 2: Identify the DocumentRoot or web root path actually in use

cr0x@server:~$ apache2ctl -S 2>/dev/null | sed -n '1,25p'
VirtualHost configuration:
*:80                   example.local (/etc/apache2/sites-enabled/000-default.conf:1)
ServerRoot: "/etc/apache2"
Main DocumentRoot: "/var/www/html"

What it means: Apache’s effective vhost config points to a specific DocumentRoot. If you’re editing another directory, you’re debugging the wrong thing.

Decision: Ensure your permissions work on the path Apache/Nginx is actually serving, not the one you wish it served.

Task 3: For Nginx, list server blocks and roots

cr0x@server:~$ sudo nginx -T 2>/dev/null | egrep -n 'server_name|root ' | head -n 12
34:    server_name example.local;
41:    root /srv/www/example/current/public;
78:    server_name static.example.local;
82:    root /srv/www/static;

What it means: Nginx can have multiple roots. 403 on one hostname may be a permissions issue on a different directory tree.

Decision: Target the correct root for the failing vhost.

Task 4: Read the error log for the matching denial

cr0x@server:~$ sudo tail -n 20 /var/log/nginx/error.log
2025/12/30 10:31:42 [error] 19214#19214: *55 open() "/srv/www/example/current/public/index.html" failed (13: Permission denied), client: 10.10.10.23, server: example.local, request: "GET / HTTP/1.1", host: "example.local"

What it means: (13: Permission denied) is filesystem permissions (or MAC policy). This is not an Nginx “deny” directive or missing index file.

Decision: Proceed to filesystem-level checks (mode bits, ownership, ACL, AppArmor/SELinux).
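
If the failing vhost is on Apache rather than Nginx, the same class of evidence lives in the Apache error log; the path below is the Debian/Ubuntu packaged default, so adjust it if your vhost defines its own ErrorLog:

cr0x@server:~$ sudo tail -n 50 /var/log/apache2/error.log | grep -iE 'permission denied|client denied'

A “(13)Permission denied” entry points at the filesystem or a MAC policy; a “client denied by server configuration” entry points at Require rules, which no amount of chmod will fix.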

Task 5: Check permissions along the entire path (“namei” is your friend)

cr0x@server:~$ namei -l /srv/www/example/current/public/index.html
f: /srv/www/example/current/public/index.html
drwxr-xr-x root     root     /
drwxr-xr-x root     root     srv
drwxr-x--- deploy   deploy   www
drwxr-xr-x deploy   deploy   example
drwxr-xr-x deploy   deploy   current
drwxr-xr-x deploy   deploy   public
-rw-r--r-- deploy   deploy   index.html

What it means: The problem is visible: /srv/www is drwxr-x--- owned by deploy:deploy. The web server user can’t traverse it because “other” has no execute, and it’s not in the group.

Decision: Fix traverse permissions on the parent directories, ideally via group membership or ACL—not by world-opening everything.

Task 6: Test access as the web server user (the most honest test)

cr0x@server:~$ sudo -u www-data bash -c 'test -r /srv/www/example/current/public/index.html && echo READ_OK || echo READ_NO; test -x /srv/www && echo TRAVERSE_OK || echo TRAVERSE_NO'
READ_NO
TRAVERSE_NO

What it means: The worker user can’t traverse /srv/www, so it cannot read anything beneath it.

Decision: Choose a permission strategy (group, ACL, or dedicated web-root ownership) and implement it consistently.

Task 7: Check ownership and groups, including whether www-data is in the expected group

cr0x@server:~$ id www-data
uid=33(www-data) gid=33(www-data) groups=33(www-data)

What it means: www-data has no extra groups. If you plan to use a shared group like web or deploy, you must add it and reload services if needed.

Decision: Decide whether to add www-data to a group (common) or to use ACLs (cleaner when multiple teams share paths).

Task 8: Inspect the directory mode bits and spot missing execute bits

cr0x@server:~$ stat -c '%A %U:%G %n' /srv/www /srv/www/example /srv/www/example/current/public
drwxr-x--- deploy:deploy /srv/www
drwxr-xr-x deploy:deploy /srv/www/example
drwxr-xr-x deploy:deploy /srv/www/example/current/public

What it means: The top-level directory is the choke point. Your web server doesn’t need write there, but it does need execute (traverse) and read on the files it serves.

Decision: Adjust the minimum required: often 751 or group-based 750 depending on your model.

Task 9: Check if ACLs are in play (and whether defaults are set)

cr0x@server:~$ getfacl -p /srv/www/example/current/public | sed -n '1,30p'
# file: /srv/www/example/current/public
# owner: deploy
# group: deploy
user::rwx
group::r-x
other::r-x

What it means: No special ACL entries; only classic bits apply. If you expect an ACL granting www-data, it isn’t there.

Decision: If using ACL strategy, apply it at the correct directory level and set default ACLs so new files inherit access.

Task 10: Detect “umask sabotage” in your deployment user environment

cr0x@server:~$ sudo -u deploy bash -lc 'umask; touch /tmp/umask-test-file; stat -c "%A %n" /tmp/umask-test-file; rm -f /tmp/umask-test-file'
0027
-rw-r----- /tmp/umask-test-file

What it means: A 0027 umask creates files that are not world-readable and not group-writable. If your web server depends on “other” readability, you’ll get intermittent 403s depending on how files were created.

Decision: Align umask with your permission strategy. For group-sharing: often 0002 is reasonable. For tighter setups: keep restrictive umask but use ACLs.
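
If your deploys run under systemd (a CI agent, a timer, an app service), the least fragile fix is to pin the umask on the unit itself instead of hoping every shell profile agrees. The unit name below is a placeholder for whatever actually runs your deploys; the drop-in mechanism and the UMask= directive are standard systemd:

cr0x@server:~$ sudo mkdir -p /etc/systemd/system/deploy-agent.service.d        # "deploy-agent" is a placeholder unit name
cr0x@server:~$ printf '[Service]\nUMask=0002\n' | sudo tee /etc/systemd/system/deploy-agent.service.d/umask.conf
[Service]
UMask=0002
cr0x@server:~$ sudo systemctl daemon-reload
cr0x@server:~$ sudo systemctl restart deploy-agent.service

For interactive deploy shells, putting umask 0002 in the deploy user’s profile achieves the same thing, with the usual caveat that profiles only apply to login shells.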

Task 11: Check for Apache config-level denial that looks like a permissions issue

cr0x@server:~$ sudo apache2ctl -t -D DUMP_RUN_CFG 2>/dev/null | egrep -n 'User|Group|ServerRoot|DocumentRoot' | head
3:User: name="www-data" id=33
4:Group: name="www-data" id=33
16:ServerRoot: "/etc/apache2"
22:DocumentRoot: "/var/www/html"

What it means: You can see the runtime user/group and DocumentRoot. If your vhost points elsewhere, confirm with apache2ctl -S. If the DocumentRoot is correct but you’re serving from /home via an alias, you’ve created a permissions puzzle on purpose.

Decision: If config and filesystem disagree, fix config first. Permissions should match intent, not accidents.

Task 12: For Nginx, check if the worker user is overridden in config

cr0x@server:~$ egrep -n '^\s*user\s+' /etc/nginx/nginx.conf
2:user www-data;

What it means: Confirms which user reads the files. If it’s not www-data, adjust your tests and permission plan accordingly.

Decision: Keep the worker user stable. Changing it to “fix permissions” is usually a smell.

Task 13: Check if the filesystem is mounted with options that can block expected behavior

cr0x@server:~$ findmnt -no TARGET,FSTYPE,OPTIONS /srv
/srv ext4 rw,relatime

What it means: For static content, mount options rarely cause 403, but flags like noexec matter if you serve executable CGI or run build artifacts in-place.

Decision: If you rely on execution under that mount, remove noexec (carefully) or change architecture (better: don’t execute from the web root at all).
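
If the options column had shown noexec and you genuinely need to execute something under that mount (CGI binaries, build hooks), you can remount live; treat it as a stopgap and make the change permanent in /etc/fstab on purpose, not by accident:

cr0x@server:~$ findmnt -no OPTIONS /srv | tr ',' '\n' | grep -x noexec        # prints "noexec" only if the flag is set
cr0x@server:~$ sudo mount -o remount,exec /srv                                # temporary; reverts at the next boot unless fstab changes too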

Task 14: Check AppArmor status and whether it’s likely involved

cr0x@server:~$ sudo aa-status | sed -n '1,18p'
apparmor module is loaded.
24 profiles are loaded.
12 profiles are in enforce mode.
   /usr/sbin/nginx
   /usr/sbin/apache2

What it means: If the profile is enforcing, it can deny access to paths outside expected locations even with correct Unix permissions. The logs will mention AppArmor denials.

Decision: If your DocumentRoot is unconventional (like /srv/www), verify the AppArmor profile allows it, or move the root to a standard path.
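
A quick way to confirm or rule out AppArmor is to look for DENIED audit records in the kernel log; both commands below read the same events, so either is enough:

cr0x@server:~$ sudo dmesg --ctime | grep 'apparmor="DENIED"' | tail -n 5
cr0x@server:~$ sudo journalctl -k --since "1 hour ago" | grep 'apparmor="DENIED"'

If nothing shows up around the time of the 403, AppArmor is probably not your problem. If it does, the record names the profile and the denied path, which tells you exactly what to allow or where to move the content. (With auditd installed, the same records land in /var/log/audit/audit.log instead.)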

Task 15: Confirm the actual effective permissions of a file including ACL masks

cr0x@server:~$ getfacl -p /srv/www/example/current/public/index.html
# file: /srv/www/example/current/public/index.html
# owner: deploy
# group: deploy
user::rw-
group::r--
other::r--

What it means: This file is world-readable, but that doesn’t help if a parent directory blocks traverse. Also, if you add ACLs later, the mask entry can silently limit effective rights.

Decision: Fix the path permissions first; then ensure files are readable by the intended principal (group or specific user via ACL).

Task 16: Validate that your “fix” didn’t introduce world-writable directories

cr0x@server:~$ sudo find /srv/www/example/current/public -maxdepth 3 -type d -perm -0002 -print | head

What it means: No output means no world-writable directories found (at least within maxdepth). Output would list directories anyone can write to.

Decision: If anything shows up, remove world write and switch to group write or a controlled upload path.

Three corporate mini-stories from the trenches

1) Incident caused by a wrong assumption: “www-data can read /home, right?”

At a mid-sized company, a team spun up a quick internal dashboard. They wanted it “temporary,” which meant it lived under a developer’s home directory because that was fast. Nginx pointed at /home/alex/dashboard/public. It worked in staging, it worked on one production box, and then it didn’t.

The failure was classic: 403 on only one node behind the load balancer. The engineer on call compared Nginx configs, saw they were identical, and started suspecting caching, then DNS, then “maybe the LB is doing something.” The error log had Permission denied, but it was waved off because “the files are 644.”

The difference was one parent directory: on one node, /home/alex was 750. On another, it was 755. The dashboard “worked” only where home directories were world-traversable. Security had recently tightened default home permissions via /etc/adduser.conf, and nobody connected that change to a web app living in someone’s home.

The fix was not to loosen home directory security. The fix was to move the dashboard to /srv/www with a proper group and predictable ownership, then update AppArmor for the new path. After that, the 403 disappeared and stayed gone.

The real lesson: never anchor production web roots to paths designed for humans. Humans log in. Services should not depend on human home directory policy.

2) Optimization that backfired: “Let’s lock it down to 750 everywhere”

A different organization had a security audit that made everyone anxious. Someone proposed “tightening permissions” across web roots by setting directories to 750 and files to 640, owned by deploy:deploy. The idea was sound in the abstract: don’t let “other” read code.

They rolled it out with a recursive chmod in a deployment step. It passed in one environment because their deploy system ran Nginx as deploy (not recommended, but it masked the problem). In production, Nginx ran as www-data, and the deploy step promptly removed “other” traverse bits from a parent directory. The next request returned 403.

Then the panic fix arrived: chmod -R 755 to restore service. That change did bring the site back. It also made every directory world-traversable again, undoing the audit goal, and created a messy delta between nodes because not all boxes were fixed the same way.

The sustainable fix was to implement a shared group model with setgid directories, and to make deploy tooling create artifacts with group-readable permissions. They ended up with 2750 directories and 640 files in places, but always with the right group and predictable inheritance.

The lesson: “lock it down” is not a plan. Permission models are plans. Recursive chmod is a bulldozer in a room full of glass.

3) Boring but correct practice that saved the day: “namei in the runbook”

A payments team ran multiple Nginx instances serving static checkout assets. Nothing exciting. Then, a deploy introduced a new directory layer: /srv/www/payments/releases/2025-12-30/public. The app served fine on most nodes, but one returned 403. The deployment looked identical. The on-call engineer felt the familiar dread: intermittent, node-specific, right before peak traffic.

Here’s what made it boring—in a good way. The team had a runbook step: “Run namei -l on the exact file path.” No debate, no guessing. They ran it and instantly saw that /srv/www/payments had the wrong group on that one node, likely from a manual hotfix months earlier.

They corrected the group ownership, ensured the setgid bit was present, and re-ran their “permission drift check” job that compared critical directory modes across the fleet. No recursive chmod, no panic 777, no reinventing the web server user.

Later, in the post-incident review, the most valuable action item wasn’t tooling. It was social: they kept the runbook short, mandatory, and specific. Boring procedures win because they remove the temptation to freestyle at 2 a.m.

Common mistakes: symptoms → root cause → fix

This is the stuff I see repeatedly on Debian/Ubuntu fleets. The symptoms are deceptively similar; the fixes are not.

1) Symptom: 403, error log says “Permission denied (13)”

Root cause: missing execute bit on a parent directory (no traverse).

Fix: ensure every directory in the path is traversable by the web server principal (group or ACL). Use namei -l to find the first blocker.

2) Symptom: Files are 644, directories are 755, still 403

Root cause: AppArmor/SELinux policy denial, or Apache config denies access.

Fix: check kernel audit logs / AppArmor denials and Apache Require rules. Don’t chmod your way around a policy engine.

3) Symptom: 403 only after deploy; fixed by restarting, then returns

Root cause: deploy created new directories/files with restrictive umask or wrong group; old content still had correct perms.

Fix: enforce umask in deploy environment, set setgid on directories, or use default ACLs so new content inherits access.

4) Symptom: 403 on a symlinked path; direct path works

Root cause: Apache disallows following symlinks, or target path permissions differ.

Fix: ensure the config allows symlinks where intended (Options FollowSymLinks or safer alternatives) and that the target path is readable/traversable.

5) Symptom: 403 on directory URL, file URLs work

Root cause: missing index file and autoindex disabled, or missing execute bit on directory.

Fix: add an index file, enable indexing intentionally (rarely), or fix directory permissions.

6) Symptom: 403 only on one node in a cluster

Root cause: permission drift (manual hotfix), different umask, different group membership, or different MAC policy version.

Fix: compare stat/getfacl outputs across nodes, and audit group membership and policy enforcement status.

7) Symptom: Everything works until you add a new team member

Root cause: permissions depend on a specific user owning the tree; no shared group/ACL strategy.

Fix: move to group-based ownership and setgid directories, or ACLs. Avoid “ownership by whoever last deployed.”

8) Symptom: You “fixed” it with 777 and now security is calling

Root cause: permission model missing; emergency fix became policy.

Fix: revert world-writable permissions, implement group/ACL strategy, and separate writable directories (uploads/cache) from read-only code/assets.
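
Reverting the 777 era is mechanical once you commit to it; this strips world write without touching anything else (the path is the example web root used throughout this article):

cr0x@server:~$ sudo find /srv/www/example -perm -0002 -not -type l -exec chmod o-w {} +   # remove world write from files and directories, skip symlinks
cr0x@server:~$ sudo find /srv/www/example -perm -0002 -not -type l -print | head          # re-check: should print nothing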

Checklists / step-by-step plan

Plan A: Fix a live 403 safely (minimum change, maximum certainty)

  1. Grab the exact failing URL and map it to a filesystem path (from vhost config and logs).
  2. Read the error log line for that request; confirm whether it’s (13: Permission denied) or a config denial.
  3. Run namei -l on the exact file path and identify the first directory lacking traverse for the web server user.
  4. Run a filesystem access test as the web server user with sudo -u www-data to confirm the failure mode.
  5. Apply the smallest permission change needed on the blocking directory (prefer group execute or ACL execute; a minimal sketch follows this list).
  6. Re-test access as the web server user. Then test via HTTP.
  7. Scan for accidental world-writable directories and revert any you find.
  8. Write down what changed and why; if you can’t explain it, you didn’t fix it—you got lucky.
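
For step 5, the smallest change that restores service is often a single ACL entry granting traverse on the one blocking directory: nothing recursive, no loosening for “other.” A minimal sketch, assuming the blocking directory from the earlier namei output and an ACL-capable filesystem:

cr0x@server:~$ sudo setfacl -m u:www-data:x /srv/www                              # traverse only, on the single blocking directory
cr0x@server:~$ sudo -u www-data test -x /srv/www && echo TRAVERSE_OK || echo TRAVERSE_NO
TRAVERSE_OK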

Plan B: Standardize a web root for a team (group model)

  1. Create a dedicated group (web) for “may read web content.”
  2. Add www-data and deploy users to web.
  3. Set ownership of the web root to deploy:web (or root:web for stricter change control).
  4. Set directory modes to 2775 and file modes to 664 where appropriate.
  5. Set deploy umask to 0002 so group readability is consistent.
  6. Split writable directories (uploads, cache) with tighter controls and explicit app configuration.
  7. Add a drift check: a periodic job or config management rule to verify ownership/modes.
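
For step 7, even a one-line check catches the “manual hotfix months ago” drift before it becomes a 403. The expected values below assume the group model from this plan has been applied:

cr0x@server:~$ stat -c '%a %U:%G %n' /srv/www/example                              # expect something like "2775 deploy:web /srv/www/example"
cr0x@server:~$ stat -c '%a %U:%G' /srv/www/example | grep -qx '2775 deploy:web' || echo DRIFT   # cron-friendly: prints DRIFT only on mismatch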

Plan C: Standardize with ACLs (when multiple groups are a headache)

  1. Keep ownership with the deploy/app user; don’t fight your org chart in /etc/group.
  2. Apply an ACL giving www-data rx on directories and r on files.
  3. Apply default ACLs on directories so new files inherit the rule.
  4. Verify the ACL mask doesn’t accidentally reduce effective permissions.
  5. Document the ACL decision in your runbook so nobody “cleans it up” later.

Concrete implementation examples (use one, not all)

Example 1: Shared group model under /srv/www/example

cr0x@server:~$ sudo groupadd -f web
cr0x@server:~$ sudo usermod -aG web www-data
cr0x@server:~$ sudo usermod -aG web deploy
cr0x@server:~$ sudo chown -R deploy:web /srv/www/example
cr0x@server:~$ sudo find /srv/www/example -type d -exec chmod 2775 {} +
cr0x@server:~$ sudo find /srv/www/example -type f -exec chmod 0664 {} +

What the output means: These commands are mostly silent on success. The effective change: the tree is now owned deploy:web, the setgid bit on directories makes new entries inherit the web group, and directories are group-traversable.

Decision: If you use this model, ensure deploy processes create files with group readability; otherwise you’ll keep “fixing” perms after every deploy.

Example 2: ACL model to grant www-data read/traverse without changing groups

cr0x@server:~$ sudo setfacl -R -m u:www-data:rX /srv/www/example/current/public
cr0x@server:~$ sudo find /srv/www/example/current/public -type d -exec setfacl -m d:u:www-data:rx {} +
cr0x@server:~$ sudo getfacl -p /srv/www/example/current/public | sed -n '1,25p'
# file: /srv/www/example/current/public
# owner: deploy
# group: deploy
user::rwx
user:www-data:r-x
group::r-x
mask::r-x
other::r-x
default:user::rwx
default:user:www-data:r-x
default:group::r-x
default:mask::r-x
default:other::r-x

What it means: The rX entry grants www-data traverse on directories and plain read on files (capital X avoids marking regular files executable). The default ACL on each directory ensures new files and subdirectories inherit the same access.

Decision: Use ACLs when group ownership is politically or operationally unstable. But commit to it; mixed models get weird fast.

Example 3: Fix only the blocking directory traverse bit (surgical live fix)

cr0x@server:~$ sudo chmod o+x /srv/www
cr0x@server:~$ namei -l /srv/www/example/current/public/index.html | sed -n '1,6p'
f: /srv/www/example/current/public/index.html
drwxr-xr-x root     root     /
drwxr-xr-x root     root     srv
drwxr-x--x deploy   deploy   www
drwxr-xr-x deploy   deploy   example

What it means: o+x on the directory allows traverse but not listing. That can be a reasonable compromise if you don’t want the world to enumerate directory contents.

Decision: This is fine for quick restoration, but consider moving to a group/ACL model for long-term sanity.

FAQ

1) Why does a missing execute bit on a directory cause 403?

Because without execute on a directory, the process cannot traverse into it—even if the file inside is readable. The web server can’t reach the inode, so it fails with permission denied.

2) Should I make the web root owned by www-data?

Usually no. If the web server can write its own served content, a compromise turns into persistent modification. Prefer a deploy user owning content, with web server read-only via group or ACL. Make only specific upload/cache directories writable, and keep them out of executable paths.

3) Is chmod 755 on directories always safe?

It’s common and often acceptable for public static content. But it grants world traverse and read. In corporate or multi-tenant systems, you may want group/ACL access instead, especially for source code, templates, or configs that shouldn’t be world-readable.

4) What’s better: groups or ACLs?

Groups are simpler and more visible. ACLs are more precise and avoid dumping everyone into one group. If you have one team and one deploy pipeline, groups are great. If you have multiple teams, CI runners, and shared infrastructure, ACLs keep the peace.

5) Why do 403s show up only after deployments?

New files inherit permissions from the creating process: umask, default ACLs, and parent directory setgid bit matter. Old files might still be accessible, so the problem looks intermittent. Fix the creation policy, not just the current files.
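
Setgid inheritance is easy to verify in isolation. A scratch sketch, assuming the web group from Plan B exists (any group you have works for the demo):

cr0x@server:~$ mkdir /tmp/sgid-demo && sudo chgrp web /tmp/sgid-demo && sudo chmod 2775 /tmp/sgid-demo
cr0x@server:~$ touch /tmp/sgid-demo/newfile
cr0x@server:~$ stat -c '%U:%G %a %n' /tmp/sgid-demo/newfile

The new file’s group should be web even though you never asked for it; its mode bits still come from your umask, which is why Plan B pairs setgid directories with a 0002 umask.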

6) How do I check whether it’s AppArmor rather than Unix permissions?

Start with the error log: if it says permission denied but the mode bits look fine, check AppArmor status and logs. Ubuntu ships with AppArmor enabled, and web servers may run under enforcing profiles depending on what is installed. A denial shows up as an apparmor="DENIED" audit record in the kernel log (or in the audit log when auditd is running), naming the profile and the denied path.

7) Why does “other execute but not read” on directories help?

o+x allows traverse without allowing directory listing. That means someone who doesn’t know filenames can’t easily enumerate them. It’s not a replacement for proper access control, but it can reduce casual disclosure of directory structure.

8) Do I need to restart Apache/Nginx after changing filesystem permissions?

Not for mode bits and ownership changes; the kernel enforces them immediately. You may need to restart if you changed group memberships for the running worker user—processes don’t automatically pick up new supplementary groups.

9) Why does adding www-data to a group not work immediately?

Because existing worker processes have their group list set at start time. After changing group memberships, reload/restart the service so new workers inherit the updated group list.
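
To prove the restart worked, read the running worker’s group list from /proc instead of trusting id, which reflects the databases rather than the live process (nginx shown; the same pattern works for apache2 workers):

cr0x@server:~$ sudo systemctl restart nginx
cr0x@server:~$ pgrep -f 'nginx: worker' | head -n 1 | xargs -I{} grep -E '^(Uid|Gid|Groups)' /proc/{}/status

The Groups: line should now include the numeric GID of the group you added; if it doesn’t, the workers you are looking at are still the old ones.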

10) What’s the safest layout for writable paths like uploads?

Put writable directories outside the code/static asset tree, give the application exactly the write permissions it needs, and ensure the web server doesn’t execute content from those paths. If you must serve uploads, serve them as static, not executable.
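
A concrete version of that layout, with hypothetical paths (shared/ sits next to, not inside, the release tree that holds code):

cr0x@server:~$ sudo mkdir -p /srv/www/example/shared
cr0x@server:~$ sudo install -d -o www-data -g www-data -m 0750 /srv/www/example/shared/uploads   # the app may write here; nothing should execute from here
cr0x@server:~$ sudo find /srv/www/example/current/public -user www-data -print | head            # sanity check: code and assets should not be owned by the web user

Point the application’s upload path at shared/uploads and serve it as static content only; if your stack runs PHP or CGI, make sure that handler is not mapped for anything under uploads.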

Conclusion: next steps that won’t bite you later

403s from permissions aren’t mysterious. They’re deterministic: somewhere in the path, the web server user can’t traverse or read. Fixing that doesn’t require 777. It requires an ownership model you can explain to a sleep-deprived coworker at 3 a.m.

Do this next:

  1. Add namei -l and “test as www-data” to your runbook. Make it non-optional.
  2. Pick a permission pattern (shared group with setgid, or ACLs) and implement it for every site. Consistency is the security feature.
  3. Separate writable paths from the web root. Your future incident response self will send a thank-you note.
  4. Audit for world-writable directories and remove them. If something breaks, that’s not a reason to give up—it’s proof you needed the audit.

If you treat permissions as architecture instead of cleanup, 403s become a quick fix, not a recurring character in your on-call rotation.
