You deploy. You reload. You hit refresh. Instead of your site you get a crisp, unhelpful 403 or 404. Yesterday it worked. Today Nginx is acting like it’s never heard of your files.
On Debian 13, the fastest path out is not “stare at the config until your eyes dry out.” It’s to prove whether you’re dealing with a permissions wall, a routing/config mismatch, or a safety mechanism (AppArmor, symlinks, chroot-ish setups) doing its job. This piece is the pragmatic workflow I use on production boxes: minimal guessing, maximum signal.
Fast diagnosis playbook
This is the triage order that wins under pressure. It’s biased toward answering the only question that matters first: “Is Nginx allowed to read the file it’s trying to serve, and is it even trying to serve the file I think it is?”
First: read the error log for the exact request
Don’t start by editing configuration. Start by watching what Nginx says at the moment of failure. For 403/404, the error log usually contains the truth in one line: the resolved path, the failure reason, and sometimes which server block caught the request.
Second: confirm which server block is handling the request
Half of “sudden 404” incidents are actually “you’re on the wrong virtual host.” Debian changes, certificate renewals, added server blocks, or a new default site can quietly steal traffic.
Third: validate path resolution (root/alias/try_files)
403/404 often comes down to: the URI you requested does not map to the file you think it maps to. Nginx has a very literal mapping model. If you are wrong by one slash, you’re wrong.
Fourth: test filesystem permissions across the entire path
Nginx doesn’t only need read permission on the file; it needs execute (“traverse”) permission on every directory in the path. One tight directory in the middle yields “permission denied” even if the file itself is world-readable.
Fifth: check LSM policy (AppArmor on Debian is common)
On Debian, AppArmor profiles can deny reads in a way that looks like ordinary Unix permissions or “file not found.” Your logs will tell you if you listen.
Sixth: verify you didn’t reload a config that doesn’t match the running process
Systemd can show “active (running)” while Nginx is serving an old config because a reload failed, or because you reloaded the wrong instance/container/chroot. Validate the loaded config quickly.
Operational rule: if you can’t reproduce it with curl -v while tailing access+error logs, you’re debugging a rumor.
The instant mental model: what 403 vs 404 really means in Nginx
Nginx is deterministic. If you think it’s random, you just haven’t found the input that makes it behave the way it does.
403 Forbidden: Nginx found the “place,” but won’t serve the thing
403 is commonly one of these:
- File exists but unreadable by the Nginx worker user.
- Directory traversal blocked (no execute bit on a parent directory).
- Directory index forbidden: you requested a directory and Nginx can’t find an index file and autoindex is off.
- Explicit deny rules in config (e.g., deny all; or IP allowlists).
- Symlink policy: disable_symlinks or odd mount options can cause denial.
- AppArmor denial.
404 Not Found: Nginx couldn’t map the URI to a file (or chose not to reveal it)
404 often means “wrong root/alias” or “wrong server block,” but it can also be a deliberate disguise: some configs map access-denied situations to 404 to avoid leaking existence of sensitive files.
Why you can’t rely on the status code alone
Nginx can be configured to return 404 for forbidden content, or to rewrite to an internal location that returns something else. So the play is: status code is a hint; logs are the evidence.
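As a concrete illustration of that disguise, here is a hypothetical config fragment (location names are examples, not from any specific site) that re-issues every 403 as a 404:

```nginx
# Hypothetical illustration: make "forbidden" and "not found"
# indistinguishable to clients. Good for security, confusing for on-call.
location /private/ {
    # Any 403 generated here is re-issued to the client as a 404.
    error_page 403 =404 /404.html;
}

location = /404.html {
    internal;   # only reachable via error_page, never by direct request
}
```

If you inherit a config like this, the access log will show 404 while the error log shows a permission failure; only the error log tells the truth.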
Joke #1: When an on-call says “it’s just a 404,” I hear “it’s just a fire, but it’s in the logs.”
Interesting facts and historical context (useful, not trivia)
- Nginx was built for predictable concurrency (event-driven model) where a misrouted request can fail very consistently at scale—great for debugging if you look at one representative request.
- 403 vs 404 has security history: many orgs intentionally return 404 for protected paths to reduce endpoint enumeration.
- The Unix “execute” bit on directories means “may traverse,” not “may execute.” A directory can be readable but not traversable, which confuses even seasoned developers.
- Debian’s packaging defaults favor safety: the standard site layout under /etc/nginx/sites-available and sites-enabled is designed to reduce accidental exposure, but it also creates “wrong vhost” incidents during changes.
- AppArmor landed as a mainstream control in many distros to provide mandatory access control without the operational lift of SELinux, and it can quietly block paths outside expected web roots.
- Alias vs root has been a long-running footgun: Nginx’s alias behaves differently than root in location blocks, and one missing slash can turn a valid URI into a guaranteed 404.
- Nginx “internal” locations are widely used for auth and error handling; they can make your browser see 404 while Nginx is actually hitting a permission failure elsewhere.
- Default server selection rules matter: if no server_name matches, Nginx picks a default. Adding a new server block can change which block becomes the default and cause “sudden” 404s.
Practical tasks: commands, what the output means, and what you decide next
These are real tasks I run in production. Copy/paste friendly. Each includes: command, what to look for, and the decision it triggers.
Task 1: Reproduce with curl and capture headers
cr0x@server:~$ curl -sv -o /dev/null http://example.internal/static/app.css
* Trying 127.0.0.1:80...
* Connected to example.internal (127.0.0.1) port 80 (#0)
> GET /static/app.css HTTP/1.1
> Host: example.internal
> User-Agent: curl/8.6.0
> Accept: */*
< HTTP/1.1 404 Not Found
< Server: nginx/1.26.2
< Date: Mon, 29 Dec 2025 10:12:32 GMT
< Content-Type: text/html
< Content-Length: 153
< Connection: keep-alive
Meaning: you confirmed it’s Nginx responding, not a CDN or upstream, and you have the exact URI and Host header.
Decision: keep the exact Host value; you’ll use it to identify the server block.
Task 2: Tail error log while reproducing
cr0x@server:~$ sudo tail -Fn0 /var/log/nginx/error.log
2025/12/29 10:12:32 [error] 1842#1842: *921 open() "/srv/www/example/static/app.css" failed (2: No such file or directory), client: 127.0.0.1, server: example.internal, request: "GET /static/app.css HTTP/1.1", host: "example.internal"
Meaning: Nginx tried to open a concrete file path. Error code (2) is “No such file or directory.” That’s not permissions; it’s path mapping or missing file.
Decision: verify the file exists at that exact path and confirm root/alias logic for /static/.
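When I am triaging under pressure, I extract the two facts that matter from that log line mechanically. This is my own throwaway sketch (the sample line is hardcoded for illustration; in real use feed it `tail -n1 /var/log/nginx/error.log`):

```shell
# Sketch: pull the resolved path and errno out of an nginx error.log line,
# so you can branch on "mapping problem" (errno 2) vs "permissions" (errno 13).
line='2025/12/29 10:12:32 [error] 1842#1842: *921 open() "/srv/www/example/static/app.css" failed (2: No such file or directory), client: 127.0.0.1'

path=$(printf '%s\n' "$line" | sed -n 's/.*open() "\([^"]*\)".*/\1/p')
errno=$(printf '%s\n' "$line" | sed -n 's/.*failed (\([0-9]*\):.*/\1/p')

printf 'path=%s errno=%s\n' "$path" "$errno"
case "$errno" in
  2)  echo "verdict: mapping/deploy problem (file not found)" ;;
  13) echo "verdict: permissions problem (denied)" ;;
  *)  echo "verdict: something else; read the full line" ;;
esac
```

The errno is the branch point for the rest of the playbook: (2) sends you to root/alias/try_files and the deploy, (13) sends you to ownership, traverse bits, and AppArmor.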
Task 3: Tail access log to confirm which status code Nginx actually wrote
cr0x@server:~$ sudo tail -n 3 /var/log/nginx/access.log
127.0.0.1 - - [29/Dec/2025:10:12:32 +0000] "GET /static/app.css HTTP/1.1" 404 153 "-" "curl/8.6.0"
Meaning: confirms it’s not a browser cache artifact or mixed status codes. One request, one code.
Decision: continue on Nginx+filesystem, not on upstream app.
Task 4: Check Nginx config syntax before deeper investigation
cr0x@server:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Meaning: you’re not chasing a broken reload that never applied.
Decision: proceed to “which server block and which root” rather than “is Nginx even parsing.”
Task 5: Dump the full loaded configuration (the “what is Nginx actually running?” step)
cr0x@server:~$ sudo nginx -T 2>/dev/null | sed -n '1,120p'
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Meaning: you see includes and key defaults, especially the worker user (www-data on Debian by default).
Decision: confirm the site file you edited is in sites-enabled and not shadowed by another include.
Task 6: Identify which server block matches your Host header
cr0x@server:~$ sudo nginx -T 2>/dev/null | awk '
$1=="server" && $2=="{" {inserver=1; depth=1; sn=""; ls=""; next}
inserver && /\{/ {depth++}
inserver && $1=="server_name" {sub(/^[ \t]+/, ""); sn=$0}
inserver && $1=="listen" {sub(/^[ \t]+/, ""); ls=$0}
inserver && /\}/ {depth--; if (depth == 0) {print ls " | " sn; inserver=0}}
'
listen 80; | server_name example.internal;
listen 80 default_server; | server_name _;
Meaning: you can see whether your host is explicitly matched, or falling into the default server.
Decision: if you’re hitting server_name _; or a default server unexpectedly, fix vhost ordering / default_server, and stop touching permissions.
Task 7: Validate the resolved file path exists
cr0x@server:~$ sudo ls -la /srv/www/example/static/app.css
ls: cannot access '/srv/www/example/static/app.css': No such file or directory
Meaning: the file genuinely doesn’t exist at the path Nginx tried.
Decision: find the correct web root, fix root/alias/try_files, or fix your deployment that didn’t ship the asset.
Task 8: If the file exists, test read access as the Nginx user
cr0x@server:~$ sudo -u www-data head -c 64 /srv/www/example/static/app.css
head: cannot open '/srv/www/example/static/app.css' for reading: Permission denied
Meaning: classic permission issue. You now have proof with the same user Nginx runs as.
Decision: fix ownership/permissions/ACLs along the directory chain. Don’t “chmod 777” your way into a breach report.
Task 9: Check directory traversal bits along the path
cr0x@server:~$ namei -l /srv/www/example/static/app.css
f: /srv/www/example/static/app.css
drwxr-xr-x root root /
drwxr-xr-x root root srv
drwx------ root root www
drwxr-xr-x root root example
drwxr-xr-x root root static
-rw-r--r-- root root app.css
Meaning: /srv/www is drwx------. Even if the file is readable, www-data cannot traverse that directory, so Nginx will fail.
Decision: adjust directory permissions (execute bit for the Nginx user/group) or move content into a web root that’s meant to be served.
Task 10: Spot AppArmor denials quickly
cr0x@server:~$ sudo journalctl -k -g apparmor --since "10 minutes ago"
Dec 29 10:11:58 server kernel: audit: type=1400 audit(1767003118.123:91): apparmor="DENIED" operation="open" profile="nginx" name="/srv/www/example/static/app.css" pid=1842 comm="nginx" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Meaning: mandatory access control blocked the read. Unix permissions might be fine, but policy says “no.”
Decision: either adjust the profile to allow that path, or serve files from allowed locations. Changing chmod won’t fix this.
Task 11: Confirm the running master/worker user and processes
cr0x@server:~$ ps -o user,pid,cmd -C nginx
USER PID CMD
root 1721 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 1842 nginx: worker process
www-data 1843 nginx: worker process
Meaning: workers run as www-data. That’s the identity that must read your content.
Decision: stop guessing about “which user.” Fix access for www-data (or for the configured user if you changed it).
Task 12: Detect whether the request is being rewritten to something else
cr0x@server:~$ sudo grep -RnE 'try_files|rewrite|return 404|error_page|internal' /etc/nginx/sites-enabled
/etc/nginx/sites-enabled/example.conf:27: try_files $uri $uri/ /index.html;
/etc/nginx/sites-enabled/example.conf:41: error_page 403 404 = /errors/notfound.html;
/etc/nginx/sites-enabled/example.conf:42: location = /errors/notfound.html { internal; }
Meaning: even a 403 could be mapped to an internal 404 page, and try_files can route missing static files to an SPA entrypoint. Your browser might see 404 while Nginx is doing exactly what you told it.
Decision: decide if the rewrite behavior is intentional; if not, adjust try_files or error_page mapping.
Task 13: Check whether you’re hitting the “wrong” default site
cr0x@server:~$ ls -l /etc/nginx/sites-enabled
total 0
lrwxrwxrwx 1 root root 34 Dec 29 09:48 default -> /etc/nginx/sites-available/default
lrwxrwxrwx 1 root root 39 Dec 29 09:49 example.conf -> /etc/nginx/sites-available/example.conf
Meaning: Debian’s default site is enabled. If it has default_server on your listen directive, it can catch unmatched hosts and return the wrong content (or 404).
Decision: either disable the default site in production, or make your intended vhost the default explicitly.
Task 14: Verify the filesystem mount options aren’t sabotaging you
cr0x@server:~$ findmnt -no SOURCE,TARGET,FSTYPE,OPTIONS /srv
/dev/mapper/vg0-srv /srv ext4 rw,relatime
Meaning: mount options usually aren’t the culprit for 403/404, but they can matter (e.g., weird read-only state, bind mounts, or overlay behavior in containers).
Decision: if you see unexpected ro or bind/overlay mounts, confirm deployment paths and container mounts match your expectations.
Task 15: Confirm index handling when a directory is requested
cr0x@server:~$ curl -svo /dev/null http://example.internal/static/
* Trying 127.0.0.1:80...
> GET /static/ HTTP/1.1
> Host: example.internal
< HTTP/1.1 403 Forbidden
< Server: nginx/1.26.2
Meaning: requesting a directory can produce 403 when no index file exists and autoindex is off.
Decision: add an index (e.g., index.html), enable autoindex (rarely correct for production), or change routing to avoid directory URIs.
Task 16: Confirm the deploy didn’t change file ownership unexpectedly
cr0x@server:~$ sudo stat -c '%U %G %a %n' /srv/www/example /srv/www/example/static /srv/www/example/static/app.css
root root 755 /srv/www/example
root root 750 /srv/www/example/static
deploy deploy 640 /srv/www/example/static/app.css
Meaning: the file is owned by deploy with mode 640. If www-data is not in group deploy, it cannot read it.
Decision: fix ownership/group strategy (common: group-readable content + setgid directories), or use ACLs for www-data.
Task 17: Check for symlink-related denial
cr0x@server:~$ sudo grep -Rn 'disable_symlinks' /etc/nginx
/etc/nginx/nginx.conf:63: disable_symlinks if_not_owner from=$document_root;
Meaning: if your content uses symlinks (common in deploys), Nginx may refuse to serve them depending on ownership.
Decision: either align ownership, remove symlinks for served content, or adjust the policy intentionally (with eyes open).
Task 18: Confirm reload actually happened and didn’t fail silently
cr0x@server:~$ sudo systemctl reload nginx; sudo systemctl status nginx --no-pager -l
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; preset: enabled)
Active: active (running) since Mon 2025-12-29 09:40:10 UTC; 33min ago
Docs: man:nginx(8)
Main PID: 1721 (nginx)
Tasks: 5 (limit: 18754)
Memory: 8.4M
CPU: 1.142s
CGroup: /system.slice/nginx.service
├─1721 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
├─1842 "nginx: worker process"
└─1843 "nginx: worker process"
Meaning: status shows Nginx is running, but doesn’t prove the reload applied. Pair it with nginx -T and log timestamps when making changes.
Decision: if you suspect reload failure, check journald entries for reload errors and do nginx -t again.
Permissions failures that look like config bugs
The directory traverse trap (the one that bites adults)
You can set chmod 644 on a file all day. If any parent directory is missing the execute bit for the Nginx user (or its group), Nginx can’t get there. The result is a 403, and the error log will often say (13: Permission denied).
On Debian, Nginx typically runs as www-data. So the canonical test is: can www-data read the file? Not “can root read it.” Root can read your diary; Nginx can’t.
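To make the trap tangible, here is a minimal namei-style walk of my own (a sketch, not a replacement for namei -l), demonstrated against a throwaway tree with a deliberately locked middle directory:

```shell
# Print the mode of every path component on the way to a file, walking
# upward, so the one directory missing its execute bit stands out.
base=$(mktemp -d)
mkdir -p "$base/www/example/static"
printf 'body{}' > "$base/www/example/static/app.css"
chmod 700 "$base/www"        # the trap: no traverse bit for group/other

walk() {
  p=$1
  while [ "$p" != "/" ] && [ -n "$p" ]; do
    stat -c '%A %U %G %n' "$p"   # mode, owner, group, path
    p=$(dirname "$p")
  done
}
walk "$base/www/example/static/app.css"
```

In this tree the file is world-readable, but the drwx------ directory above it means www-data would still get a 403; that line is what you are scanning for.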
Ownership drift during deploys
CI/CD pipelines that rsync files, unpack tarballs, or switch symlinks can change ownership and modes. A deploy user drops files as deploy:deploy with 640, and suddenly static assets are dead.
Fix it at the source: set a consistent group for served content, enforce umask, or apply ACLs. “Fix permissions after each deploy” is not a strategy; it’s a recurring incident.
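One shape that strategy can take, sketched against a scratch directory (assumption: workers run as www-data and content only needs to be group/world readable; I omit the chgrp to www-data here so the sketch runs anywhere):

```shell
# Sketch of a "consistent permissions" content layout. The setgid bit (2xxx)
# on the directory makes files created inside inherit the directory's group,
# which is what keeps deploy-created files readable by the web server's group.
root_dir=$(mktemp -d)
mkdir -p "$root_dir/static"
chmod 2755 "$root_dir/static"          # setgid + traversable by everyone
printf 'body{}' > "$root_dir/static/app.css"
chmod 644 "$root_dir/static/app.css"   # owner rw, group/world read

# The invariants a post-deploy smoke test would assert:
mode_dir=$(stat -c '%a' "$root_dir/static")
mode_file=$(stat -c '%a' "$root_dir/static/app.css")
echo "dir=$mode_dir file=$mode_file"
```

Bake these modes into the artifact or the deploy step itself; checking them in a health probe is cheap and catches ownership drift before users do.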
AppArmor: permissions you didn’t know you had
Debian commonly uses AppArmor. If your Nginx profile allows /var/www/** and you serve from /srv/www/**, Nginx can log “permission denied” even if the Unix mode bits are perfect.
Kernel audit logs will name the profile and the path. That’s your smoking gun. If you don’t check, you’ll spend hours “fixing” chmod and achieving nothing.
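If the denial is real and the path move is intentional, the usual remedy is a local profile override rather than editing the shipped profile. Debian's nginx package does not ship an AppArmor profile by default, so the profile name below is hypothetical; check aa-status for what is actually loaded on your host:

```text
# File: /etc/apparmor.d/local/usr.sbin.nginx  (hypothetical profile name)
# Allow read access to the relocated web root:
/srv/www/ r,
/srv/www/** r,
# Then reload the parent profile, e.g.:
#   sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.nginx
```

After reloading, re-run the failing curl and confirm the DENIED lines stop appearing in the kernel log.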
Symlinks, ownership, and “disable_symlinks”
Symlinks are popular for atomic deploys: current -> releases/2025-12-29. Nginx can be configured to restrict symlink serving to prevent a class of path traversal issues and mis-ownership surprises. That’s a good control—until someone forgets it exists.
If you see symlink-related errors, decide whether you want Nginx to serve symlinks at all. If yes, make it consistent and explicit; if no, make your deploy stop using symlinks under the document root.
Config failures that look like permissions
Wrong server block (default server roulette)
If the Host header doesn’t match any server_name, Nginx selects a default. That default might have a different root, might deny everything, or might point to an empty directory. Result: 404 or 403 “suddenly.”
The fix is boring: make your server_name match reality, and control default_server intentionally. Don’t let “whatever file sorts first” define production behavior.
Alias vs root: similar words, different physics
This one deserves bluntness: if you use alias without understanding it, you will eventually ship a 404.
root appends the full request URI to the base path. alias replaces the matched location prefix with the alias path, which means the presence or absence of a trailing slash changes the resulting path. Nginx is not “smart” here; it’s consistent.
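To make the mapping concrete, here is a toy shell model of both rules (my own sketch of the behavior, not Nginx source; the paths are invented):

```shell
# Toy model of nginx path mapping for: location /static/ { ... }
#   root  /srv/www/example;  -> base path + FULL request URI
#   alias /srv/assets/;      -> location prefix REPLACED by alias path
uri="/static/app.css"

resolved_root="/srv/www/example${uri}"
resolved_alias="/srv/assets/${uri#/static/}"
echo "root  -> $resolved_root"     # /srv/www/example/static/app.css
echo "alias -> $resolved_alias"    # /srv/assets/app.css

# The classic footgun: alias without the trailing slash fuses path segments.
bad="/srv/assets${uri#/static/}"
echo "bad   -> $bad"               # /srv/assetsapp.css -> guaranteed 404
```

That fused "/srv/assetsapp.css" is exactly the kind of path you will see in the error log's open() line when an alias slash goes missing.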
try_files: the silent rerouter
try_files is fantastic: it lets you serve static assets when present and fall back to an app route when not. It’s also a machine for creating confusing 404s when you route missing files to a fallback that itself is missing or forbidden.
When debugging, locate the exact try_files path sequence and verify the fallback exists and is readable.
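To reason about which candidate wins, I sometimes emulate the sequence by hand. This is a rough approximation (real try_files treats $uri/ as a directory check plus index resolution; I collapse that to $uri/index.html), run against a scratch web root:

```shell
# Rough emulation of: try_files $uri $uri/ /index.html;
# Shows how a missing asset silently becomes the SPA entrypoint.
root=$(mktemp -d)
printf '<html>app</html>' > "$root/index.html"

try_files() {
  uri=$1
  for candidate in "$root$uri" "$root$uri/index.html" "$root/index.html"; do
    [ -f "$candidate" ] && { echo "$candidate"; return; }
  done
  echo "404"
}

hit=$(try_files /index.html)       # exists -> served directly
miss=$(try_files /static/app.css)  # missing -> falls back to /index.html
echo "hit=$hit"
echo "miss=$miss"
```

Note the failure mode: the missing CSS file returns HTTP 200 with HTML content, which is arguably worse than a clean 404 because it hides the deploy bug.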
Index handling: 403 that is not “permissions”
A directory request like /static/ can return 403 because Nginx forbids listing directories unless you enable autoindex, and because no index file exists. This is not the OS denying access; it’s Nginx refusing to provide a directory listing.
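The fix is usually one of two explicit choices in the location block. A hypothetical fragment (directory name is an example):

```nginx
# Hypothetical fix for "403 on /static/": give the directory an index,
# and keep listings off explicitly so the intent is documented.
location /static/ {
    index index.html;    # served when the directory itself is requested
    autoindex off;       # directory listings stay disabled in production
}
```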
Custom error_page masking
Corporate security teams love mapping 403 to 404. Sometimes they’re right. But it complicates on-call life. If you inherit a config, search for error_page and internal locations before you decide what the status code “means.”
Joke #2: “We mapped 403 to 404 for security” is like repainting your check-engine light—technically effective until the engine explodes.
Common mistakes: symptoms → root cause → fix
1) Symptom: 404 for every path on a known domain
Root cause: wrong server block is catching requests (Host mismatch; default_server changed; new vhost added).
Fix: verify Host with curl, then confirm matching server_name. Make intended vhost explicit; disable Debian default site if it’s not needed.
2) Symptom: 403 on directories, 200 on known files
Root cause: directory requested without index file; autoindex disabled.
Fix: add index index.html; and ensure the file exists, or avoid linking to directory URIs, or intentionally enable autoindex on; (rarely correct).
3) Symptom: 403 on specific static files after deploy
Root cause: ownership/mode drift: files created as 640 by deploy user; group doesn’t include www-data.
Fix: enforce consistent ownership, e.g., root:www-data with 644, or setgid directories with group-readable files, or ACLs for www-data.
4) Symptom: 404, but error log shows “permission denied”
Root cause: config masks forbidden as not found via error_page mapping or internal rewrites.
Fix: remove masking while debugging; inspect error_page and rewrite rules; fix the underlying permission/LSM issue.
5) Symptom: 404 on assets under /static, but file exists elsewhere
Root cause: alias used with wrong trailing slash, or root declared at the wrong level (server vs location).
Fix: compute the resolved path. Prefer consistent patterns; validate with error log open() path.
6) Symptom: Everything works as root when you “test,” but Nginx still 403s
Root cause: you tested as root, not as www-data, and you missed directory traverse bits or ACLs.
Fix: always test reads using sudo -u www-data and namei -l.
7) Symptom: Permissions look correct, still 403/404
Root cause: AppArmor denies access to the path (often content moved to /srv, /data, or a bind mount).
Fix: confirm denial in kernel logs; update AppArmor profile or relocate content into allowed paths.
8) Symptom: Random 404 after adding a new site
Root cause: new server block becomes default for a listener; or overlapping server_name and listen blocks cause ambiguous matching.
Fix: set exactly one default per listen socket; validate with nginx -T and targeted curl Host headers.
Checklists / step-by-step plan
Step-by-step: diagnose a sudden 403
- Reproduce with curl using the correct Host header. Capture status and server header.
- Tail error.log while reproducing; look for
(13: Permission denied),directory index of ... is forbidden, or explicitaccess forbidden by rule. - Confirm server block (server_name match, default_server behavior).
- Compute the target file path from log line. Don’t guess.
- Test read as www-data and run
namei -lto catch a locked directory. - Check AppArmor denials in kernel logs.
- Fix the smallest thing (one directory mode, one ACL, one root line), reload, re-test.
Step-by-step: diagnose a sudden 404
- Confirm it’s Nginx and not upstream by checking headers and access log.
- Find the open() path in error.log. If it’s
(2: No such file or directory), you’re in mapping/deploy territory. - Verify the file exists at that exact path.
- If file exists elsewhere, review
root/aliasandlocationmatching; checktry_files. - Confirm you’re in the intended vhost and not a default catchall.
- Search for masking rules that turn forbidden into not found.
- Reload with validation (nginx -t, then reload) and re-test.
Operational checklist: harden against repeats
- Disable the Debian default site on production hosts unless you really want it.
- Log the resolved path (error.log already does; keep it enabled at a reasonable level).
- Make ownership and permissions part of the deploy artifact, not a post-step.
- Keep served content under a small number of known roots and align AppArmor policy to those.
- Use nginx -t in CI and before reloads; fail fast.
- When using symlink deploys, decide on a disable_symlinks policy intentionally and document it.
Three corporate-world mini-stories (all anonymized, all plausible)
Mini-story #1 (wrong assumption): “403 means our WAF is blocking it”
The symptom looked clean: a burst of 403s on a static assets path right after a minor release. The on-call engineer assumed it was the edge/WAF layer, because the 403 page didn’t match the usual Nginx error page. Everyone sprinted to the security dashboard.
Meanwhile, the error log on the origin had a different story: open() ".../app.css" failed (13: Permission denied). The deploy job had switched from packaging files as root:www-data to packaging them as deploy:deploy with a restrictive umask. The content landed as 640, unreadable to www-data.
The “WAF theory” lasted because people trusted the HTML response body more than the server logs. But the body was coming from a custom error_page mapping that returned a branded page for both 403 and 404. The status code was real; the page was decoration.
The fix was boring and durable: make the build produce correct ownership and mode in the artifact, enforce it in the deploy step, and add a smoke test that reads a known file as www-data before marking the deploy healthy.
The lesson wasn’t “don’t blame the WAF.” It was: stop treating status pages as evidence. Logs are evidence.
Mini-story #2 (optimization that backfired): “Let’s harden symlinks for security”
A platform team tightened Nginx with disable_symlinks if_not_owner from=$document_root;. The intent was reasonable: prevent a class of symlink tricks and reduce blast radius if a developer accidentally points to something sensitive.
Then a product team rolled out an atomic deploy pattern using symlinks under the document root: /srv/www/app/current pointed to a new release directory created by the deploy user. Ownership differed between the symlink target and the document root, because the release dirs were created by CI jobs with a different UID mapping.
Result: intermittent 403s depending on which assets were being resolved through symlinks, and which files happened to be owned by which user after the build step. The errors were correct. The configuration was correct. The system design was not aligned with the policy.
The rollback fixed it fast, but it was a wake-up call: “hardening toggles” aren’t free. If you enable a security control that changes filesystem semantics, you must validate it against your deployment model. Security and reliability are not enemies, but they do require coordination.
Mini-story #3 (boring but correct practice that saved the day): “We always tail logs during the repro”
A team had a simple rule: when you can reproduce a web failure, you reproduce it from the server with curl while tailing logs. No exceptions, no debating. It wasn’t a heroic culture; it was a time-saving culture.
One morning, a Debian 13 host started returning 404 for a single domain while other domains on the same Nginx instance were fine. The knee-jerk assumption was “someone deleted files” or “rsync failed.” The on-call did the ritual anyway: curl + tail access/error logs.
The error log showed Nginx looking under the wrong root entirely, and the server field in the error line didn’t match the intended domain. That led straight to the real issue: a new vhost with listen 80 default_server; had been deployed as part of another team’s change, silently stealing unmatched hosts.
The fix took minutes: remove default_server from the unintended vhost, reload, verify host routing with curl. No file restores. No permission churn. No dramatic war room.
The “boring practice” wasn’t tailing logs. It was agreeing that evidence beats theory, and making that agreement operational.
How to tell instantly: the evidence hierarchy
If you want a single takeaway, it’s this hierarchy—use it to keep yourself honest:
- Error log line for the request (contains the resolved path and kernel error code).
- Access log entry (confirms host, URI, status code, time).
- Repro from the host with curl (removes DNS and edge variability).
- Filesystem test as www-data (proves actual permissions).
- LSM audit logs (proves mandatory policy denials).
- Only then: configuration review and refactoring.
One quote I keep in mind when incidents get noisy:
paraphrased idea — W. Edwards Deming: Without data, you’re just another person with an opinion.
That’s basically on-call in one sentence.
FAQ
1) “If it’s a 404, it can’t be permissions, right?”
Wrong. It’s often mapping, but configs can intentionally convert 403 to 404 (error_page or internal rewrites). Always check error.log for (13) vs (2).
2) “Where is the Nginx error log on Debian 13?”
Typically /var/log/nginx/error.log, unless overridden by error_log in /etc/nginx/nginx.conf or a site file. Confirm with nginx -T.
3) “Why do I get 403 for a directory but 200 for files inside it?”
Because requesting the directory triggers index handling. If there’s no index file and autoindex is off, Nginx returns 403 (“directory index … is forbidden”). That’s not OS permissions.
4) “What’s the fastest way to confirm a vhost mismatch?”
Use curl -sv with the intended Host header, then inspect the server: field in the error log line (it often prints the server_name it matched). Also dump config with nginx -T and search server blocks.
5) “I changed permissions and it still fails. What now?”
Check AppArmor denials in kernel logs. If you see apparmor="DENIED" for nginx, chmod won’t help. Either adjust the profile or serve from permitted paths.
6) “Should I run Nginx workers as a different user than www-data?”
Only if you have a clear isolation goal and operational discipline. Changing the worker user without fixing ownership and ACL strategy is a great way to manufacture 403s.
7) “Why does namei matter if I already checked file permissions?”
Because directories need execute permission for traversal. namei -l shows permissions on each path segment, so you can spot the one locked directory that breaks everything.
8) “How do I avoid alias/root mistakes for static files?”
Pick a convention and stick to it. If you use alias, be meticulous about trailing slashes and confirm the resolved path via error log. If you can use root cleanly, it’s often simpler.
9) “Can a failed reload keep Nginx serving old config?”
Yes. A reload can fail due to syntax or permission issues. Always run nginx -t before reload, and verify with nginx -T when the stakes are high.
10) “Why would a deploy cause 404 instead of 403?”
If assets weren’t shipped to the expected path (build step changed, rsync excludes, wrong artifact), Nginx will literally not find them: (2: No such file or directory). That’s a 404, and it’s your deploy pipeline talking.
Conclusion: next steps you can actually do
If your Debian 13 Nginx suddenly returns 403/404, don’t negotiate with your assumptions. Do the fast playbook:
- Reproduce with curl -sv using the real Host header.
- Tail /var/log/nginx/error.log and capture the exact open() line (path + errno).
- Confirm vhost selection (nginx -T and server_name/default_server).
- Test filesystem access as www-data and verify directory traversal with namei -l.
- If it still doesn’t add up, check AppArmor denials in kernel logs.
Then fix one thing, reload, and re-test. The best incident response is the one that leaves behind a guardrail: a deploy permission policy, a vhost sanity check, or an AppArmor rule that matches your actual file layout.