You deploy a clean Debian 13 box, drop in Nginx, add a “simple” HTTP→HTTPS redirect, and suddenly the browser hits you with
“Too many redirects”. You clear cookies. You try incognito. You blame the browser. Then you blame Nginx.
Everyone is innocent except your configuration.
Redirect loops are rarely “an Nginx bug”. They’re almost always a disagreement about what the canonical URL should be
(host, scheme, or both), amplified by a reverse proxy or CDN that lies—sometimes politely—about whether the request was HTTPS.
Let’s fix it properly, with evidence, not cargo-cult snippets.
What “Too many redirects” actually means
Your browser (or client) requested a URL. The server responded with a redirect (3xx) to another URL. The client followed it.
That server responded with another redirect. Repeat until the client hits its maximum redirect limit and gives up.
In Nginx land, loops tend to be:
- Scheme loops: HTTP→HTTPS→HTTP→… because something upstream/downstream disagrees about the scheme.
- Host loops: example.com→www.example.com→example.com→… because multiple layers “canonicalize” differently.
- Port/scheme mixups: redirecting to https://host:80 or http://host:443 accidentally, often via variables.
- Application-level redirects: app redirects to HTTPS, Nginx redirects to HTTP (or different host), they fight.
- Cookie/HSTS-induced “sticky” behavior: your tests differ because your browser cached a policy or cookie.
The fix is not “add more if statements” until it stops. The fix is to define exactly one authority for canonical host and scheme,
then make every other layer follow it. Your stack should have one opinion, not a committee.
Joke #1: Redirect loops are like meetings about meetings—everyone keeps “moving it forward” and somehow nothing ever arrives.
Redirect facts and history that matter in production
Some short context points that seem academic until they bite you on a Tuesday at 02:00:
- HTTP redirects predate “modern web” UX. The 301/302 codes were in early HTTP specs; caches and clients still treat them differently today.
- 301 can be cached aggressively. Browsers and intermediaries may cache a 301 longer than you expect, so “I fixed it” might not propagate to your own laptop.
- 302 historically meant “temporary”, but clients got creative. That mess is why 307 and 308 exist: they preserve the method semantics more predictably.
- HSTS changes the game. Once a browser learns “always use HTTPS for this host”, it may never even attempt HTTP again, confusing your debugging.
- Reverse proxies made “is this HTTPS?” ambiguous. If TLS terminates upstream, your Nginx sees plain HTTP and you must rely on headers or PROXY protocol.
- The Host header is both powerful and dangerous. It enables virtual hosting, but trusting it blindly can create open redirects or canonical chaos.
- CDNs introduced “Flexible SSL” modes. Those modes talk HTTPS to the browser but HTTP to your origin—prime territory for HTTP→HTTPS loops.
- Default server selection is a silent footgun. Nginx will pick a “default_server” if nothing matches; a redirect there can hijack unrelated hosts.
- Canonicalization is not just SEO. It affects cookies (domain scoping), CORS, OAuth redirect URIs, and session stickiness.
One quote, because it holds up in operations: Hope is not a strategy.
— Gordon R. Sullivan
Fast diagnosis playbook (check these first)
When someone pings “site down, too many redirects”, don’t start by staring at the Nginx config like it owes you money.
Start by answering three questions quickly and with proof.
1) Is the loop host-based, scheme-based, or app-based?
- Use curl -IL to view the redirect chain and see what changes each hop.
- If it alternates between http/https: scheme problem (or proxy header problem).
- If it alternates between www/non-www: canonical host conflict (Nginx vs app vs CDN).
- If it stays on the same URL but keeps 301/302: app might be redirecting to itself based on headers/cookies.
2) Where is TLS terminated?
- If TLS terminates at Nginx: use $scheme and listen 443 with certs.
- If TLS terminates at a load balancer/CDN: Nginx sees HTTP, so redirects must be based on X-Forwarded-Proto or PROXY protocol.
- If you're unsure: check ss -lntp and your upstream platform config.
3) Who owns canonicalization?
- Pick exactly one: Nginx edge, CDN, or application.
- If both Nginx and app redirect to “canonical”, you’ll eventually create a loop with one minor mismatch (port, host, trailing slash, scheme).
Do those three and you’ll stop guessing. Most loops collapse into a single bad “if ($scheme = http)” behind a proxy, or dueling www rules.
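If you want the chain compressed to its essentials, a small curl pipeline works well; example.com is a placeholder, and --max-redirs just stops curl before it chases a loop fifty times.
cr0x@server:~$ curl -sSIL --max-redirs 10 http://example.com/ | grep -iE '^(HTTP/|location:)'
Each HTTP/ line is one hop; read the location values top to bottom and note whether the scheme, the host, or neither changes.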
Establish ground truth: who terminates TLS and who decides canonical
On Debian 13, Nginx behaves the same as on Debian 12, but your environment probably doesn’t.
“Debian 13” is usually shorthand for “new VM, new defaults, new LB behavior, new cert automation, new copy-pasted config”.
Decide your canonical URL policy (write it down)
Pick one canonical host and one canonical scheme. Examples:
- https://example.com (no www) is canonical; all http and www redirect there.
- https://www.example.com is canonical; all non-www redirect there, and all http redirects to https.
Don’t make “both work” via separate stacks of redirects. Make them arrive at one.
Decide where redirects live
My strong opinion: perform canonical redirects at the edge you control most reliably. For many Debian/Nginx deployments,
that’s Nginx itself. For heavy CDN setups, the CDN can do it—if you’re disciplined and remove the same logic from Nginx/app.
If the application framework also forces HTTPS (common in Django, Rails, Laravel, Spring), you either:
- Disable that in the app and let Nginx do it, or
- Let the app handle it and ensure Nginx doesn’t fight it (no conflicting rewrites, correct forwarded headers).
Practical tasks (commands, outputs, decisions)
You’ll fix this faster by collecting signals. Below are practical tasks you can run on Debian 13.
Each task includes: command, sample output, what it means, and what decision to make.
Task 1: Confirm Nginx is running and which config is loaded
cr0x@server:~$ systemctl status nginx --no-pager
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; preset: enabled)
Active: active (running) since Mon 2025-12-30 09:10:18 UTC; 2h 3min ago
Docs: man:nginx(8)
Main PID: 1642 (nginx)
Tasks: 3 (limit: 18988)
Memory: 7.4M
CPU: 1.231s
CGroup: /system.slice/nginx.service
├─1642 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
├─1643 "nginx: worker process"
└─1644 "nginx: worker process"
Meaning: Nginx is active. You’re not debugging a dead service.
Decision: Proceed to config inspection; if it’s not active, fix service start errors first.
Task 2: Validate config syntax (don’t skip this)
cr0x@server:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Meaning: Syntax is fine. Redirect loops are logical, not parse-time.
Decision: Move to “effective config” and redirect chain.
Task 3: Dump the effective config and find the redirect rules actually applied
cr0x@server:~$ sudo nginx -T 2>/dev/null | sed -n '1,140p'
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events { worker_connections 768; }
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Meaning: Nginx includes both conf.d and sites-enabled.
Decision: Search the full dump for return 301, rewrite, $scheme, and X-Forwarded-Proto.
Task 4: Locate all redirect statements and the server blocks they live in
cr0x@server:~$ sudo nginx -T 2>/dev/null | egrep -n 'server_name|listen|return 30|rewrite|X-Forwarded-Proto|proxy_redirect'
58: server {
59: listen 80 default_server;
60: server_name _;
61: return 301 https://$host$request_uri;
84: server {
85: listen 443 ssl http2;
86: server_name example.com www.example.com;
101: if ($host = www.example.com) { return 301 https://example.com$request_uri; }
129: proxy_set_header X-Forwarded-Proto $scheme;
Meaning: There’s a port 80 default redirect and a host canonical redirect on 443.
Decision: This can still be correct—but if you’re behind a proxy that speaks HTTP to Nginx, $scheme will be http and can trigger loops.
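A quick follow-up search narrows it further: look for redirects that key off $scheme or a forwarded header, since those are the ones that misfire when TLS terminates elsewhere. Treat this grep as a heuristic, not proof; redirects hidden in maps, includes, or the application won't show up.
cr0x@server:~$ sudo nginx -T 2>/dev/null | grep -nE 'if \(\$scheme|if \(\$http_x_forwarded_proto|return 30[0-9]'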
Task 5: Observe the redirect chain from the server itself (bypass your browser)
cr0x@server:~$ curl -IL http://example.com/
HTTP/1.1 301 Moved Permanently
Server: nginx
Location: https://example.com/
HTTP/2 301
server: nginx
location: https://example.com/
HTTP/2 301
server: nginx
location: https://example.com/
Meaning: Same target repeated: https://example.com/ redirects to itself. That’s not “HTTP→HTTPS”. That’s a logic condition firing on HTTPS requests.
Decision: Inspect the 443 server block for an unconditional redirect (often via a mistaken if, a variable, or app upstream behavior).
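If you want the loop quantified from the client side, curl's --write-out variables summarize it in one line; the domain is a placeholder, and curl exits non-zero once --max-redirs is exceeded.
cr0x@server:~$ curl -sS -o /dev/null -L --max-redirs 10 -w 'redirects=%{num_redirects} final=%{url_effective} status=%{http_code}\n' https://example.com/
A healthy canonical setup converges in zero or one redirect; double digits means the layers are still arguing.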
Task 6: Check whether 443 is actually terminating TLS on this host
cr0x@server:~$ sudo ss -lntp | egrep ':(80|443)\s'
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1642,fd=6))
LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=1642,fd=7))
Meaning: Nginx listens on both ports locally. That suggests TLS termination is here (unless a proxy tunnels differently).
Decision: If you are behind a load balancer, confirm whether it connects to origin on 443 or 80.
Task 7: Verify what certificate Nginx is serving (confirms you’re hitting the right box)
cr0x@server:~$ echo | openssl s_client -connect 127.0.0.1:443 -servername example.com 2>/dev/null | openssl x509 -noout -subject -issuer -dates
subject=CN = example.com
issuer=C = US, O = Let's Encrypt, CN = R11
notBefore=Dec 1 00:12:33 2025 GMT
notAfter=Mar 1 00:12:32 2026 GMT
Meaning: TLS termination is indeed at Nginx and the SNI matches.
Decision: Focus on Nginx 443 logic and upstream app redirects rather than external TLS termination.
Task 8: Inspect the access log for repeated 301/302 patterns
cr0x@server:~$ sudo tail -n 8 /var/log/nginx/access.log
203.0.113.44 - - [30/Dec/2025:11:18:19 +0000] "GET / HTTP/2.0" 301 169 "-" "curl/8.10.1"
203.0.113.44 - - [30/Dec/2025:11:18:19 +0000] "GET / HTTP/2.0" 301 169 "-" "curl/8.10.1"
203.0.113.44 - - [30/Dec/2025:11:18:20 +0000] "GET / HTTP/2.0" 301 169 "-" "curl/8.10.1"
Meaning: The same client is being redirected repeatedly on the same path.
Decision: This is likely an unconditional redirect in the 443 server, or an app redirecting and Nginx re-redirecting.
Task 9: Turn on a dedicated redirect debug log format temporarily
cr0x@server:~$ sudo install -m 0644 /dev/null /var/log/nginx/redirect-debug.log
cr0x@server:~$ sudo tee /etc/nginx/conf.d/redirect-debug.conf >/dev/null <<'EOF'
log_format redir '$remote_addr "$host" $server_name $scheme '
'xfp="$http_x_forwarded_proto" xfh="$http_x_forwarded_host" '
'"$request" => $status loc="$sent_http_location" '
'ua="$http_user_agent"';
access_log /var/log/nginx/redirect-debug.log redir;
EOF
cr0x@server:~$ sudo nginx -t && sudo systemctl reload nginx
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Meaning: You now log scheme and forwarded headers for every request.
Decision: Reproduce once, then read the debug log to see which signal drove the redirect.
Task 10: Reproduce and read the redirect debug log
cr0x@server:~$ curl -IL https://example.com/ -H 'Host: example.com'
HTTP/2 301
server: nginx
location: https://example.com/
cr0x@server:~$ sudo tail -n 3 /var/log/nginx/redirect-debug.log
203.0.113.44 "example.com" example.com https xfp="-" xfh="-" "GET / HTTP/2.0" => 301 loc="https://example.com/" ua="curl/8.10.1"
Meaning: Nginx believes scheme is https, yet still returned 301 to the same URL.
Decision: That redirect is not conditional on scheme; it’s a bad canonicalization rule or a rewrite/return in the wrong server/location.
Task 11: List enabled sites and confirm you don’t have dueling server blocks
cr0x@server:~$ ls -l /etc/nginx/sites-enabled/
total 0
lrwxrwxrwx 1 root root 34 Dec 30 09:05 default -> /etc/nginx/sites-available/default
lrwxrwxrwx 1 root root 39 Dec 30 09:07 example -> /etc/nginx/sites-available/example
Meaning: The default site is still enabled. On new installs, it often includes a redirect or a catch-all.
Decision: If your “default” server block is doing redirects, it can intercept unexpected hostnames. Disable it unless you actively use it.
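If you decide the stock default site has no job here, disabling it on Debian's sites-enabled layout is just removing the symlink and reloading; adjust the path if your layout differs, or keep a default server but make it harmless as shown in Case C further down.
cr0x@server:~$ sudo rm /etc/nginx/sites-enabled/default
cr0x@server:~$ sudo nginx -t && sudo systemctl reload nginx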
Task 12: Find which server block answers for a hostname (default_server traps)
cr0x@server:~$ sudo nginx -T 2>/dev/null | awk '
/server \{/ {inside=1; blk=""; next}
inside {blk=blk $0 "\n"}
inside && /\}/ {print "----\n" blk; inside=0}' | egrep -n 'listen 80|listen 443|server_name'
3: listen 80 default_server;
4: server_name _;
11: listen 80;
12: server_name example.com www.example.com;
19: listen 443 ssl http2;
20: server_name example.com www.example.com;
Meaning: There is an explicit default_server on 80 that will answer anything. Fine if it’s a benign 444/404; dangerous if it redirects.
Decision: Make your default server return 444 or a minimal 404 (not a canonical redirect) unless you control every inbound Host header.
Task 13: Check whether the application upstream is doing the redirect
cr0x@server:~$ sudo grep -R "proxy_pass" -n /etc/nginx/sites-available/example
42: proxy_pass http://127.0.0.1:8080;
cr0x@server:~$ curl -I http://127.0.0.1:8080/
HTTP/1.1 301 Moved Permanently
Location: https://example.com/
Server: gunicorn
Meaning: The app is redirecting to HTTPS itself. If Nginx also redirects, you can easily loop (especially behind a proxy).
Decision: Either let the app handle HTTPS redirects and configure forwarded headers correctly, or disable app-level redirect and let Nginx own it.
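It also helps to check whether the app calms down once it is told the request was HTTPS. The probe below assumes the same local upstream on 127.0.0.1:8080 and a framework that honors X-Forwarded-Proto; both are assumptions to adapt.
cr0x@server:~$ curl -I http://127.0.0.1:8080/ -H 'Host: example.com' -H 'X-Forwarded-Proto: https'
If this returns 200 instead of 301, the app trusts the header and the fix is to forward the real client scheme; if it still redirects, the app has its own canonical-host or force-SSL setting that needs aligning.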
Task 14: Check for HSTS that makes your browser “fight” your tests
cr0x@server:~$ curl -I https://example.com/ | egrep -i 'strict-transport-security|location|http/'
HTTP/2 301
location: https://example.com/
strict-transport-security: max-age=31536000; includeSubDomains
Meaning: If HSTS is present, browsers will force HTTPS for that host for the duration of max-age.
Decision: During a redirect incident, test with curl and fresh hosts; don’t trust your “worked in incognito” folklore.
Task 15: Watch Nginx errors during reloads and request handling
cr0x@server:~$ sudo journalctl -u nginx -n 50 --no-pager
Dec 30 11:14:02 server nginx[1642]: 2025/12/30 11:14:02 [notice] 1642#1642: signal process started
Dec 30 11:18:20 server nginx[1644]: 2025/12/30 11:18:20 [info] 1644#1644: *912 client 203.0.113.44 closed keepalive connection
Meaning: No reload failures. Logs won’t directly say “redirect loop” but will show config reload issues and request patterns.
Decision: If reloads fail, fix syntax/permissions first. Otherwise keep working with access logs and curl traces.
The right fix: one canonical redirect path
The reliable pattern is boring. That’s why it works.
- Have one server block for HTTP that only redirects to HTTPS canonical host.
- Have one server block for HTTPS canonical host that serves content (or proxies).
- Optionally, have one HTTPS server block for the non-canonical host that redirects to canonical.
- Never redirect HTTPS requests to the same HTTPS URL again (yes, people do this accidentally with variables).
- Behind a reverse proxy, don't use $scheme unless TLS terminates at Nginx.
Why loops happen: canonical host + HTTPS “helpfulness” colliding
Common loop shape:
- CDN terminates TLS, sends HTTP to origin.
- Origin Nginx says "if scheme is http, redirect to https". It sees $scheme = http and redirects.
- Client follows to HTTPS at the CDN again. The CDN repeats HTTP to origin. Origin redirects again. Loop.
Another loop shape:
- Nginx on 443 says “if host is www, redirect to non-www”.
- App says “if host is non-www, redirect to www” (because someone set an env var wrong).
- Loop. Everybody “is correct” in their own worldview.
Joke #2: Nginx redirects are like office politics—once two departments own the same decision, you’re going to circle forever.
Strong guidance: pick an owner, then delete the other owners
If Nginx is your canonical owner:
- Disable framework “force HTTPS” settings, or set them to trust proxy headers and not re-redirect.
- Make Nginx compute canonical host once and apply it consistently.
If the app is your canonical owner:
- Stop doing host/scheme redirects in Nginx. Use Nginx as a dumb pipe with correct X-Forwarded-* headers (a minimal sketch follows this list).
- Ensure the app trusts those headers only from your proxy IPs (security matters; open redirects and spoofing are real).
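A minimal sketch of that dumb-pipe shape, assuming TLS terminates at this Nginx and the app listens on 127.0.0.1:8080; the point is the absence of any return or rewrite, so the app alone decides about redirects.
# Sketch: Nginx terminates TLS, forwards honest headers, and never redirects itself.
server {
listen 80;
listen 443 ssl http2;
server_name example.com www.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Because the same server block handles both ports, X-Forwarded-Proto reflects the real client scheme and the app can redirect HTTP to HTTPS exactly once.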
Reference Nginx configs that don’t loop
These patterns are intentionally plain. Fancy configs don’t win uptime prizes.
Replace example.com and upstream details as needed.
Case A: TLS terminates at Nginx (most VPS deployments)
Canonical: https://example.com. Redirect everything else to that.
cr0x@server:~$ sudo tee /etc/nginx/sites-available/example >/dev/null <<'EOF'
# Canonical: https://example.com
# 1) HTTP: redirect to canonical HTTPS host
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://example.com$request_uri;
}
# 2) HTTPS for non-canonical host: redirect to canonical
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name www.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
return 301 https://example.com$request_uri;
}
# 3) HTTPS canonical: serve
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Optional: keep this minimal during incidents
access_log /var/log/nginx/example.access.log;
error_log /var/log/nginx/example.error.log;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
EOF
cr0x@server:~$ sudo ln -sf /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example
cr0x@server:~$ sudo nginx -t && sudo systemctl reload nginx
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Why this works: no “if ($scheme …)” inside the 443 canonical server. HTTPS requests will not re-redirect based on scheme.
Host redirects happen only for the non-canonical host.
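Once that's live, verify all four entry points converge; the loop below uses placeholder hostnames, and note that -I sends HEAD requests, which a few applications treat differently from GET.
cr0x@server:~$ for u in http://example.com/ http://www.example.com/ https://example.com/ https://www.example.com/; do curl -sS -o /dev/null -IL --max-redirs 5 -w "$u => %{http_code} after %{num_redirects} redirect(s)\n" "$u"; done
Anything that needs more than one or two redirects, or ends on something other than 200, means a layer is still fighting the canonical rule.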
Case B: TLS terminates upstream (load balancer/CDN), origin is HTTP only
If the load balancer connects to Nginx on port 80, your origin cannot rely on $scheme. It will always be http.
In that world, you should typically avoid HTTP→HTTPS redirects at the origin entirely and do them at the edge.
But sometimes you must enforce canonical at origin (compliance, multiple edges, messy corporate DNS). Then:
- Trust X-Forwarded-Proto only from the LB/CDN IP ranges.
- Use a map to compute the "real scheme" and redirect only when it's truly HTTP from the client perspective.
cr0x@server:~$ sudo tee /etc/nginx/conf.d/real-scheme.conf >/dev/null <<'EOF'
# Compute client-facing scheme safely-ish (still requires IP restrictions below).
map $http_x_forwarded_proto $client_scheme {
default $scheme;
"~*^https$" https;
"~*^http$" http;
}
EOF
cr0x@server:~$ sudo tee /etc/nginx/sites-available/example >/dev/null <<'EOF'
# Origin assumes it may be behind a trusted proxy providing X-Forwarded-Proto.
# Canonical: https://example.com
# IMPORTANT: Restrict who can send spoofed X-Forwarded-Proto
# Replace these with your LB/CDN addresses.
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 192.168.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
# Redirect only if the *client-facing* scheme is http
if ($client_scheme = http) {
return 301 https://example.com$request_uri;
}
# If it's already HTTPS at the edge, serve/proxy normally.
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $client_scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
EOF
cr0x@server:~$ sudo nginx -t && sudo systemctl reload nginx
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Warning: This “if” is acceptable because it’s a simple return at server level. Still, prefer edge-level redirects when possible.
Also: if you don’t restrict who can set forwarded headers, a client can spoof them and cause security issues.
Case C: Default server should not redirect (reduce blast radius)
Your default server exists to safely handle garbage Host headers. It should not “helpfully” canonicalize to your real domain.
That’s how you end up serving someone else’s domain through your site and creating weird redirect storms.
cr0x@server:~$ sudo tee /etc/nginx/sites-available/default >/dev/null <<'EOF'
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 444;
}
EOF
cr0x@server:~$ sudo nginx -t && sudo systemctl reload nginx
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Meaning: Nginx closes the connection without response. That’s fine for unknown hosts.
Decision: Keep it. It avoids accidental redirect behavior and makes your canonical rules apply only where intended.
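Port 443 deserves the same treatment: without a default TLS server, requests for unknown SNI names land in whichever 443 block Nginx treats as default, certificate, redirects and all. A sketch that refuses the handshake outright, assuming the nginx package is 1.19.4 or newer (Debian 13's is; check with nginx -v if unsure):
# HTTPS default: refuse TLS handshakes for hostnames you don't serve
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name _;
ssl_reject_handshake on;
}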
Behind a load balancer/CDN: stop trusting the wrong signal
Most redirect loops in 2025 are not “Nginx can’t redirect”. They’re “Nginx is told the wrong story about the request”.
If a proxy terminates TLS, your origin sees HTTP. If your origin insists on redirecting HTTP→HTTPS, you have a hamster wheel.
Recognize the classic proxy loop signature
In curl -IL, you see:
- Location: https://example.com/… repeated
- Status 301/302 over and over
- And your origin access logs show only HTTP requests, never 443
Decision tree: what to do
- If you can: handle HTTP→HTTPS redirect at the edge (LB/CDN). Disable it at origin. This is the cleanest.
- If you must do it at origin: trust a forwarded scheme header from the proxy, and only from the proxy.
- If you cannot trust headers: use PROXY protocol end-to-end (LB → Nginx) and configure Nginx accordingly.
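If you choose the PROXY protocol option, both sides must agree: the LB has to be configured to send it, and Nginx has to expect it on that listener, otherwise every plain connection to that port becomes a protocol error. A hedged origin-side sketch, assuming the LB lives in 10.0.0.0/8 and forwards to port 80:
# Origin accepts PROXY protocol from the LB; enable it on the LB as well.
server {
listen 80 proxy_protocol;
listen [::]:80 proxy_protocol;
server_name example.com;
set_real_ip_from 10.0.0.0/8;   # assumption: LB address range
real_ip_header proxy_protocol; # take the real client IP from the PROXY header
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Keep in mind the base PROXY protocol carries the client address and port, not the scheme; the client-facing scheme still has to come from a header you trust or from separate origin listeners for HTTP and HTTPS traffic.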
Check forwarded headers arriving at Nginx
Your redirect debug log from Task 9 already prints:
xfp="$http_x_forwarded_proto" and xfh="$http_x_forwarded_host".
If those are empty, your edge isn’t sending them, or Nginx isn’t receiving the request you think it is.
Why “just set proxy_set_header X-Forwarded-Proto $scheme” can be wrong
That line is often copied into the origin proxy config. It sets X-Forwarded-Proto to the origin’s scheme, not the client’s.
If TLS terminates upstream, $scheme is http. Congratulations, you just told your app “this is HTTP” forever.
If your app enforces HTTPS based on X-Forwarded-Proto, it will redirect constantly. If Nginx enforces HTTPS based on $scheme,
it will also redirect constantly. The loop is not mysterious. It’s obedient.
Security note: forwarded headers are user input
A client can send X-Forwarded-Proto: https directly to your origin if they can reach it.
If you trust that header from the open internet, you can break security assumptions and create weird redirect and cookie behavior.
Restrict by network, or ensure the origin is not directly exposed.
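A quick way to test that exposure: from a network that is not your LB/CDN, send the spoofed header yourself at the origin's public address (203.0.113.10 is a placeholder) and watch whether behavior changes.
cr0x@server:~$ curl -I http://203.0.113.10/ -H 'Host: example.com' -H 'X-Forwarded-Proto: https'
If that request skips the HTTPS redirect or otherwise gets treated as proxy traffic, the origin believes strangers; tighten network access or only honor the header when the connection arrives from your proxy ranges.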
Three corporate mini-stories (realistic, painful, useful)
Mini-story 1: The incident caused by a wrong assumption
A mid-size company moved a customer portal from an on-prem pair of Nginx boxes to a managed load balancer in front of a new Debian fleet.
The migration plan had one sentence that doomed them: “TLS stays the same.” Everyone nodded because it sounded reassuring.
On-prem, Nginx terminated TLS. In the new setup, the load balancer terminated TLS and spoke HTTP to the origin because it was “simpler”
and the team wanted to avoid cert distribution on the instances. The origin Nginx kept the old rule: return 301 https://$host$request_uri;
in the port 80 server.
The first symptom wasn’t an outage in monitors—it was an explosion of support tickets: “Login keeps refreshing.”
Some users could load the homepage (cached). Others couldn’t authenticate because the redirect loop prevented session cookie establishment.
SREs saw 301s in the logs but assumed “redirects are normal; browsers handle them.”
The fix was simple and slightly humiliating: remove HTTP→HTTPS redirect at origin, implement it at the load balancer,
and add a canonical host redirect only at one layer. The postmortem’s best line was the most boring one:
“We assumed $scheme reflected the client scheme.” It didn’t. It never did. It reflected the hop into Nginx.
Mini-story 2: The optimization that backfired
Another place, different industry. The team decided to “optimize” by consolidating server blocks.
One mega server block would handle HTTP and HTTPS, www and non-www, multiple environments,
and would compute the redirect target using variables to keep the config “DRY”.
It started innocently: a map for canonical host, a variable for scheme, and a couple of ifs.
Then came a feature flag for “maintenance mode” and a rewrite for legacy paths.
Someone added return 301 $canonical_scheme://$canonical_host$request_uri; inside a location that matched too broadly.
In staging, it “worked” because staging used only one host and no CDN. In production, behind a proxy,
the canonical scheme variable evaluated to https based on X-Forwarded-Proto—except that one path didn’t have the header
because a WAF rule stripped it for certain user agents.
The result: a redirect loop that only impacted a subset of clients and only on the checkout path. The dashboard looked green.
Revenue looked less green. The optimization saved maybe 40 lines of config and cost days of incident response.
They rolled back to separate server blocks and accepted duplication as a service reliability feature.
Mini-story 3: The boring but correct practice that saved the day
A financial services team had a rule: every Nginx change that touches redirects must include a recorded curl -IL chain
for four URLs: http/non-www, http/www, https/non-www, https/www. The output was pasted into the change request.
People grumbled. It felt bureaucratic.
During a routine certificate renewal window, a new engineer re-enabled the default site accidentally.
The default server on port 80 redirected to the canonical host. Meanwhile, the application enforced a different canonical host
based on its config. That combination created a loop only for requests hitting the default server (i.e., any unrecognized Host).
The change request reviewer caught it because the required curl chains suddenly showed a redirect from an unexpected hostname into production.
The reviewer asked a simple question: “Why is the default_server redirecting anywhere at all?” They fixed it by making the default server return 444.
No incident. No late-night rollback. Just a dull process doing its job. That’s the kind of boring you should learn to love.
Common mistakes: symptoms → root cause → fix
1) Symptom: HTTP→HTTPS loop only when behind CDN/LB
Root cause: TLS terminates at the CDN/LB, origin sees HTTP, origin redirects to HTTPS, CDN repeats.
Fix: Move redirect to the edge, or trust X-Forwarded-Proto from the proxy and base redirect on that.
2) Symptom: https://example.com redirects to itself (same URL)
Root cause: Unconditional return 301 https://example.com$request_uri; inside the 443 server (or mis-scoped location).
Fix: Remove the redirect from the canonical HTTPS server. Redirect only from HTTP server or non-canonical host server.
3) Symptom: Alternating between www and non-www
Root cause: Nginx and application disagree about canonical host (or CDN has its own host redirect).
Fix: Pick one owner of host canonicalization. Disable the other rules. Verify with curl -IL from multiple vantage points.
4) Symptom: Works for some users, loops for others
Root cause: Split behavior via multiple edge POPs, A/B routing, WAF stripping headers, or only certain paths triggering app redirect logic.
Fix: Add logging for $scheme, $host, $sent_http_location, and forwarded headers. Compare failing vs working requests.
5) Symptom: Only your browser fails; curl works
Root cause: Cached 301 or HSTS in browser, or stale cookies that trigger app redirect flows.
Fix: Use curl with -IL. Clear HSTS for the host or test from a clean profile/device. Consider temporarily removing HSTS until stable.
6) Symptom: Redirect loop started after enabling “default” site
Root cause: default_server catches requests and applies a redirect intended for a specific host.
Fix: Make default server return 444/404. Ensure real vhosts have exact server_name and aren’t competing.
7) Symptom: OAuth login breaks with redirect_uri mismatch
Root cause: Canonical host/scheme changes mid-flow; redirects rewrite scheme/host and provider rejects it.
Fix: Enforce canonicalization before auth endpoints, and ensure the app sees correct scheme/host via forwarded headers.
8) Symptom: Websocket endpoints fail while normal pages work
Root cause: Redirect or scheme enforcement applied to /ws paths; upstream expects upgrade and gets 301.
Fix: Exempt websocket locations from redirects; ensure canonicalization happens at server level, not inside websocket locations.
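For the websocket case specifically, a dedicated location that only proxies the Upgrade handshake (no return, no rewrite) keeps a 301 from landing where the client expected a 101; the /ws/ path and upstream port here are placeholders.
# Websocket endpoint: proxy the upgrade, never redirect inside this location
location /ws/ {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300s;   # keep idle sockets open longer than the 60s default
}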
Checklists / step-by-step plan
Step-by-step plan to fix the loop without guessing
1) Capture the redirect chain. Run curl from a clean environment:
cr0x@server:~$ curl -IL http://example.com/
HTTP/1.1 301 Moved Permanently
Location: https://example.com/
HTTP/2 200
Decision: If the chain alternates scheme/host, you know what you're fighting.
2) Decide canonical host + scheme in writing. Example: "canonical is https://example.com".
Decision: Anything else redirects to that, once.
3) Determine TLS termination.
cr0x@server:~$ sudo ss -lntp | egrep ':(80|443)\s'
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1642,fd=6))
LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=1642,fd=7))
Decision: If Nginx doesn't listen on 443, stop writing $scheme-based redirects at origin.
4) Remove duplicate redirect owners. Check app config for "force SSL" / "canonical host" settings and disable or align them.
Decision: One owner. Not two.
5) Make the default server harmless. Return 444 or a minimal 404.
Decision: Reduce blast radius of misrouted Host headers.
6) Use separate server blocks for redirects: an HTTP redirect block, an HTTPS canonical serve block, and optionally a non-canonical HTTPS redirect block.
Decision: Prefer clarity over cleverness.
7) Add temporary redirect debug logging if uncertain. Capture scheme/host/Location in logs.
Decision: Don't argue with opinions; argue with log lines.
8) Reload and re-test with curl. Always validate config and reload gracefully.
cr0x@server:~$ sudo nginx -t && sudo systemctl reload nginx
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Decision: If the loop persists, it's upstream (CDN/app), not Nginx syntax.
Operational checklist (keep this for future you)
- For any redirect change, record curl chains for 4 entry points: http/https × www/non-www.
- Confirm who terminates TLS and where redirects are enforced.
- Ensure default_server does not redirect to a real domain.
- Behind proxies: verify X-Forwarded-Proto presence and trust boundaries.
- Don't deploy both CDN "force HTTPS" and origin "force HTTPS" unless you fully understand the hop behavior.
- Be cautious with HSTS; don’t enable it until redirects are stable and verified.
FAQ
1) Why does my browser say “Too many redirects” but curl sometimes works?
Browsers cache 301s and enforce HSTS. Curl usually doesn’t cache across runs.
If you previously served HSTS, your browser might force HTTPS and never hit your HTTP redirect logic the way you expect.
2) Should I use 301 or 302 for canonical redirects?
For stable canonicalization (www→non-www, http→https), use 301 or 308. If you’re actively experimenting, use 302/307 to avoid sticky caching.
In production, be deliberate: 301 can stick around longer than your patience.
3) Is it okay to use “if” in Nginx for redirects?
A simple if that immediately returns at the server level is usually fine.
The bad reputation comes from complex rewrites and nested logic. Prefer separate server blocks whenever you can.
4) What’s the fastest way to see the redirect chain?
Use curl -IL. Add --max-redirs 20 if needed. You’re looking for what changes each hop: scheme, host, or path.
5) I’m behind a CDN in “Flexible SSL”. Why does that cause loops?
Flexible SSL often means: browser↔CDN is HTTPS, CDN↔origin is HTTP.
Your origin sees HTTP and redirects to HTTPS. The CDN keeps using HTTP to origin. Loop.
Fix by using full TLS to origin or moving redirects to the CDN and disabling them on origin.
6) Can the application framework cause the loop even if Nginx config looks right?
Absolutely. Many frameworks redirect based on perceived scheme/host.
If the app sees X-Forwarded-Proto: http (or none), it might redirect to HTTPS. If Nginx also redirects, or the host differs, you loop.
7) Why is leaving the default site enabled risky?
Because default_server catches unmatched hosts. If it redirects to your canonical domain,
you can accidentally serve or redirect traffic for unexpected Host headers, including typos and hostile probes.
Make the default server return 444/404 instead.
8) How do I know whether the CDN/LB is sending X-Forwarded-Proto?
Log it in Nginx ($http_x_forwarded_proto) and check. If it’s empty, either it’s not being sent, it’s stripped, or you’re not hitting the path you think.
Don’t assume; verify in logs.
9) Should I enable HSTS while fixing redirects?
No. Get redirects correct first. Then enable HSTS once you’re confident HTTPS is consistently served on the canonical host.
HSTS is a commitment device—great when you’re right, annoying when you’re wrong.
10) I fixed Nginx but users still report loops. What now?
Check the edge (CDN/LB) redirect rules, cache settings, and whether multiple origins exist.
Then check app-level redirects. Finally, consider browser caching/HSTS. The loop may be outside the box you just edited.
Conclusion: next steps that actually reduce incident time
Redirect loops are simple in theory and annoying in practice because multiple layers feel entitled to “fix” the URL.
Your job is to make one layer the adult in the room and fire the rest from that responsibility.
Practical next steps:
- Run curl -IL for http/https × www/non-www and save the chain in your incident notes.
- Decide canonical URL (host + scheme) and choose the single owner for redirects.
- If behind a proxy/CDN, stop using $scheme as "client scheme" unless TLS terminates at Nginx.
- Make default_server return 444/404, not redirects.
- Reload Nginx safely (nginx -t, then systemctl reload) and re-test the chain until it converges in 1–2 hops.
- Only after stability: consider enabling HSTS, with intention and a rollback plan.
Once you’ve done that, “Too many redirects” becomes what it should be: a quick fix, not a personality trait.