Debian 13: Fix “Too many redirects” in Nginx by correcting canonical and HTTPS loops (case #71)


You changed “one small redirect” and suddenly every browser screams “Too many redirects.” The site looks like it’s trying to teleport between HTTP and HTTPS, or between www and apex, until the browser gives up. Nginx logs look innocent. Your load balancer swears it’s not involved. And everyone wants it fixed before the next meeting.

This is case #71: the canonical/HTTPS loop. It’s boring, common, and perfectly avoidable—if you stop guessing and start validating what the client actually sees, what Nginx thinks it sees, and what your upstream app is doing.

What “Too many redirects” really means in Nginx terms

Browsers follow HTTP redirects automatically. They’ll follow a lot of them—until they won’t. When you see “Too many redirects,” it’s not a moral judgment. It’s a loop: request A causes redirect to B; request B causes redirect to A (or to C, which eventually comes back to A). The browser stops to avoid infinite ping-pong.

In Nginx land, a loop usually comes from one of these patterns:

  • Scheme loop: HTTP → HTTPS → HTTP (often proxy header confusion).
  • Canonical host loop: example.com → www.example.com → example.com (two redirect rules disagree).
  • Path normalization loop: /app → /app/ → /app (slash handling in Nginx vs app vs upstream).
  • Port loop: redirect includes explicit :443 or :80 and something “fixes” it back.
  • Mixed layers: CDN/LB redirects plus Nginx redirects plus application redirects.

Opinionated guidance: if you have both Nginx and the application doing canonicalization, pick one. Dueling redirects are like dueling “source of truth” spreadsheets: everyone loses.

One operational reality: redirect loops are diagnosable in minutes with a terminal and discipline. The fastest fix is not “try another rewrite.” The fastest fix is to collect the redirect chain, identify the toggling attribute (scheme/host/path), and then remove one of the two competing rules.
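The "identify the toggling attribute" step can be mechanized. Below is a minimal sketch assuming POSIX sh and awk; `classify_loop` is an illustrative helper name, not a standard tool. Feed it the Location values you collected, one URL per hop:

```shell
# Sketch: classify a redirect chain. Reads one URL per hop on stdin
# (scheme://host/path, i.e. the Location values you recorded with curl)
# and names which attribute flips.
classify_loop() {
    awk -F/ '
    {
        scheme = $1; sub(/:$/, "", scheme)   # "https:" -> "https"
        host = $3
        schemes[scheme] = 1; hosts[host] = 1
        seen[$0]++
        if (seen[$0] > 1) repeat = 1         # same URL twice = a loop
    }
    END {
        if (!repeat) { print "no-loop"; exit }
        s = 0; for (k in schemes) s++
        h = 0; for (k in hosts) h++
        if (s > 1)      print "scheme-loop"
        else if (h > 1) print "host-loop"
        else            print "path-loop"
    }'
}

# Example: a scheme fight.
# printf 'http://example.com/\nhttps://example.com/\nhttp://example.com/\n' \
#   | classify_loop   # -> scheme-loop
```

A "scheme-loop" verdict sends you straight to proxy-header logic; "host-loop" sends you to the canonical host rules.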

Facts and context that make this problem less mysterious

  • HTTP redirects are ancient. The 301/302 status codes go back to early HTTP specs; “Moved Permanently” predates most of today’s web stacks.
  • 301 became a cache weapon. Browsers and intermediaries can cache 301 aggressively; debugging becomes “I fixed it but my laptop didn’t.”
  • 307/308 exist for a reason. 302 historically changed POST to GET in some clients; 307/308 preserve method semantics more consistently.
  • Nginx’s return is safer than rewrite. The old rewrite engine is powerful, but it’s easy to create loops or accidentally keep rewriting internally.
  • Canonical host redirects started as SEO hygiene. Search engines penalized duplicate content; ops inherited the pain when SEO and TLS got layered on.
  • Proxies changed what “HTTPS” means. If TLS terminates at a load balancer, Nginx sees plain HTTP unless you teach it about X-Forwarded-Proto.
  • HSTS raised the stakes. Once you enable HSTS, clients will try HTTPS no matter what you think you configured; broken HTTPS redirection becomes user-visible immediately.
  • CDNs love to “help.” Many CDNs can enforce HTTPS or rewrite hosts. That’s great until Nginx does it too.

Fast diagnosis playbook (first/second/third)

First: capture the redirect chain exactly as the client sees it

  1. Use curl -I -L and record Location, status codes, and whether the host/scheme changes each hop.
  2. Check whether the loop alternates between HTTP/HTTPS or between hosts (apex vs www).
  3. Confirm whether the redirect is coming from Nginx or something upstream by looking at response headers (Server, custom headers, or a tracing header you add).
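To make these three steps repeatable, save the raw headers once and summarize every hop from the dump. A sketch assuming POSIX awk; `chain` is an illustrative helper name, and the header file is a hypothetical dump produced with something like `curl -sS -D headers.txt -o /dev/null -I -L --max-redirs 5 http://example.com/`:

```shell
# Sketch: summarize each hop (status code -> Location) from a curl header dump.
chain() {
    awk '{ sub(/\r$/, "") }                      # curl emits CRLF line endings
         /^HTTP\// { code = $2 }                 # remember the status per hop
         tolower($1) == "location:" { print code " -> " $2; code = "" }
         END { if (code != "") print code " (final)" }'
}
# Usage (hypothetical): chain < headers.txt
```

One line per hop makes it obvious whether scheme or host changes between hops.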

Second: verify what Nginx thinks the request is

  1. Inspect Nginx config for return 301, rewrite, if blocks, and duplicated server blocks for the same name.
  2. Look at access logs with $scheme, $host, and $http_x_forwarded_proto (temporarily add a debug log format if needed).
  3. If behind a proxy/CDN, confirm whether you’re trusting forwarded headers only from known IPs.

Third: isolate layers

  1. Bypass CDN/LB if possible (direct to origin IP with a Host header).
  2. Temporarily disable the application’s canonical redirects or set a base URL explicitly to match your Nginx policy.
  3. Re-test and stop only when the chain is one redirect at most (ideally zero for already-canonical requests).

Paraphrased idea: “Hope is not a strategy,” commonly attributed to reliability and operations leadership. Treat redirects the same way: validate, don’t vibe-check.

Redirect anatomy: canonical host, scheme, and path

The canonical host decision (pick one and enforce it once)

Canonical host means you decide whether the site “lives” at example.com or www.example.com. Either is fine; indecision is not. Enforce it at one layer—preferably at the edge (Nginx) because it’s cheap and consistent.

Two competing canonical rules are the classic loop:

  • Nginx forces www.
  • The app forces apex (or a CDN does).
  • Result: bounce forever.

The canonical scheme decision (HTTPS, and be honest about it)

If you terminate TLS on Nginx, $scheme is real. If TLS terminates before Nginx, $scheme lies (it will be http), unless you set and trust X-Forwarded-Proto or the standardized Forwarded header. Many “HTTPS redirect” snippets on the internet assume Nginx sees TLS. That assumption is how case #71 happens.

The canonical path decision (slashes and index files)

Path loops happen when multiple components “normalize” differently. Nginx might redirect /app to /app/ due to try_files or autoindex settings, while the app redirects back to /app because it thinks routes should not end with a slash. Pick the canonical policy and implement it in one place.

Joke #1 (short, relevant): Redirect loops are just load tests you didn’t budget for.

Practical tasks: commands, outputs, and the decision you make

These are not “run this and hope.” Each task includes what to look for and what decision it drives. Run them on Debian 13, but the logic is portable.

Task 1: Reproduce with a full redirect trace

cr0x@server:~$ curl -sS -D- -o /dev/null -L -I http://example.com/
HTTP/1.1 301 Moved Permanently
Server: nginx
Location: https://example.com/
HTTP/2 301
server: nginx
location: http://example.com/
HTTP/1.1 301 Moved Permanently
Server: nginx
Location: https://example.com/

What it means: Scheme flips HTTPS → HTTP → HTTPS. That’s a loop. If you see alternating hosts instead, you have a canonical host fight.

Decision: Stop tweaking paths. Go straight to scheme/canonical logic and check proxy headers.

Task 2: Show only the Location headers (quick loop signature)

cr0x@server:~$ curl -sS -I http://example.com/ | sed -n 's/^Location: //p'
https://example.com/

What it means: First hop is HTTP → HTTPS. Fine by itself.

Decision: Now test the HTTPS endpoint to see who sends you back to HTTP.

Task 3: Inspect HTTPS response without following redirects

cr0x@server:~$ curl -sS -I https://example.com/ | sed -n '1p;/^Location:/p'
HTTP/2 301
location: http://example.com/

What it means: Something serving HTTPS is redirecting to HTTP. That “something” could be Nginx, the app, or a proxy in front.

Decision: Identify which layer emitted this response (headers, logs, and bypass tests).

Task 4: Confirm which Nginx is answering (header fingerprint)

cr0x@server:~$ curl -sS -I https://example.com/ | grep -iE '^(server:|via:|x-cache:|x-served-by:|cf-|x-amz-)'
server: nginx

What it means: Not definitive, but if you see CDN-specific headers, you’re not talking directly to your Nginx.

Decision: If you suspect an intermediary, bypass it next.

Task 5: Bypass DNS/CDN and hit origin IP with a Host header

cr0x@server:~$ curl -sS -I --resolve example.com:443:203.0.113.10 https://example.com/ | sed -n '1p;/^location:/Ip'
HTTP/2 301
location: http://example.com/

What it means: Even when hitting origin, HTTPS redirects to HTTP. That’s likely Nginx or the upstream app behind Nginx.

Decision: Check Nginx config and upstream application redirects.

Task 6: Dump active Nginx configuration (no guessing which file is included)

cr0x@server:~$ sudo nginx -T 2>/dev/null | sed -n '1,80p'
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
...

What it means: You have the rendered config including include files. This is the ground truth.

Decision: Search this output for redirect rules and duplicate server blocks.

Task 7: Find redirect directives and suspicious “if” conditions

cr0x@server:~$ sudo nginx -T 2>/dev/null | grep -nE 'return 30[12]|rewrite |if \(|server_name|listen 80|listen 443'
412:    listen 80;
417:    server_name example.com www.example.com;
420:    if ($scheme = http) { return 301 https://$host$request_uri; }
612:    listen 443 ssl http2;
618:    if ($scheme = https) { return 301 http://$host$request_uri; }

What it means: You literally have opposite redirects: HTTP→HTTPS in one server, HTTPS→HTTP in the other. That’s your loop.

Decision: Remove the incorrect HTTPS→HTTP redirect. Replace with a single canonical policy.

Task 8: Verify where requests land (which server block) using access logs

cr0x@server:~$ sudo tail -n 3 /var/log/nginx/access.log
203.0.113.55 - - [30/Dec/2025:11:32:18 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/8.5.0"
203.0.113.55 - - [30/Dec/2025:11:32:18 +0000] "GET / HTTP/2.0" 301 169 "-" "curl/8.5.0"
203.0.113.55 - - [30/Dec/2025:11:32:19 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/8.5.0"

What it means: Same client, repeated 301s. You need more context: host, scheme, and forwarded proto.

Decision: Temporarily add a debug log format that prints the important variables.

Task 9: Add a temporary log_format to expose scheme/host/forwarded proto

cr0x@server:~$ sudo tee /etc/nginx/conf.d/zz-debug-logformat.conf >/dev/null <<'EOF'
log_format diag '$remote_addr host=$host scheme=$scheme '
               'xfp=$http_x_forwarded_proto uri=$request_uri '
               'status=$status loc=$sent_http_location';
access_log /var/log/nginx/access_diag.log diag;
EOF

What it means: You’ve created a dedicated access log for diagnosis without touching your main format.

Decision: Reload Nginx and run one curl request; then read the diag log.

Task 10: Reload Nginx safely and confirm config is valid

cr0x@server:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
cr0x@server:~$ sudo systemctl reload nginx

What it means: No syntax errors; reload applied.

Decision: Now you can trust the diag log to reflect current behavior.

Task 11: Generate one request and read the diagnostic log

cr0x@server:~$ curl -sS -I https://example.com/ >/dev/null
cr0x@server:~$ sudo tail -n 1 /var/log/nginx/access_diag.log
203.0.113.55 host=example.com scheme=https xfp= uri=/ status=301 loc=http://example.com/

What it means: Nginx sees scheme=https (so TLS is likely on Nginx), yet still returns a redirect to http://. That’s an explicit config rule, not proxy confusion.

Decision: Remove any HTTPS→HTTP redirect. If you need HTTP internally, keep it internal—don’t redirect clients down to HTTP.

Task 12: If behind a proxy, verify what forwarded proto looks like

cr0x@server:~$ curl -sS -I --resolve example.com:80:203.0.113.10 http://example.com/ -H 'X-Forwarded-Proto: https' | sed -n '1p;/^Location:/p'
HTTP/1.1 301 Moved Permanently
Location: https://example.com/

What it means: When you tell Nginx the original scheme was HTTPS, it chooses HTTPS. Good: your logic can be made proxy-aware.

Decision: Implement forwarded-proto handling correctly and securely (trust only known proxy IPs).

Task 13: Identify who is listening on ports 80/443 (avoid shadow services)

cr0x@server:~$ sudo ss -ltnp | grep -E ':(80|443)\s'
LISTEN 0      511          0.0.0.0:80        0.0.0.0:*    users:(("nginx",pid=1234,fd=6))
LISTEN 0      511          0.0.0.0:443       0.0.0.0:*    users:(("nginx",pid=1234,fd=7))

What it means: Nginx is the only listener on 80/443. If you saw something else (Apache, a dev server), you’d be debugging the wrong process.

Decision: If ports are contested, fix that first. Redirect logic is irrelevant if the wrong daemon answers.

Task 14: Inspect the specific vhost files enabled on Debian

cr0x@server:~$ ls -l /etc/nginx/sites-enabled/
total 0
lrwxrwxrwx 1 root root 34 Dec 30 10:58 example.conf -> ../sites-available/example.conf

What it means: You have one enabled site. If you have multiple files with overlapping server_name, expect unpredictable matches.

Decision: Ensure exactly one canonical server block “owns” each hostname.
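One way to enforce that decision mechanically is to scan the rendered config for hostnames claimed by more than one server block. A sketch assuming awk, sort, and uniq; `dup_server_names` is an illustrative name:

```shell
# Sketch: print server_name values that appear more than once in nginx -T output.
dup_server_names() {
    awk '$1 == "server_name" {
             for (i = 2; i <= NF; i++) { n = $i; sub(/;$/, "", n); print n }
         }' | sort | uniq -d
}
# Usage (run on the server): sudo nginx -T 2>/dev/null | dup_server_names
```

Any output means two server blocks are competing for the same hostname, which is exactly the precondition for inconsistent redirects.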

Task 15: Test Nginx’s server selection with explicit Host headers

cr0x@server:~$ curl -sS -I http://203.0.113.10/ -H 'Host: example.com' | sed -n '1p;/^Location:/p'
HTTP/1.1 301 Moved Permanently
Location: https://example.com/
cr0x@server:~$ curl -sS -I http://203.0.113.10/ -H 'Host: www.example.com' | sed -n '1p;/^Location:/p'
HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/

What it means: Both hosts redirect to themselves on HTTPS. If you intended to canonicalize to apex only, this is wrong (but not necessarily a loop).

Decision: Decide canonical host and enforce it explicitly (one redirect, not two parallel worlds).

Task 16: Validate that the app is not emitting its own scheme/host redirects

cr0x@server:~$ curl -sS -I http://127.0.0.1:8080/ | sed -n '1p;/^Location:/p;/^Server:/p'
HTTP/1.1 301 Moved Permanently
Server: gunicorn
Location: https://example.com/

What it means: Your upstream app is redirecting to HTTPS itself. That can be fine, but only if it agrees with Nginx. If Nginx also redirects (or worse, redirects opposite), loops happen.

Decision: Choose: canonical redirects in Nginx or in the app. Then disable the other.

Fix patterns that actually stick (with correct Nginx config)

Pattern A: TLS terminates on Nginx (simplest, most reliable)

This is the cleanest setup: clients connect to Nginx on 443, and Nginx knows the true scheme. Your redirects can use $scheme safely because it reflects reality.

Rules:

  • Port 80 server: redirect everything to the canonical HTTPS host.
  • Port 443 server: serve content; optionally redirect non-canonical hosts to the canonical host (still HTTPS).
  • No “if scheme is https then redirect to http.” Ever. If you need to support plain HTTP for a private network, do it on a different hostname or listener, not by downgrading public users.

cr0x@server:~$ sudo tee /etc/nginx/sites-available/example.conf >/dev/null <<'EOF'
# Canonical policy:
# - canonical host: example.com (no www)
# - canonical scheme: https
# - all HTTP requests redirect to https://example.com$request_uri
# - all HTTPS requests to www redirect to https://example.com$request_uri

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Your normal site config:
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name www.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    return 301 https://example.com$request_uri;
}
EOF

Why this works: the canonical decision happens exactly once for each non-canonical entry point. You never redirect from canonical → non-canonical.

Pattern B: TLS terminates upstream (load balancer/CDN), Nginx only sees HTTP

This is where people trip. Nginx sees $scheme=http, because the LB connects to Nginx over plain HTTP. If you write “if scheme is http redirect to https,” you just forced the LB-to-origin hop to redirect too. Depending on how the LB handles it, you can create a loop or at least unnecessary redirects.

What you actually want is: “If the client used HTTP, redirect to HTTPS.” That “client scheme” must come from a trusted header.

Do it like an adult: trust X-Forwarded-Proto only from your proxy IP ranges, and use a variable that represents the original scheme.

cr0x@server:~$ sudo tee /etc/nginx/conf.d/forwarded-proto.conf >/dev/null <<'EOF'
# Trust X-Forwarded-Proto only from known proxies/LBs.
# Replace these subnets with your actual proxy ranges.

# Mark connections that arrive from a trusted proxy. $realip_remote_addr is
# the address of the TCP peer even after real_ip rewriting below.
geo $realip_remote_addr $from_trusted_proxy {
    default        0;
    10.0.0.0/8     1;
    192.168.0.0/16 1;
}

# Restore real client IPs for logging and rate limiting.
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 192.168.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;

# Derive a client-facing scheme. The header is honored only when the
# connection came from a trusted proxy; otherwise fall back to $scheme.
map "$from_trusted_proxy:$http_x_forwarded_proto" $client_scheme {
    default    $scheme;
    "1:https"  https;
    "1:http"   http;
}
EOF

Now use $client_scheme in redirect logic instead of $scheme:

cr0x@server:~$ sudo tee /etc/nginx/sites-available/example.conf >/dev/null <<'EOF'
# Canonical policy behind a TLS-terminating proxy:
# - canonical host: example.com
# - canonical scheme: https (as seen by the client)
# - Nginx listens on 80 only (proxy-to-origin), but still enforces canonical policy
#   using X-Forwarded-Proto from trusted proxies.

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Redirect non-https clients to https canonical.
    if ($client_scheme != "https") {
        return 301 https://example.com$request_uri;
    }

    # Redirect www to apex (still https).
    if ($host = "www.example.com") {
        return 301 https://example.com$request_uri;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $client_scheme;
    }
}
EOF

Yes, I used if inside server. That’s one of the few places where it’s fine. Avoid if inside location blocks for rewrite gymnastics; server-level redirects are straightforward and predictable.

Hard rule: if you can push canonical redirects to the edge (LB/CDN), do it there and disable them in Nginx. But don’t split responsibility across layers unless you enjoy emergency calls.

Pattern C: Path normalization—stop the slash ping-pong

If your loop toggles trailing slashes, you need to unify policy. Nginx can enforce it, but if your app also enforces it, pick one.

A common safe choice is: “directories end with slash, files don’t.” Nginx already has opinions here. If your app router hates trailing slashes, turn off Nginx’s automatic directory redirects by aligning try_files and routes, or let the app own it fully.
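If Nginx owns the path policy, make the redirect explicit and visible rather than relying on implicit directory redirects. A sketch, assuming a “no trailing slash” policy and the proxied app from the earlier examples (the regex location and port are illustrative):

```nginx
# Illustrative only: enforce "no trailing slash" for app routes in one place.
server {
    # ... listen / server_name / ssl as in your canonical HTTPS server ...

    # Strip exactly one trailing slash from non-root paths, preserving queries.
    location ~ ^(.+)/$ {
        return 301 $1$is_args$args;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

With this in place, the app must not redirect /app back to /app/, or the ping-pong resumes; disable its normalization as described above.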

Behind a proxy/CDN: trusting headers without lying to yourself

Forwarded headers are both necessary and dangerous. Necessary because your origin server doesn’t see the client’s TLS. Dangerous because any random client can send X-Forwarded-Proto: https and trick naive configs into generating HTTPS links, marking secure cookies, or skipping redirects.

Make “trust” explicit

Trust should be conditional on source IP. On Debian 13, Nginx is typically packaged with sane defaults, but it won’t guess your network boundaries. You must set set_real_ip_from to your actual proxy ranges.

Know which header your proxy emits

Many systems use X-Forwarded-Proto. Some use the standardized Forwarded header. Some set a vendor-specific header. The point is not the name; it’s consistency across the chain.

Avoid the “absolute redirect” surprise

Nginx can generate absolute redirects. If you accidentally leak internal hostnames (like origin.internal) into the Location header, your users will learn your network map for free. That’s not a prize you want to win.

If you see redirects going to the wrong hostname, you probably used $host when you meant a fixed canonical domain, or your proxy is rewriting Host unexpectedly.
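Three ngx_http_core_module directives control how Nginx constructs Location headers for the redirects it generates itself. A hedged sketch; whether each setting belongs in your config depends on your proxy layout:

```nginx
server {
    # How Nginx builds Location headers for its own redirects:
    absolute_redirect off;        # emit relative redirects where possible
    server_name_in_redirect off;  # use the request Host, not the server_name
    port_in_redirect off;         # don't append a non-default port
}
```

These only affect redirects Nginx generates (directory slashes and the like); a `return 301` with an explicit URL is unaffected.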

When it’s not Nginx: the application fighting you

Some frameworks will redirect to a “base URL,” force HTTPS, or enforce trailing slash rules. If the app runs behind a proxy and doesn’t understand forwarded headers, it may think every request is HTTP and “upgrade” it—while Nginx (or the proxy) downgrades it back, or vice versa.

Three tactical moves that fix real production systems:

  • Set the application’s external URL explicitly (base URL / public URL). Many systems have a single config for this, and it prevents host/scheme confusion.
  • Ensure the app honors forwarded headers only from trusted proxy IPs (same idea as Nginx).
  • Decide who owns redirects. If the app needs them for routing, let the app do path redirects and let Nginx handle only scheme/host canonicalization—or the other way around. Just don’t duplicate.

Joke #2 (short, relevant): If two components both “enforce canonical URLs,” they’ll eventually agree—right after the outage ends.

Three corporate mini-stories from the redirect trenches

Mini-story 1: An incident caused by a wrong assumption

The company had a tidy setup: a managed load balancer terminated TLS, then forwarded traffic to Nginx on port 80 in a private network. Someone added a “simple” rule in Nginx: redirect HTTP to HTTPS. They tested it by curling the origin directly over HTTP and saw the expected 301. Shipped.

In production, the load balancer connected to Nginx over HTTP (as designed). Nginx saw $scheme=http for every request. So it redirected every request to HTTPS. The load balancer dutifully followed redirects in its health checks and started failing them because it couldn’t negotiate TLS to the origin that didn’t speak TLS. The pool drained. The site went dark, even though “the redirect was correct.”

The fix was not magical: remove scheme redirects from the origin, and enforce HTTPS at the load balancer. If origin enforcement is required, use X-Forwarded-Proto and trust it only from the balancer subnet. The real lesson: don’t write redirects based on what the origin sees if the origin is not the TLS endpoint.

Afterwards, they added a deployment check that runs curl -I against both the public endpoint and the origin bypass path, and compares the redirect chain. The next time someone tried to “upgrade” redirects, the pipeline caught the mismatch before customers did.

Mini-story 2: An optimization that backfired

A different org tried to reduce redirect hops for performance. Their goal: “no redirects ever.” They removed the port 80 redirect and instead configured the CDN to do the canonicalization. On paper, that’s cleaner: fewer round trips, less origin load, and consistent behavior globally.

Then they rolled out a second CDN rule: normalize www to apex. Meanwhile, the application still enforced www because a legacy integration expected it. This didn’t show up in basic monitoring because the site still “loaded” for some users—depending on cache state and which hostname they entered.

The worst part was the intermittency. The CDN cached 301 responses in some POPs. Some users saw the loop. Some didn’t. Internal teams “couldn’t reproduce,” which is corporate for “I tried once and got bored.”

They recovered by declaring one canonical policy and implementing it in exactly one place: the CDN. Then they disabled the application’s host enforcement and set the application base URL to the canonical domain. Performance improved and stayed improved, because the redirect policy stopped oscillating across layers.

Mini-story 3: A boring but correct practice that saved the day

A payments team had a rule: every edge behavior change required a “redirect chain snapshot” attached to the change request. It was dull. People complained. But it forced clarity: what happens for HTTP apex, HTTP www, HTTPS apex, HTTPS www, and one weird path with a query string.

During a routine Debian 13 upgrade, the Nginx config was refactored. A new engineer unintentionally duplicated a server_name across two enabled vhost files. Nginx didn’t refuse to start; it logged a “conflicting server name” warning that nobody read and picked one server block for some requests based on matching precedence. Redirects became inconsistent.

Because they had the snapshot habit, the reviewer spotted that HTTPS www was redirecting to HTTP apex—clearly wrong—before the change reached production. No heroics, no incident bridge, no “we’ll do a postmortem.” Just a rejected change and a fix.

The practice didn’t feel innovative. It was. The best ops work often looks like paperwork until the day it prevents a fire.

Common mistakes (symptom → root cause → fix)

1) Symptom: HTTP ↔ HTTPS ping-pong

Root cause: TLS terminates at a proxy, but origin redirects based on $scheme or an untrusted forwarded header. Or there’s an explicit HTTPS→HTTP redirect left over from an old migration.

Fix: Enforce scheme at the TLS endpoint. If Nginx must enforce it behind a proxy, use $http_x_forwarded_proto mapped to a $client_scheme, and trust it only from proxy IPs.

2) Symptom: www ↔ apex ping-pong

Root cause: Nginx canonicalizes to www, app canonicalizes to apex (or vice versa). Sometimes the CDN canonicalizes one way and the origin the other.

Fix: Pick one canonical host and enforce it in one layer. Disable the other layer’s host redirects or set its base URL to match.

3) Symptom: Only some users see the loop

Root cause: Cached 301s in browsers, CDNs, or corporate proxies. Or you have multiple origins/instances with different configs.

Fix: Purge CDN caches for redirect responses if applicable. Test with a clean client and curl. Verify config consistency across instances.

4) Symptom: Loop appears only on a specific path

Root cause: Trailing slash normalization differs between Nginx and the app for that route, or try_files triggers a directory redirect that the app rejects.

Fix: Define one path policy. Either let the app handle it and avoid Nginx path redirects, or implement explicit, consistent redirects in Nginx and disable app normalization.

5) Symptom: Redirects go to an internal hostname or wrong port

Root cause: Misused $host behind a proxy that rewrites host, or application generates absolute URLs based on internal listen address. Sometimes it’s proxy_redirect doing “helpful” rewriting.

Fix: Use a fixed canonical domain in return 301. Configure upstream to know its public URL. Audit proxy_redirect settings.

6) Symptom: Browser keeps looping even after you fixed config

Root cause: Cached 301 in browser, or HSTS forcing HTTPS and exposing another redirect issue, or you didn’t reload Nginx (happens more than anyone admits).

Fix: Verify with curl from a clean environment. Confirm reload and active config with nginx -T. If HSTS is enabled, ensure HTTPS endpoint is correct before touching HTTP behavior.

Checklists / step-by-step plan

Step-by-step: fix canonical + HTTPS loops the safe way

  1. Write down your canonical policy in one sentence: “Canonical is https://example.com (no www).” If you can’t write it, you can’t enforce it.
  2. Collect redirect chains for four entry points: HTTP apex, HTTP www, HTTPS apex, HTTPS www. Record status codes and Location targets.
  3. Bypass intermediaries (CDN/LB) to see if origin itself loops.
  4. Render the active Nginx config with nginx -T and search for all redirect directives.
  5. Eliminate contradictions: remove any rule that redirects canonical → non-canonical.
  6. Decide where scheme enforcement lives: if TLS terminates at the LB/CDN, enforce HTTPS there, not at origin—unless you implement trusted forwarded-proto logic.
  7. Decide where host enforcement lives: do it at the edge (Nginx or CDN), and disable app host redirects or configure base URL accordingly.
  8. Reload safely (nginx -t then systemctl reload), re-test with curl, then test with a browser.
  9. Remove temporary debug logging after you’ve confirmed stability. Diagnostics are great; permanent noise is not.
  10. Add a regression test: a scripted check that verifies those four entry points have at most one redirect and land on the canonical URL.
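The regression check in step 10 can be a few lines of shell. This sketch separates the pass/fail rule (testable offline) from the curl call that gathers the data; `verdict` and the canonical URL are assumptions to adapt:

```shell
# Sketch: pass/fail rule for one entry point.
# A result is OK if it landed on the canonical URL in at most one redirect.
CANON="https://example.com/"

verdict() {                       # verdict <hop_count> <final_url>
    hops=$1; final=$2
    if [ "$final" = "$CANON" ] && [ "$hops" -le 1 ]; then
        echo ok
    else
        echo fail
    fi
}

# In real use, gather hops/final per entry point with curl, e.g.:
#   curl -sS -o /dev/null -L --max-redirs 5 \
#        -w '%{num_redirects} %{url_effective}\n' http://example.com/
# and feed each "<hops> <final>" line to verdict.
```

Run it against all four entry points (HTTP/HTTPS × apex/www) in CI; a single "fail" blocks the deploy before customers find the loop for you.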

Operational checklist: before you declare victory

  • Canonical URL returns 200 (or your expected app status), not 301.
  • Non-canonical URLs return exactly one redirect to canonical.
  • Redirect target never downgrades HTTPS to HTTP.
  • No redirect points to an internal hostname, private IP, or unexpected port.
  • Logs confirm correct Host and scheme variables.
  • Health checks (LB/CDN) do not follow redirects that break origin reachability.

FAQ

1) Why does the browser say “Too many redirects” but Nginx error log is quiet?

Because redirects are not errors to Nginx. A 301 is a normal response. The browser is the one detecting the loop after following repeated redirects.

2) Should I use rewrite or return 301 in Nginx?

Use return 301 for canonical host/scheme redirects. It’s clearer and less loop-prone. Use rewrite only when you truly need regex-based URI manipulation.

3) My TLS terminates at the load balancer. Is if ($scheme = http) always wrong?

It’s wrong for deciding what the client used, because $scheme reflects the LB-to-origin hop. Use a trusted forwarded proto header and map it to a variable like $client_scheme.

4) Can I rely on X-Forwarded-Proto from the internet?

No. Any client can send it. Trust it only from known proxy IP ranges. Otherwise you’re letting attackers influence security-sensitive behavior.

5) Why do I see different behavior between curl and the browser?

Browsers cache 301s, may enforce HSTS, and sometimes have cached DNS or service worker behavior. Curl is usually “fresh” unless you script caching. If the browser differs, test in a private window and confirm HSTS status.

6) What status code should I use for HTTP → HTTPS?

For typical sites, 301 is fine. If you’re redirecting POST requests and care about method preservation, consider 308. Consistency matters more than fashion.

7) I fixed the loop, but now I get redirected to the wrong hostname. Why?

Likely because you used $host in the redirect and the incoming Host header isn’t what you thought (proxy rewrites, alternate domains). For canonicalization, prefer a fixed domain in the return.

8) How do I stop trailing slash loops?

Pick one policy and implement it once. If your app wants “no trailing slash,” disable Nginx behaviors that add it implicitly and configure the app to generate consistent links. If Nginx owns it, make the redirect explicit and ensure the app doesn’t redirect back.

9) Does Debian 13 change anything about Nginx redirects?

Not fundamentally. The failure modes are the same. What changes in practice is that upgrades often reorder included configs or enable new site files, increasing the chance of duplicate server blocks and conflicting redirects.

Conclusion: next steps that prevent repeat incidents

Redirect loops are not mysterious. They’re contradictory policies executed faithfully. Your job is to remove contradiction.

Next steps that pay off immediately:

  • Declare a canonical URL (scheme + host) and enforce it in one layer.
  • Make Nginx proxy-aware only when necessary, and only with trusted forwarded headers.
  • Add a redirect-chain regression check to deployments: four entry points in, one canonical endpoint out.
  • Keep redirects boring: use return, avoid clever rewrites, and delete old migration rules once they’ve served their purpose.

If you do this once, properly, you’ll stop seeing case #71 show up during upgrades, CDN changes, or that Friday afternoon “small SEO tweak” that somehow always lands in production.
