Ubuntu 24.04 Apache vs Nginx confusion: fix port binding and proxy loops cleanly (case #94)

Pain: your server “worked yesterday,” you installed “just one thing,” and now port 80 is “already in use,” TLS redirects are spinning, or Nginx is proxying to… itself. Nothing is on fire, but your uptime graph is sweating.

Ubuntu 24.04 makes it easy to install both Apache and Nginx, sometimes without you realizing it. The trick is not “restart everything until it works.” The trick is to decide who owns which port, then make your proxies and headers tell the truth.

Fast diagnosis playbook

When Apache and Nginx are both present, the fastest route is to answer three questions in order. Don’t touch configs until you can answer them with evidence.

1) Who is actually listening on 80 and 443?

If you don’t know what process has the sockets, you’re guessing. Guessing is how you create “temporary fixes” that survive for years.

  • Check listeners with ss and map PIDs to services.
  • If you see apache2 and nginx both trying to own :80, you already found the first fault line.

2) Where does the traffic go after the first accept()?

Confirm whether Nginx is reverse proxying to Apache (common), Apache is proxying to something else, or both are proxying in a loop.

  • Use curl -v to see Location: headers, status codes, and whether HTTPS detection is wrong.
  • Inspect active virtual hosts on both servers (apachectl -S, nginx -T).

3) Are redirects and “scheme” decisions based on reality?

Most proxy loops are not “network issues.” They’re truth issues: the backend thinks the request is HTTP when the client came via HTTPS, so it keeps redirecting “to HTTPS” forever.

  • Fix headers: X-Forwarded-Proto, X-Forwarded-Host, and (if you must) Apache’s RemoteIP handling.
  • Fix backend port and scheme expectations (Apache vhost on 127.0.0.1:8080, not 80).

Decision gate: If you can’t describe the request path in one sentence (“Client hits Nginx :443, proxies to Apache :8080 over loopback, Apache serves app with correct scheme headers”), stop and map it. No changes until you can narrate it.

Interesting facts and context (why this keeps happening)

  1. Nginx was built for the C10k problem. Igor Sysoev’s event-driven design (early 2000s) became a default choice for high-concurrency frontends.
  2. Apache’s process/thread models evolved for different eras. Prefork, worker, and event MPMs exist because “one size fits all” never fit production.
  3. Ubuntu historically made Apache the default web server. That muscle memory persists; lots of packages assume Apache is present or can be enabled.
  4. Systemd introduced socket activation as a first-class citizen. It can hold a port open and start a service on demand, which confuses “who’s listening?” debugging if you don’t check unit types.
  5. Reverse proxy headers are a de facto standard, not a single spec. X-Forwarded-* is widely used but inconsistently interpreted by apps and frameworks.
  6. “Too many redirects” is usually deterministic. It feels random because caches and HSTS can amplify it, but the loop is repeatable with curl -v.
  7. Port 80 is special because everything wants it. Captive portals, ACME HTTP-01 challenges, default sites, and “temporary debug servers” all compete for it.
  8. Apache’s a2enmod / a2ensite culture is different from Nginx’s sites-enabled symlinks. Mixing mental models leads to “I changed a file, why didn’t it take effect?” moments.

The mental model: bind, accept, proxy, redirect

Web stack confusion is rarely about “which server is better.” It’s about ownership of sockets and clarity of intent.

Binding: only one process owns a TCP port

On a normal Linux host, 0.0.0.0:80 can be bound by exactly one process at a time (SO_REUSEPORT is the exception, and neither stock Apache nor stock Nginx uses it by default). If you start Apache after Nginx is already bound to :80, Apache will fail to bind (or vice versa). If systemd is holding the port for socket activation, both services may “look like they’re starting” while one can’t actually receive traffic.
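
To rule socket activation in or out, ask systemd directly instead of guessing. A quick check (on a stock Ubuntu 24.04 install, neither the apache2 nor the nginx package ships a socket unit, so an empty result is the normal case):

cr0x@server:~$ systemctl list-sockets --all | grep -E ':80|:443'

No output means no socket unit owns those ports; any hit names the .socket unit and the service it would activate.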

Accepting: the first server sets the tone

The server that accepts the connection controls TLS termination, HTTP/2 vs HTTP/1.1, access logs at the edge, and “real client IP” decisions. Everything behind it is a backend. Treat it like an edge tier, even if it’s on the same VM.

Proxying: one direction, no boomerangs

A reverse proxy should send traffic to a different port or a different destination. The easiest way to create an accidental loop is to proxy to a hostname that resolves back to the same listener (or to the same port through NAT rules). “Works on my laptop” becomes “melts in production” because DNS and /etc/hosts differ.

Redirecting: the app must know what the client saw

Backends often generate absolute redirects based on perceived scheme and host. If TLS terminates at Nginx but Apache sees plain HTTP on 127.0.0.1:8080, Apache (or your app) may insist “you should be HTTPS” and redirect to HTTPS. The browser comes back to Nginx HTTPS, which forwards HTTP again, and the backend redirects again. That’s your loop.

There are two clean truths you can teach your backend:

  • Tell it the original scheme via X-Forwarded-Proto: https.
  • Or make the backend actually speak HTTPS too (less common locally; usually unnecessary).

One rule of life, borrowed (and paraphrased for ops) from Andrew S. Grove’s “Only the paranoid survive”: assume your defaults will betray you until you verify them.

Practical tasks (commands, expected output, decisions)

These are real tasks you can run on Ubuntu 24.04 to diagnose and fix port binding and proxy loops. Each task includes: command, output meaning, and the decision you should make.

Task 1: Identify who listens on 80/443 (the only starting point that matters)

cr0x@server:~$ sudo ss -ltnp '( sport = :80 or sport = :443 )'
State  Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0      511          0.0.0.0:80        0.0.0.0:*     users:(("nginx",pid=1274,fd=6))
LISTEN 0      511          0.0.0.0:443       0.0.0.0:*     users:(("nginx",pid=1274,fd=7))

What it means: Nginx owns both sockets. Apache is not the edge tier, even if it’s installed.

Decision: If you want Apache behind Nginx, move Apache off 80/443 (e.g., to 8080). If you want Apache as the only server, stop/disable Nginx.

Task 2: Check if Apache is failing due to bind errors

cr0x@server:~$ sudo journalctl -u apache2 -n 50 --no-pager
Aug 14 09:12:10 server apachectl[2219]: (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
Aug 14 09:12:10 server apachectl[2219]: AH00451: no listening sockets available, shutting down
Aug 14 09:12:10 server apachectl[2219]: AH00015: Unable to open logs

What it means: Apache is configured to listen on port 80 and can’t.

Decision: Either stop the other listener (often Nginx), or change Apache’s Listen directive(s) to a backend port.

Task 3: Confirm systemd unit states (enabled vs running vs socket-activated)

cr0x@server:~$ systemctl status nginx --no-pager
● nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; preset: enabled)
     Active: active (running) since Wed 2025-08-14 09:10:22 UTC; 4min 18s ago
       Docs: man:nginx(8)
   Main PID: 1274 (nginx)
      Tasks: 3 (limit: 18931)
     Memory: 6.8M
        CPU: 78ms
cr0x@server:~$ systemctl status apache2 --no-pager
● apache2.service - The Apache HTTP Server
     Loaded: loaded (/usr/lib/systemd/system/apache2.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Wed 2025-08-14 09:12:10 UTC; 2min 30s ago
   Main PID: 2219 (code=exited, status=1/FAILURE)

What it means: Nginx is healthy; Apache is failing. This is not a “restart both” situation; it’s a design decision.

Decision: Choose a topology: (A) Nginx-only, (B) Apache-only, (C) Nginx edge + Apache backend. Then configure accordingly.
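
Whichever you choose, make it stick in systemd rather than relying on start order. A minimal sketch for topologies A and B (run only the line that matches your choice; for topology C you keep both services and move Apache to a loopback port, as in the tasks below):

cr0x@server:~$ sudo systemctl disable --now apache2   # topology A: Nginx only
cr0x@server:~$ sudo systemctl disable --now nginx     # topology B: Apache only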

Task 4: Inventory Apache listening directives and enabled sites

cr0x@server:~$ grep -R --line-number -E '^\s*Listen\s+' /etc/apache2
/etc/apache2/ports.conf:5:Listen 80
/etc/apache2/ports.conf:9:Listen 443
cr0x@server:~$ sudo apachectl -S
VirtualHost configuration:
*:80                   is a NameVirtualHost
         default server 000-default (/etc/apache2/sites-enabled/000-default.conf:1)
         port 80 namevhost example.internal (/etc/apache2/sites-enabled/example.conf:1)
*:443                  example.internal (/etc/apache2/sites-enabled/example-le-ssl.conf:2)
ServerRoot: "/etc/apache2"
Main DocumentRoot: "/var/www/html"
Main ErrorLog: "/var/log/apache2/error.log"

What it means: Apache is configured to bind to 80 and 443. If Nginx is the edge, this must change.

Decision: Edit /etc/apache2/ports.conf and related vhosts to bind to 127.0.0.1:8080 (and possibly 8443 if needed). Or disable Apache vhosts entirely if it’s not used.

Task 5: Dump Nginx effective config to see real includes and upstreams

cr0x@server:~$ sudo nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
http {
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

What it means: You can’t trust “I edited one file” unless you know what’s included. nginx -T tells you the truth.

Decision: Locate where proxy_pass is defined and verify it points to a backend port that is not the same listener.

Task 6: Check for the classic self-proxy mistake (proxying to the same host/port)

cr0x@server:~$ sudo grep -R --line-number -E 'proxy_pass\s+http' /etc/nginx/sites-enabled /etc/nginx/conf.d
/etc/nginx/sites-enabled/example.conf:18:    proxy_pass http://127.0.0.1:80;

What it means: If Nginx listens on 80 and proxies to 127.0.0.1:80, it proxies to itself. That’s a loop that can look like a hang, a 502, or a CPU spike.

Decision: Change backend to a different port (e.g., Apache on 127.0.0.1:8080) or different destination entirely.
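
For the Nginx-edge / Apache-backend split this article uses, the corrected directive is deliberately boring (8080 is a convention, not a requirement; any free loopback port works):

    location / {
        proxy_pass http://127.0.0.1:8080;
    }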

Task 7: Validate backend reachability (separate connectivity from HTTP behavior)

cr0x@server:~$ curl -sS -D- http://127.0.0.1:8080/ -o /dev/null
HTTP/1.1 200 OK
Date: Wed, 14 Aug 2025 09:17:02 GMT
Server: Apache/2.4.58 (Ubuntu)
Content-Type: text/html; charset=UTF-8

What it means: Apache is reachable on 8080 and serves content. Good. Now your proxy can point there.

Decision: If this fails, fix Apache first (ports, vhost, firewall, service state). Don’t “tune Nginx” when the backend is dead.

Task 8: Catch a redirect loop with headers (you can’t fix what you won’t observe)

cr0x@server:~$ curl -k -I https://example.internal/
HTTP/2 301
date: Wed, 14 Aug 2025 09:18:41 GMT
location: http://example.internal/
server: nginx
cr0x@server:~$ curl -I http://example.internal/
HTTP/1.1 301 Moved Permanently
Date: Wed, 14 Aug 2025 09:18:45 GMT
Location: https://example.internal/
Server: nginx

What it means: HTTPS redirects to HTTP, and HTTP redirects to HTTPS. That’s a two-node loop. Usually misconfigured redirect rules or conflicting configs in different vhosts.

Decision: Pick one canonical scheme (usually HTTPS) and enforce it in exactly one place (usually the edge server block). Remove competing redirects in the backend.
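
“Exactly one place” at the edge usually means a plain-HTTP server block whose only job is to redirect, with all real routing living in the HTTPS block. A sketch, with example.internal standing in for the real hostname:

server {
    listen 80;
    server_name example.internal;
    return 301 https://$host$request_uri;
}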

Task 9: Inspect headers Nginx forwards (scheme and host are the usual culprits)

cr0x@server:~$ sudo grep -n 'proxy_set_header' /etc/nginx/sites-enabled/example.conf
22:    proxy_set_header Host $host;
23:    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
24:    proxy_set_header X-Forwarded-Proto $scheme;

What it means: If X-Forwarded-Proto isn’t set, many apps will mis-detect HTTPS and generate wrong redirects or mixed content.

Decision: Ensure Host and X-Forwarded-Proto are set. If you terminate TLS at Nginx, $scheme should be https for client HTTPS requests.

Task 10: Verify Apache sees the forwarded scheme and doesn’t “upgrade” again

cr0x@server:~$ sudo grep -R --line-number -E 'RewriteRule|RewriteCond|HTTPS|X-Forwarded-Proto' /etc/apache2/sites-enabled
/etc/apache2/sites-enabled/example.conf:12:RewriteCond %{HTTPS} !=on
/etc/apache2/sites-enabled/example.conf:13:RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

What it means: Apache is doing an HTTPS redirect based on its own %{HTTPS} variable. Behind an SSL-terminating proxy, Apache will think HTTPS is off.

Decision: Remove this redirect from Apache when Nginx is the edge, or rewrite the condition to respect X-Forwarded-Proto so the backend doesn’t fight the frontend.
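
If the redirect genuinely has to stay in Apache, make it test the forwarded scheme instead of %{HTTPS}. A sketch, assuming Nginx sets X-Forwarded-Proto as in Task 9:

RewriteCond %{HTTP:X-Forwarded-Proto} !=https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

A direct request to 127.0.0.1:8080 carries no forwarded header and will still be redirected, so test through the edge once this rule is in place.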

Task 11: Check for duplicate/default sites shadowing your intended host

cr0x@server:~$ ls -l /etc/nginx/sites-enabled
total 0
lrwxrwxrwx 1 root root 34 Aug 14 09:05 default -> /etc/nginx/sites-available/default
lrwxrwxrwx 1 root root 34 Aug 14 09:06 example.conf -> /etc/nginx/sites-available/example.conf

What it means: The default Nginx site can capture traffic you expected to go elsewhere, depending on server_name matching and default_server rules.

Decision: Disable the default site if it’s not needed, or ensure your intended site is marked default_server for the relevant listen lines.
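
On the stock Ubuntu layout, disabling the default site is just removing a symlink and reloading; making your own site the catch-all is one extra word on its listen lines (e.g. listen 443 ssl default_server;):

cr0x@server:~$ sudo rm /etc/nginx/sites-enabled/default
cr0x@server:~$ sudo nginx -t && sudo systemctl reload nginx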

Task 12: Confirm the “real” config is loaded and reload safely

cr0x@server:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
cr0x@server:~$ sudo systemctl reload nginx

What it means: Syntax is valid, and reload applies changes without dropping existing connections (most of the time, that’s what you want).

Decision: Prefer reload over restart for Nginx/Apache when applying config changes in production, unless you’re changing low-level modules or the service is wedged.
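
The same test-then-reload discipline applies on the Apache side; only the command names change:

cr0x@server:~$ sudo apachectl configtest
Syntax OK
cr0x@server:~$ sudo systemctl reload apache2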

Task 13: Check DNS/hosts resolution to avoid “proxy to myself” by name

cr0x@server:~$ getent hosts example.internal
10.10.20.15    example.internal

What it means: If Nginx proxies to http://example.internal and that resolves to the server’s own public IP, you can hairpin back into Nginx instead of reaching Apache.

Decision: Proxy to 127.0.0.1 (or a dedicated backend IP) and a backend port. Avoid using the public hostname for local backends unless you explicitly want to traverse edge routing.

Task 14: Verify UFW isn’t “helping” by blocking your backend port

cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
OpenSSH                    ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere

What it means: Backend port 8080 is not allowed from the internet, which is good. But if Apache binds to 127.0.0.1:8080, UFW doesn’t matter for loopback anyway.

Decision: Bind backend services to loopback. Don’t open backend ports unless you truly need remote access.

Task 15: Confirm Apache is actually listening where you think after changes

cr0x@server:~$ sudo ss -ltnp | grep -E ':(80|8080|443|8443)\b'
LISTEN 0      511        127.0.0.1:8080    0.0.0.0:* users:(("apache2",pid=3012,fd=4))
LISTEN 0      511          0.0.0.0:80      0.0.0.0:* users:(("nginx",pid=1274,fd=6))
LISTEN 0      511          0.0.0.0:443     0.0.0.0:* users:(("nginx",pid=1274,fd=7))

What it means: This is the clean split you want: Nginx owns internet ports, Apache stays private on loopback.

Decision: Lock it in: ensure Apache isn’t configured to bind to public interfaces, and ensure no other service will grab 8080 unexpectedly.

Task 16: Look for 502/504 root causes in logs (don’t stare at the browser)

cr0x@server:~$ sudo tail -n 30 /var/log/nginx/error.log
2025/08/14 09:22:33 [error] 1274#1274: *118 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.20.33, server: example.internal, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:8080/", host: "example.internal"

What it means: Nginx can’t connect to the upstream; Apache is down, listening elsewhere, or blocked.

Decision: Fix the backend service state and binding, not Nginx timeouts. Connection refused is not a performance problem.

Short joke #1: If you “fixed” a port conflict by rebooting, congratulations—you taught the problem to hide better.

Clean fix patterns (pick one and commit)

You can run Apache and Nginx together. You can also run either alone. The failure mode comes from running both with overlapping responsibilities and no clear port ownership.

Pattern A: Nginx edge (80/443) + Apache backend (127.0.0.1:8080)

This is the most common “both installed” end state that stays sane. Nginx handles TLS, HTTP/2, static files if you want, buffering, and simple routing. Apache serves PHP apps (via PHP-FPM or mod_php) or legacy .htaccess-based setups. Apache never touches ports 80/443 on the public interface.

Apache: move off 80/443

Edit /etc/apache2/ports.conf to something like:

cr0x@server:~$ sudo sed -n '1,120p' /etc/apache2/ports.conf
# If you just change these ports or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen 127.0.0.1:8080

Then ensure your site vhost listens on 8080:

cr0x@server:~$ sudo grep -n '<VirtualHost' /etc/apache2/sites-enabled/example.conf
1:<VirtualHost 127.0.0.1:8080>

Decision: Binding explicitly to 127.0.0.1 prevents accidental exposure and eliminates “why is UFW blocking my backend?” arguments.

Nginx: proxy to Apache and set the right headers

A minimal, honest Nginx server block looks like this (conceptually):

  • proxy_pass http://127.0.0.1:8080;
  • Forward Host and scheme via X-Forwarded-Proto
  • Optionally set X-Forwarded-Host and X-Forwarded-Port

Decision: The goal is that Apache/app can reconstruct the external URL exactly.
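
Putting those pieces together, a sketch of the HTTPS server block (the hostname, certificate paths, and the 8080 backend are placeholders to adapt):

server {
    listen 443 ssl http2;
    server_name example.internal;

    ssl_certificate     /etc/ssl/certs/example.internal.pem;
    ssl_certificate_key /etc/ssl/private/example.internal.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}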

Pattern B: Apache only (80/443) and remove Nginx from the path

If you’re not using Nginx features, don’t run it “just because.” Simpler stacks are easier to debug at 3 a.m.

Do this:

  • Stop and disable Nginx.
  • Ensure Apache binds to 80/443 and has the correct vhosts.

And do not do this:

  • Leave Nginx enabled “in case we need it later.” That’s how you get surprise port conflicts during package upgrades.

Pattern C: Nginx only and remove Apache from the path

If the “backend” is actually a modern app server (Gunicorn, uWSGI, Node, etc.), you may not need Apache at all. Nginx can reverse proxy directly to the app, and you avoid the extra hop.

Decision: If your only reason for Apache is “it came with the tutorial,” delete that tutorial from your brain and move on.

Proxy loops and redirect hell: how to spot and fix

There are two big classes of loops:

  • Network/proxy loops: proxying to yourself (same host/port), or a name that resolves back to the edge listener.
  • Redirect loops: backend keeps redirecting because it thinks scheme/host/port differ from what the client used.

Loop type 1: Nginx proxies to itself

Typical symptom: Requests hang, CPU climbs, Nginx error log shows upstream timeouts, or you see many internal connections to itself.

Root cause: proxy_pass http://127.0.0.1:80 while Nginx listens on 80; or proxy_pass http://example.internal and DNS resolves to the same IP Nginx listens on.

Fix: Ensure upstream is a different destination: loopback on a different port, a Unix socket, or a different host. Use explicit IP:port to avoid DNS surprises.

Loop type 2: HTTP ↔ HTTPS redirect ping-pong

Typical symptom: Browser says “too many redirects,” curl -I shows alternating Location: http:// and Location: https://.

Root cause: One layer forces HTTPS, another forces HTTP, or the backend forces HTTPS because it doesn’t understand it’s behind TLS termination.

Fix: Pick one enforcement point. Usually: Nginx forces HTTP→HTTPS, backend does not. Teach backend scheme via X-Forwarded-Proto and ensure app trusts it appropriately.
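
To confirm the ping-pong is actually gone, let curl follow redirects and count them (example.internal is the placeholder host from the earlier tasks):

cr0x@server:~$ curl -skL -o /dev/null -w 'redirects: %{num_redirects}, final: %{http_code}\n' https://example.internal/
redirects: 0, final: 200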

Loop type 3: Canonical host redirects fighting server_name

Typical symptom: You request www, it redirects to apex, then back to www.

Root cause: Two different configs each think they own canonicalization: Nginx rewrite + app-level canonical host setting.

Fix: Decide where canonical host lives. For most orgs: do it at the edge (Nginx) and turn off app canonical redirects unless required for multi-tenant routing.

Loop type 4: ACME/Let’s Encrypt challenge routes incorrectly

Typical symptom: Cert issuance fails, and the challenge endpoint returns a redirect or a 404 from the wrong server.

Root cause: You have both servers trying to serve /.well-known/acme-challenge/, or the proxy sends that path to a backend that redirects to HTTPS when the CA expects plain HTTP.

Fix: Serve ACME challenge directly on the edge HTTP vhost and bypass backend redirects for that path.
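
A sketch of that edge-side carve-out; the webroot path is an assumption, so point it at whatever directory your ACME client actually writes challenges to:

server {
    listen 80;
    server_name example.internal;

    # Challenges are answered here, before any redirect logic can touch them
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}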

Short joke #2: A redirect loop is the web’s way of telling you, “I heard you like round trips.”

Three corporate mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption

They migrated an internal dashboard to Ubuntu 24.04 on a fresh VM. The engineer doing the cutover installed “a web server” using muscle memory. Apache came first. Later that day, someone added Nginx because the security team wanted a specific TLS cipher suite profile they’d used elsewhere. Nobody removed Apache, because why would you? It wasn’t “hurting anything.”

During the change window, they updated DNS to the new VM. Traffic arrived, and the dashboard loaded… sometimes. Half the time, it served the Apache default page. The other half, it served the correct app. The team did what teams do under pressure: restarted services, flushed caches, and blamed DNS propagation like it was a weather event.

The root cause was painfully boring. Nginx was bound to 80/443, but Apache was also enabled and configured for 80/443. Apache couldn’t bind, so it failed. However, the app team kept testing via a local port-forward that hit Apache’s internal port from a previous config, and they assumed production traffic followed the same path. It didn’t.

The “fix” was to decide a topology and enforce it: Nginx as edge, Apache on 127.0.0.1:8080. Then they made the health check hit the edge endpoint, not the backend. The incident ended not with heroics, but with a shared understanding that assumptions are not architecture.

Mini-story 2: The optimization that backfired

A different company had a legacy PHP app behind Apache. Someone wanted better performance and read that Nginx is “faster.” They placed Nginx in front of Apache as a reverse proxy. Reasonable. Then they tried to optimize further: they enabled aggressive caching and added a set of rewrite rules to force HTTPS and canonical host at both layers “for safety.”

Within hours, users complained they couldn’t log in. Sessions were dropped, and the app randomly redirected between HTTP and HTTPS. The monitoring showed a jump in 301 responses and a suspiciously low rate of 200s. The Nginx cache was proudly serving cached redirects. That’s a special kind of self-own: you take a configuration mistake and make it faster.

Debugging took longer than it should have because the team stared at browser behavior. Once they switched to curl -v and compared headers from edge vs backend, it was obvious. Apache’s rewrite forced HTTPS because it didn’t know TLS was terminated at Nginx. Nginx also forced HTTPS, but with a slightly different canonical host rule. The client bounced between them.

The fix was to delete half the cleverness. HTTPS enforcement stayed at Nginx only. Apache stopped redirecting and instead trusted X-Forwarded-Proto for scheme logic where needed. Cache rules excluded redirects and any authenticated paths. Performance improved, but more importantly, behavior stabilized. The lesson: optimizing a misunderstanding just produces wrong answers faster.

Mini-story 3: The boring but correct practice that saved the day

A fintech team ran Nginx as edge and Apache as backend for a handful of older internal tools. Nothing exciting. The practice that mattered was their “port ownership contract”: a small internal runbook that stated, in plain language, which service binds to which ports on every host class.

One afternoon, a package update pulled in Apache on a host that previously only had Nginx. The update enabled Apache by default. On reboot, Apache tried to grab port 80 before Nginx started. Nginx failed to bind, and the host served Apache’s default page. That could have been a messy outage.

But the monitoring checks were built from the same contract. They didn’t just check “is port 80 open.” They checked that the edge response contained an expected header and that the server certificate matched the intended virtual host. The alert was specific: “edge identity mismatch,” not “website down.”

The on-call followed the runbook: check listeners, disable Apache, reload Nginx, confirm with curl. Total time to recovery was short, not because they were geniuses, but because they wrote down the boring truth and made monitoring test it.

Common mistakes: symptoms → root cause → fix

This section is deliberately specific. If you see a symptom, you should be able to jump to a likely cause and a concrete correction.

1) Apache won’t start: “Address already in use: AH00072”

  • Symptoms: systemctl status apache2 shows failed; journal shows bind error on 0.0.0.0:80 or :443.
  • Root cause: Nginx (or another service) already owns the port; or systemd socket activation is holding it.
  • Fix: Decide who owns 80/443. If Nginx is edge, move Apache to 127.0.0.1:8080 and update vhosts. If Apache is edge, stop/disable Nginx.

2) Nginx starts, but you get 502 Bad Gateway

  • Symptoms: Nginx returns 502; error log shows connect() failed (111: Connection refused) or upstream timed out.
  • Root cause: Upstream isn’t listening where Nginx expects: wrong port, Apache down, or Apache bound to a different address. (On loopback the firewall is irrelevant, but the bind address still matters.)
  • Fix: curl the upstream directly (127.0.0.1:8080). Fix Apache binding and service state. Then fix Nginx proxy_pass.

3) Browser: “Too many redirects”

  • Symptoms: Alternating 301/302; curl -I shows bouncing Location headers.
  • Root cause: HTTPS enforcement or canonical host rules duplicated across layers; backend doesn’t know original scheme.
  • Fix: Enforce redirects in one place (edge). Forward X-Forwarded-Proto. Remove backend redirect rules or make them conditional on forwarded scheme.

4) Nginx proxies to itself (silent loop)

  • Symptoms: Requests hang; high worker CPU; Nginx logs show upstream timeouts; upstream equals same host:port.
  • Root cause: proxy_pass points to 127.0.0.1:80 while Nginx listens on 80, or points to a hostname resolving to the same listener.
  • Fix: Change upstream to distinct backend port/IP. Prefer loopback IP:port or Unix socket for local upstreams.

5) Wrong site served (default page) instead of your app

  • Symptoms: You see “Apache2 Ubuntu Default Page” or Nginx welcome page unexpectedly.
  • Root cause: Default site is enabled and is the default vhost; server_name mismatch; SNI mismatch on 443.
  • Fix: Disable default sites you don’t use. Make your intended host match server_name and mark default_server deliberately if needed.

6) HTTP works, HTTPS breaks (or vice versa)

  • Symptoms: One scheme works; the other returns 404, 301 loops, or wrong backend.
  • Root cause: Different vhost/server blocks for 80 vs 443 not kept in sync; TLS vhost proxies to different upstream; certificate vhost mismatch.
  • Fix: Ensure both schemes route consistently. Use HTTP only for redirect to HTTPS at edge, then unify routing in the HTTPS server block.

Checklists / step-by-step plan

Checklist 1: Decide your topology (stop negotiating with the universe)

  1. Do you need Nginx features (HTTP/2 tuning, caching, simple routing, edge-level auth, rate limiting)? If yes, make it edge.
  2. Do you need Apache features (legacy .htaccess behavior, existing mod_* dependencies)? If yes, keep it as backend.
  3. If you need neither, run one server. Prefer less moving parts.

Checklist 2: Nginx edge + Apache backend (clean split)

  1. Ensure Nginx listens on 0.0.0.0:80 and 0.0.0.0:443.
  2. Move Apache to 127.0.0.1:8080 by editing /etc/apache2/ports.conf and vhosts.
  3. In Nginx, set:
    • proxy_set_header Host $host;
    • proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    • proxy_set_header X-Forwarded-Proto $scheme;
  4. Remove backend HTTPS redirects that use %{HTTPS} unless they’re rewritten to respect forwarded proto.
  5. Test backend directly with curl to 127.0.0.1:8080, then test via Nginx HTTPS.
  6. Reload services safely: nginx -t then systemctl reload nginx; apachectl configtest then systemctl reload apache2.
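
A compact verification pass for step 5, with the output you want to see (host and backend port are the placeholders used throughout this article):

cr0x@server:~$ curl -sS -o /dev/null -w 'backend: %{http_code}\n' http://127.0.0.1:8080/
backend: 200
cr0x@server:~$ curl -skL -o /dev/null -w 'edge: %{http_code} after %{num_redirects} redirects\n' https://example.internal/
edge: 200 after 0 redirects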

Checklist 3: Apache-only (straightforward and boring)

  1. Stop and disable Nginx: systemctl stop nginx, systemctl disable nginx.
  2. Ensure Apache listens on 80/443 and has correct vhosts.
  3. Confirm with ss -ltnp that Apache owns ports.
  4. Run apachectl -S and confirm the expected vhost is default for each port.

Checklist 4: Nginx-only (modern app backend)

  1. Stop and disable Apache if unused.
  2. Proxy to app server on a separate port/socket.
  3. Keep redirect enforcement at Nginx only.
  4. Ensure logs are configured so you can debug upstream failures without guessing.

FAQ

1) Can Apache and Nginx both listen on port 80 if one binds to IPv6 and the other to IPv4?

Sometimes, but you’re buying a weird edge case. Dual-stack binding rules can make this appear to work until it doesn’t. Pick one owner for 80/443.

2) Should I use 8080 or 8000 for Apache backend?

Use any unoccupied high port, but be consistent. 8080 is conventional, which helps future humans. Bind it to 127.0.0.1 unless you need remote access.

3) Why does my app keep redirecting to HTTP even though clients use HTTPS?

Because the backend sees a plain HTTP connection from the proxy and doesn’t know the client used HTTPS. Fix by forwarding X-Forwarded-Proto and configuring the app/framework to trust proxy headers.

4) What’s the quickest way to prove a self-proxy loop?

Look at proxy_pass and compare it to the listening sockets. If Nginx listens on :80 and proxies to 127.0.0.1:80, that’s the loop. Also watch Nginx error logs for upstream timeouts while the upstream is “itself.”

5) Why does disabling the default Nginx site matter?

Because default vhost selection is deterministic but not always intuitive under pressure. Leaving a default site enabled increases the chance that an unexpected hostname or IP request lands on the wrong content.

6) Is it okay to proxy to a hostname instead of 127.0.0.1?

It can be, but it’s riskier. DNS changes, split-horizon setups, and /etc/hosts hacks can turn “backend” into “front door” again. For local backends, prefer explicit loopback.

7) Do I need to open the backend port in UFW?

No, not if the backend binds to 127.0.0.1. Keep it private. If you bind to 0.0.0.0, you’ll be forced to manage firewall rules and you’ll eventually forget one.

8) Why do I see Apache headers even though Nginx is in front?

Usually because the request reached Apache directly: a local test against the backend port, stale DNS pointing at an old host, or a vhost still bound to a public interface. By default Nginx does not forward the upstream Server: header on proxied responses; it sends its own, though other identifying headers (X-Powered-By, error pages) do pass through unless you hide them. Either way, don’t confuse headers with who owns the socket.

9) What about HTTP/2, HTTP/3, and TLS settings?

Those belong at the edge. If Nginx terminates TLS, tune TLS and HTTP/2 there. Keep the backend simple: stable HTTP/1.1 over loopback is fine and easy to debug.

10) How do I avoid this happening again after package upgrades?

Disable unused services, and monitor identity, not just availability. Confirm the right server responds on the right port with the right headers/cert. “Port open” is a low bar.

Conclusion: next steps you can do today

Apache vs Nginx “confusion” on Ubuntu 24.04 is usually not a mystery. It’s two services trying to be the same thing, on the same ports, with mismatched ideas about scheme and host. Fix it by being decisive and by checking the boring primitives: listeners, upstreams, headers, redirects.

  1. Run ss -ltnp and write down who owns 80/443.
  2. Choose one topology: Nginx-only, Apache-only, or Nginx edge + Apache backend.
  3. If you proxy, make upstreams unambiguous (127.0.0.1:8080) and forward X-Forwarded-Proto.
  4. Remove duplicate redirect logic across layers; enforce HTTPS once.
  5. Lock it in: disable unused default sites and unused services so upgrades can’t “helpfully” resurrect them.