Ubuntu 24.04: PHP upload limits — fix upload_max_filesize where it actually matters (case #10)

Someone tries to upload a 40 MB PDF. The UI spins, then dies with a smug “413” or a vague “The uploaded file exceeds the upload_max_filesize directive.” You bump upload_max_filesize to 128M, reload something, and… nothing changes. Classic.

On Ubuntu 24.04, the problem is rarely “PHP has a limit.” The problem is “you changed the wrong limit, in the wrong place, for the wrong SAPI, behind a proxy that didn’t get the memo.” Let’s fix it where it actually matters.

The mental model: uploads are a relay race

File upload isn’t “a PHP thing.” It’s an HTTP request with a body. That body passes through a chain:

  1. Client (browser/mobile app) creates a multipart/form-data request.
  2. Any edge/WAF/CDN/reverse proxy enforces its own max request size and timeouts.
  3. Your load balancer or ingress does the same.
  4. Your web server (Nginx/Apache) decides whether to buffer, stream, or reject the body.
  5. PHP-FPM receives the request, writes temporary upload files, then the app reads/moves them.
  6. Your app/framework may enforce its own validation limit and reject it anyway.
  7. Storage (disk quota, permissions, filesystem full) can sabotage the temp file stage.

The limit you need to change is the first one in the chain that’s lower than your target. Changing a later one is like widening a highway exit ramp while the bridge before it is still one lane.
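Before touching any knob, make a payload of exact, known size so every layer gets tested with the same file (path and size here are examples):

```shell
# 40 MiB of zeros; status=none keeps dd quiet
dd if=/dev/zero of=/tmp/upload-test.bin bs=1M count=40 status=none
stat -c '%s bytes' /tmp/upload-test.bin   # prints: 41943040 bytes
```

POST it with `curl -F 'file=@/tmp/upload-test.bin'` against both the local and the public endpoint; the point in the chain where the status codes diverge is where the limit lives.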

Joke #1: Raising upload_max_filesize without checking Nginx is like buying a bigger suitcase for a carry-on-only flight. You’ll still meet the gate agent.

Interesting facts and historical context

  • PHP’s upload limits were designed around shared hosting. Early PHP deployments assumed many unrelated users sharing one machine; conservative defaults helped prevent “one upload kills the box.”
  • post_max_size exists because uploads are part of POST bodies. Multipart uploads are still just POST data; PHP counts the whole request body, not just the file.
  • Nginx historically defaulted to 1 MB request bodies. That default has ruined more Friday releases than most people admit.
  • Apache has multiple knobs depending on modules. The request-body limit story changes with core settings, mod_security, and proxy modules.
  • 413 is not a “PHP error.” It’s an HTTP status (“Content Too Large,” formerly “Request Entity Too Large”) meaning the server or a proxy refuses the request body; PHP may never see it.
  • PHP uploads hit disk before your code sees them. The file lands in upload_tmp_dir (or system temp) first; if that filesystem is full, you get mysterious failures.
  • PHP-FPM introduced pool-level overrides to support multi-tenant setups. This is great until you forget a pool has php_admin_value that silently overrides ini files.
  • systemd changed the “reload” habits of many admins. On modern Ubuntu, “restart the daemon” is reliable; “reload the config” depends on the service and your patience.
  • Browsers and proxies have their own timeouts. Size limits are obvious; timeouts are sneaky. A slow uplink can turn a 100 MB upload into a timeout party.

Fast diagnosis playbook

First: identify who is rejecting the request

  • If the client gets 413 instantly, the rejection is almost certainly Nginx/Apache/proxy/WAF. PHP didn’t even get invited.
  • If you get a PHP warning about upload_max_filesize, PHP saw the request headers and decided the body is too large.
  • If you get a generic app error after waiting, suspect timeouts, temp disk full, or app validation.

Second: confirm which PHP SAPI and config you’re changing

  • CLI PHP config is irrelevant for web uploads unless you run uploads via CLI (you don’t).
  • For web: you’re on PHP-FPM (common) or Apache mod_php (less common on Ubuntu 24.04).
  • Confirm the active ini path, then confirm pool overrides.

Third: find the smallest limit in the chain

  • Edge/WAF/CDN limits (request body caps) and ingress rules.
  • Nginx: client_max_body_size; proxy buffering; timeouts.
  • Apache: request body and security module rules.
  • PHP: upload_max_filesize, post_max_size, memory_limit, timeouts, temp dir.
  • App: validation rules and framework defaults.
  • Disk: free space + inode availability + permissions.

Practical tasks (commands, outputs, decisions)

These are the things I actually run on Ubuntu 24.04 when uploads fail. Each task includes: command, what output means, and the decision you make.

Task 1: Verify the web stack (Nginx vs Apache) on the host

cr0x@server:~$ systemctl status nginx apache2 --no-pager
● nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-12-30 09:12:31 UTC; 2h 7min ago

● apache2.service - The Apache HTTP Server
     Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled; preset: enabled)
     Active: inactive (dead)

Meaning: Nginx is in play; Apache is not. If you’re editing Apache config, you’re decorating the wrong building.

Decision: Focus on Nginx limits and PHP-FPM. Ignore Apache.

Task 2: Confirm PHP-FPM is used and which version

cr0x@server:~$ systemctl status php8.3-fpm --no-pager
● php8.3-fpm.service - The PHP 8.3 FastCGI Process Manager
     Loaded: loaded (/usr/lib/systemd/system/php8.3-fpm.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-12-30 09:10:07 UTC; 2h 10min ago

Meaning: PHP 8.3 FPM is serving web PHP. Ubuntu 24.04 ships with PHP 8.3 by default, so this checks out.

Decision: All PHP upload settings must be applied to the FPM config, not CLI.

Task 3: Find the active php.ini for PHP-FPM (not CLI)

cr0x@server:~$ php-fpm8.3 -i | grep -E 'Loaded Configuration File|Scan this dir for additional .ini files|upload_max_filesize|post_max_size' | head
Loaded Configuration File => /etc/php/8.3/fpm/php.ini
Scan this dir for additional .ini files => /etc/php/8.3/fpm/conf.d
upload_max_filesize => 2M => 2M
post_max_size => 8M => 8M

Meaning: The FPM SAPI loads /etc/php/8.3/fpm/php.ini. Defaults are tiny (2M/8M), which is why your PDF is crying.

Decision: Edit the FPM ini (or a conf.d override) and also raise post_max_size alongside upload_max_filesize.
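If you’d rather not touch the stock php.ini, a drop-in override in conf.d does the same job. A minimal sketch (the filename is a convention; conf.d files load in lexical order, so a high prefix wins ties):

```ini
; /etc/php/8.3/fpm/conf.d/99-uploads.ini
; Loaded after php.ini; the 99- prefix makes it sort last among overrides.
upload_max_filesize = 128M
post_max_size = 160M
```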

Task 4: Confirm the CLI values are irrelevant (but useful for catching confusion)

cr0x@server:~$ php -i | grep -E 'Loaded Configuration File|upload_max_filesize|post_max_size' | head
Loaded Configuration File => /etc/php/8.3/cli/php.ini
upload_max_filesize => 128M => 128M
post_max_size => 128M => 128M

Meaning: CLI already has 128M, but FPM is still at 2M/8M. This is the most common “I swear I changed it” situation.

Decision: Stop editing CLI ini for web problems. Fix FPM.

Task 5: Check for pool-level overrides that silently win

cr0x@server:~$ grep -R --line-number -E 'php_(admin_)?value\[(upload_max_filesize|post_max_size)\]' /etc/php/8.3/fpm/pool.d
/etc/php/8.3/fpm/pool.d/www.conf:402:php_admin_value[upload_max_filesize] = 10M
/etc/php/8.3/fpm/pool.d/www.conf:403:php_admin_value[post_max_size] = 10M

Meaning: Even if you set 128M in php.ini, the pool forces 10M and you lose. php_admin_value is not negotiable at runtime.

Decision: Update the pool file or remove the override. Then restart FPM.

Task 6: Apply sane PHP settings (edit) and validate the syntax

cr0x@server:~$ sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 128M/; s/^post_max_size = .*/post_max_size = 128M/' /etc/php/8.3/fpm/php.ini
cr0x@server:~$ sudo php-fpm8.3 -t
[30-Dec-2025 11:22:14] NOTICE: configuration file /etc/php/8.3/fpm/php-fpm.conf test is successful

Meaning: Config parses. If it doesn’t, don’t restart; fix it first.

Decision: Proceed to restart PHP-FPM.

Task 7: Restart PHP-FPM (don’t “reload and hope”)

cr0x@server:~$ sudo systemctl restart php8.3-fpm
cr0x@server:~$ systemctl is-active php8.3-fpm
active

Meaning: PHP-FPM is back up. Restart is blunt, but it’s deterministic.

Decision: Now verify the effective values from the running service.

Task 8: Confirm effective values via a local FastCGI request (no browser guesswork)

cr0x@server:~$ printf '%s\n' '<?php echo ini_get("upload_max_filesize"),"\n",ini_get("post_max_size"),"\n";' | sudo tee /var/www/html/_ini.php > /dev/null
cr0x@server:~$ curl -sS http://127.0.0.1/_ini.php
128M
128M

Meaning: The web-served PHP now reports 128M/128M. This is the truth that matters.

Decision: If uploads still fail, the bottleneck is upstream (Nginx/proxy) or downstream (temp disk, app). Delete /var/www/html/_ini.php when you’re done; diagnostic endpoints that leak config shouldn’t linger.

Task 9: Check Nginx request body limit

cr0x@server:~$ sudo nginx -T 2>/dev/null | grep -n 'client_max_body_size'
# (no output)

Meaning: No explicit setting found. Nginx will use its default, which is typically 1m.

Decision: Set client_max_body_size in the correct context (server or location) and reload Nginx.

Task 10: Add client_max_body_size and validate Nginx config

cr0x@server:~$ sudo tee /etc/nginx/conf.d/uploads.conf > /dev/null <<'EOF'
# Files in conf.d are included inside the http{} block, so this sets a
# global default every server block inherits unless it overrides it.
# (Avoid adding another "listen 80 default_server" here: stock Ubuntu's
# default site already claims it, and nginx -t fails on duplicates.)
client_max_body_size 128m;
EOF
cr0x@server:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
cr0x@server:~$ sudo systemctl reload nginx
cr0x@server:~$ systemctl is-active nginx
active

Meaning: Nginx now accepts request bodies up to 128 MB wherever that directive applies. Config loads.

Decision: Retest uploads. client_max_body_size is scoped (http, server, location) and a narrower scope overrides a wider one, so make sure the directive actually covers the vhost and location that serve the upload endpoint.

Task 11: Catch a proxy/CDN limit by reproducing and watching status codes

cr0x@server:~$ curl -sS -o /dev/null -w '%{http_code}\n' -F 'file=@/var/log/syslog' http://127.0.0.1/upload.php
200

Meaning: Locally it’s fine. If the same request through the public hostname returns 413, your edge/proxy is the limiter.

Decision: Compare local vs public path. If local works and public fails, stop touching PHP and fix the proxy/WAF/ingress.

Task 12: Inspect logs for the layer that’s complaining

cr0x@server:~$ sudo tail -n 30 /var/log/nginx/error.log
2025/12/30 11:40:01 [error] 24177#24177: *812 client intended to send too large body: 187654321 bytes, client: 203.0.113.44, server: example, request: "POST /upload.php HTTP/1.1", host: "example"

Meaning: Nginx rejected the body. That error line is basically a confession.

Decision: Fix Nginx config scope (wrong vhost/location), reload, and retry.

Task 13: Check PHP-FPM logs for “file too large” vs “no temp dir” vs timeouts

cr0x@server:~$ sudo tail -n 50 /var/log/php8.3-fpm.log
[30-Dec-2025 11:42:18] WARNING: [pool www] child 30112 said into stderr: "PHP Warning:  POST Content-Length of 187654321 bytes exceeds the limit of 134217728 bytes in Unknown on line 0"

Meaning: This is PHP enforcing post_max_size: 134217728 bytes is exactly 128M, and PHP counts the whole multipart body (file, form fields, boundaries), not just the file. Here the request is simply larger than the limit, and even a file near 128M would not fit once multipart overhead is added.

Decision: Set post_max_size higher than upload_max_filesize (common practice: +10–20%).
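The log speaks bytes while the ini speaks suffixes, so do the conversion explicitly. A quick sanity check in plain shell arithmetic (the Content-Length is the one from the warning above):

```shell
limit=$((128 * 1024 * 1024))   # post_max_size = 128M, in bytes
body=187654321                  # POST Content-Length from the FPM warning
echo "limit=${limit} body=${body} over_by=$((body - limit)) bytes"
# prints: limit=134217728 body=187654321 over_by=53436593 bytes
```

That’s roughly 51 MiB over the cap; no amount of “headroom” tuning closes a gap that size, which tells you whether to raise the limit or shrink the file.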

Task 14: Check temp storage space and inode health

cr0x@server:~$ df -h /tmp /var/tmp /var/lib/php/sessions
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           3.1G  3.0G   40M  99% /tmp
/dev/sda2        80G   52G   25G  68% /
/dev/sda2        80G   52G   25G  68% /

Meaning: /tmp is tmpfs and nearly full. PHP uploads may write temp files there (depending on upload_tmp_dir), and “99%” is flirting with failure.

Decision: Free space, or move upload_tmp_dir to a real filesystem with headroom.

Task 15: Verify what PHP thinks the temp directory is

cr0x@server:~$ php-fpm8.3 -i | grep -E '^upload_tmp_dir|^sys_temp_dir'
upload_tmp_dir => no value => no value
sys_temp_dir => no value => no value

Meaning: With no explicit value, PHP uses the system temp directory (often /tmp). If /tmp is a constrained tmpfs, you get random-seeming upload failures.

Decision: Set upload_tmp_dir to a directory on disk with correct permissions.

Task 16: Set upload_tmp_dir and verify permissions

cr0x@server:~$ sudo install -d -o www-data -g www-data -m 1733 /var/tmp/php-uploads
cr0x@server:~$ sudo grep -n '^upload_tmp_dir' /etc/php/8.3/fpm/php.ini || true
cr0x@server:~$ echo 'upload_tmp_dir = /var/tmp/php-uploads' | sudo tee -a /etc/php/8.3/fpm/php.ini > /dev/null
cr0x@server:~$ sudo systemctl restart php8.3-fpm
cr0x@server:~$ sudo -u www-data bash -lc 'touch /var/tmp/php-uploads/.permtest && ls -la /var/tmp/php-uploads/.permtest'
-rw-r--r-- 1 www-data www-data 0 Dec 30 11:55 /var/tmp/php-uploads/.permtest

Meaning: Directory exists with the sticky bit set (mode 1733), and the FPM user can write there.

Decision: This removes the tmpfs bottleneck. If failures persist, look at timeouts or app validation.

Where PHP values really come from (precedence that bites)

When someone says “I set it in php.ini,” my first question is: “Which php.ini, for which SAPI, and did something override it?” PHP configuration is layered, and Ubuntu makes it easy to change the wrong layer.

The order of power (from “wins most” to “wins least”)

  1. PHP-FPM pool config in /etc/php/8.3/fpm/pool.d/*.conf: php_admin_value[] cannot be overridden by anything below, including ini_set(); php_value[] also applies here but can still be overridden per directory or at runtime where the directive allows it.
  2. Per-directory config (only in some SAPIs): .user.ini for PHP-FPM, .htaccess for Apache mod_php. (Yes, you can have both in a migration mess.)
  3. Additional ini files in /etc/php/8.3/fpm/conf.d/ loaded in numeric order.
  4. Main php.ini for that SAPI: /etc/php/8.3/fpm/php.ini.
  5. Compiled defaults (the values you get when you configure nothing, which is what staging often accidentally does).
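For layer 2, a .user.ini is just an ini fragment in the docroot. A sketch (upload_max_filesize is a PHP_INI_PERDIR directive, so per-directory files may set it; note FPM caches these files for user_ini.cache_ttl seconds, 300 by default, so changes are not instant):

```ini
; .user.ini in the application docroot, read by PHP-FPM per directory
upload_max_filesize = 64M
post_max_size = 80M
```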

Why php_admin_value is a trap

php_admin_value is intentionally “admin only.” It cannot be overridden by application code, and it will override your ini settings. That’s fantastic for multi-tenant hosting and for preventing a single app from turning itself into a memory hog. It’s also how you end up with one pool stuck at 10M while everyone argues about why php.ini changes “don’t work.”

What to do on Ubuntu 24.04

Pick one strategy and be consistent:

  • Single app server: put global upload settings in /etc/php/8.3/fpm/php.ini or a dedicated /etc/php/8.3/fpm/conf.d/99-uploads.ini. Avoid pool overrides unless you have a reason.
  • Multi-app host: keep php.ini conservative and set per-pool limits in each pool.d file. Document them. Seriously. Future-you will forget.

One quote worth keeping in your head when touching production knobs: “Hope is not a strategy.” — General Gordon R. Sullivan. It applies uncomfortably well to “I reloaded and hoped it took.”

Nginx/Apache/proxies: the other limits

Uploads die upstream more often than they die in PHP. That’s not because PHP is flawless; it’s because rejecting an oversized body early is cheaper than buffering it and then telling PHP to deal with it.

Nginx: client_max_body_size and where to set it

client_max_body_size can be set in http, server, or location context. “I set it in nginx.conf” is not enough information. If you set it in a file that isn’t included, or in a server block that isn’t used, nothing changes.

Also, if you use multiple server blocks (redirect HTTP to HTTPS, separate internal/external vhosts), you must set it where the upload endpoint is actually served.
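One pattern that avoids blanket limits: keep the server-wide default conservative and raise it only where uploads actually happen. A sketch (server_name and paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;     # placeholder vhost
    client_max_body_size 8m;         # conservative site-wide default

    location /upload {
        client_max_body_size 160m;   # larger cap only on the upload endpoint
        # fastcgi_pass / proxy_pass etc. as in your real vhost
    }
}
```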

Apache: request size limits are a hydra

If you’re on Apache, you might be dealing with:

  • core request limits (varies by version/module usage)
  • mod_security rules that cap request body size or reject multipart patterns
  • proxy modules when Apache fronts PHP-FPM

In practice, Apache shops often find the limit lives in security middleware rather than Apache itself.

Reverse proxies and CDNs: the silent “no”

Even when your origin is correct, an edge can refuse a large body:

  • Managed WAF policies may cap request bodies.
  • Ingress controllers in container platforms frequently have default caps.
  • Load balancers may impose max header size and body size limits.

The symptom is usually: local curl to localhost works; external upload fails with 413 or a vendor-branded error page. Your origin logs are clean because the request never arrived.

PHP-FPM specifics on Ubuntu 24.04

The four PHP knobs that matter for uploads

  • upload_max_filesize: max size of an individual uploaded file.
  • post_max_size: max size of the entire POST body. Must be >= upload size plus overhead.
  • memory_limit: does not limit upload size directly, but many apps read uploads into memory or process them (image resizing) and then this becomes the real ceiling.
  • max_input_time / max_execution_time: large uploads on slow links can hit these. Also watch Nginx/FPM timeouts.
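Put together, an aligned set for a 128M target might look like this (values are illustrative, not a recommendation):

```ini
; /etc/php/8.3/fpm/conf.d/99-uploads.ini -- illustrative values
upload_max_filesize = 128M
post_max_size = 160M      ; whole request body: file + fields + multipart overhead
memory_limit = 256M       ; only the real ceiling if the app loads files into memory
max_input_time = 300      ; seconds PHP may spend reading/parsing input
max_execution_time = 300
```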

How big should you set them?

Be deliberate. “Set everything to 2G” is not engineering; it’s surrender.

  • Set upload_max_filesize to your business requirement (example: 128M).
  • Set post_max_size slightly higher (example: 140M–160M) because multipart encoding adds overhead and forms may include fields.
  • Set web server/proxy caps slightly above post_max_size.
  • Make sure temp storage can absorb at least a few concurrent uploads at that size.

Timeout alignment (the slow-upload reality)

If your users are uploading from a hotel Wi‑Fi network, the upload might take minutes. That’s normal. Your stack must agree to wait that long.

  • Nginx: client_body_timeout, proxy_read_timeout (if proxying), and buffering settings.
  • PHP-FPM: request termination timeouts (pool settings can kill “slow requests”).
  • App: background processing vs synchronous processing.
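As an Nginx sketch for a dedicated upload location (directive values are examples to align, not recommendations):

```nginx
location /upload {
    client_max_body_size 160m;
    client_body_timeout 300s;      # max pause between reads from a slow client
    fastcgi_read_timeout 300s;     # how long to wait on PHP-FPM's response
    # proxy_read_timeout 300s;     # use instead if proxying over HTTP
}
```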

Joke #2: Upload limits are the only place where “just one more megabyte” can cause a production incident. It’s a diet plan for servers, and they cheat too.

Application-layer traps (frameworks and CMS)

Once the request body survives the gauntlet, your application can still reject the file. This is where teams waste time because the error message looks like “server problem” even though it’s business logic.

Validation rules that override infrastructure

Examples you’ll see:

  • Laravel validation: max: rules in kilobytes.
  • Symfony upload constraints set to a conservative default in one form type.
  • WordPress: UI “Maximum upload file size” depends on PHP values, but plugins can enforce stricter rules.
  • Custom apps: a config file caps file size “for safety” and nobody updated it.

Memory blowups during post-upload processing

Your upload limit might be 128M, but your app might then:

  • read the entire file into memory (file_get_contents on a 128M file is a statement)
  • transcode or resize images (temporary memory spikes)
  • hash the file in PHP userland

That’s when the real error is 500, “Allowed memory size exhausted,” or a worker dying. The fix is not “increase upload_max_filesize again.” The fix is streaming, chunking, or moving processing out of the request path.
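A minimal sketch of the streaming alternative, assuming a hypothetical form field named "file" and a made-up storage path; hash_file() and stream_copy_to_stream() both read in fixed-size chunks, so peak memory stays flat regardless of file size:

```php
<?php
// Hash without loading the file into memory: hash_file() streams internally.
$sum = hash_file('sha256', $_FILES['file']['tmp_name']);

// Move to final storage with a chunked copy, also constant memory.
// /var/app/storage is a hypothetical destination.
$in  = fopen($_FILES['file']['tmp_name'], 'rb');
$out = fopen('/var/app/storage/' . $sum, 'wb');
stream_copy_to_stream($in, $out);
fclose($in);
fclose($out);

// The anti-pattern, for contrast: this allocates the whole file in RAM.
// $data = file_get_contents($_FILES['file']['tmp_name']);
```

In a real app you’d validate with is_uploaded_file() first and handle fopen failures; the point here is only that post-upload processing can stream instead of slurp.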

Three corporate mini-stories from the upload mines

1) Incident caused by a wrong assumption: “We changed php.ini, so it’s fixed”

A mid-sized internal platform needed to accept larger invoice attachments. The team updated /etc/php/8.3/cli/php.ini on a couple of servers, tested with a CLI script that parsed files, and called it done. The next day, the web UI still refused anything over 10 MB.

Operations got paged because the error rate spiked. The app logs showed nothing useful. The incident channel filled with the usual noise: “Maybe it’s the database,” “Maybe it’s the storage,” “Maybe the load balancer is unhappy.” Nobody wanted to say the obvious: the change might not have applied to the runtime that matters.

The actual issue was twofold. First, PHP-FPM was loading /etc/php/8.3/fpm/php.ini, which hadn’t been touched. Second, the FPM pool had php_admin_value[post_max_size] set to 10M from an old “protect the host” hardening sprint.

The fix took minutes once someone stopped guessing: confirm the SAPI, grep pool overrides, raise the limits, restart FPM, then verify with a web-served endpoint. The lesson wasn’t “be careful.” It was “validate changes at the layer users hit, not the layer you prefer to test.”

2) Optimization that backfired: “Let’s buffer less to improve performance”

A team running Nginx in front of PHP-FPM tried to reduce disk I/O by changing request buffering behavior. Their thought process was reasonable: large uploads were writing to disk, then PHP would read them again. Double the I/O, half the fun.

They changed buffering settings and some timeouts to “speed things up.” In testing on a fast LAN, everything looked great. In production, users on slower connections saw intermittent failures: uploads would die around the 30–60 second mark. Meanwhile, the backend saw spikes of half-open connections and FPM workers stuck waiting.

The optimization backfired because it shifted pressure upstream. Instead of safely buffering bodies (with predictable resource usage), the origin held open connections longer. Timeouts became the limiting factor, and the concurrency model got ugly: fewer completed requests, more in-flight work, and a feedback loop where retries made it worse.

The fix was boring: re-enable sane buffering for upload endpoints, set explicit size caps, align timeouts to real user conditions, and instrument “request aborted” counters. They still improved performance later, but by changing the architecture (direct-to-object-storage uploads) rather than trying to outsmart physics at the proxy.

3) Boring but correct practice that saved the day: “Prove it with a local reproduction and layer-by-layer checks”

A regulated enterprise app started failing uploads after a routine Ubuntu 24.04 host rebuild. Same playbook, same Ansible, same configs—supposedly. The helpdesk reported “uploads over 5 MB fail,” which is not a diagnosis, it’s a distress signal.

The on-call engineer didn’t touch any config at first. They reproduced the issue with a local curl to 127.0.0.1 and saw success. Then they repeated the same request through the public hostname and got a 413. That one split the universe into two halves: origin works, edge rejects.

Next, they checked the ingress controller settings and found a default request size cap that had changed with a chart update. Nobody had pinned it, because “defaults are fine” until they aren’t. The engineer raised the cap to match the origin’s Nginx and PHP limits, deployed, and verified with the same curl command.

No heroics. No midnight refactor. Just a disciplined approach: reproduce locally, compare paths, identify the first rejecting layer. Boring is good. Boring scales.

Common mistakes (symptoms → root cause → fix)

1) Symptom: 413 “Request Entity Too Large” immediately

Root cause: Nginx/Apache/proxy/WAF rejected the request body before PHP.

Fix: Set request body caps at the rejecting layer (e.g., Nginx client_max_body_size). Verify with local vs public curl, and check web server error logs for “intended to send too large body.”

2) Symptom: “exceeds upload_max_filesize directive” in the browser/app

Root cause: PHP limit is lower than file size; often you changed the CLI ini, not FPM.

Fix: Confirm FPM ini path with php-fpm8.3 -i, update upload_max_filesize and post_max_size, restart FPM, validate via a web-served ini_get endpoint.

3) Symptom: Upload starts, then fails after some time with 504/timeout

Root cause: Timeouts in proxy/web server/FPM; slow clients exceed client_body_timeout or upstream timeouts.

Fix: Align timeouts across layers for expected upload durations; consider dedicated upload endpoints with longer timeouts, or direct-to-object-storage uploads.

4) Symptom: Random failures under load; small files work, big ones sometimes work

Root cause: Temp filesystem pressure (tmpfs full), inode exhaustion, or disk I/O contention. PHP writes temp files first.

Fix: Monitor /tmp usage, set upload_tmp_dir to a roomy filesystem, and keep concurrency in mind (N concurrent uploads * size).

5) Symptom: PHP logs show POST Content-Length exceeds limit even though upload_max_filesize is high

Root cause: post_max_size is lower than the total multipart request size.

Fix: Set post_max_size higher than upload_max_filesize; leave headroom for multipart overhead and other fields.

6) Symptom: 500 errors when uploading large images/videos

Root cause: App post-processing hits memory_limit or execution time. Upload succeeded; processing failed.

Fix: Stream processing, queue heavy work, increase memory where justified, and cap sizes in the app to match processing capability.

7) Symptom: “It works on one server but not another”

Root cause: Per-pool overrides, different vhost config scope, missing include files, or a different proxy path.

Fix: Compare effective config outputs: nginx -T, FPM pool grep for php_admin_value, and a web-served ini dump.

Checklists / step-by-step plan

Step-by-step: raise upload limits safely on Ubuntu 24.04 (Nginx + PHP-FPM)

  1. Pick a target: e.g., max file 128M. Decide based on product needs, not vibes.
  2. Set PHP limits:
    • upload_max_filesize = 128M
    • post_max_size = 160M (headroom)
  3. Check pool overrides: remove or raise php_admin_value in the pool config.
  4. Set Nginx limit: client_max_body_size 160m; in the correct server/location.
  5. Check temp storage: ensure the upload temp dir can handle peak concurrent uploads.
  6. Restart services: restart PHP-FPM; reload Nginx after nginx -t.
  7. Verify from the web path: curl the app endpoint or a small diagnostic script.
  8. Verify from the public path: if you have an edge/proxy, test through it too.
  9. Instrument and log: keep an eye on 413 counts, upstream timeouts, and disk usage.
  10. Document the chosen limits: write down where they are set (ini, pool, vhost, proxy). Future-you is not psychic.

Checklist: when you should NOT raise limits

  • You can’t articulate who needs larger uploads and why.
  • You don’t have disk headroom for temp files.
  • Your app reads the whole file into memory and you don’t want to fix that yet.
  • You’re behind an edge with a hard cap you can’t change (you’ll just create inconsistent behavior).

Checklist: operational guardrails

  • Set upper bounds per endpoint. Not every POST should accept 160 MB.
  • Use separate server/location blocks for upload endpoints with tailored limits/timeouts.
  • Alert on 413 rate and temp filesystem utilization.
  • Track upload durations (p95/p99). Slow uploads are where timeout bugs breed.

FAQ

1) I changed upload_max_filesize but phpinfo() still shows the old value. Why?

You likely changed the CLI ini or the wrong FPM ini, or a pool override is winning. Verify with php-fpm8.3 -i and grep pool.d for php_admin_value.

2) Do I need to set both upload_max_filesize and post_max_size?

Yes. Multipart uploads are POST bodies. If post_max_size is smaller, PHP rejects the request before the file limit even matters.

3) What value should post_max_size be relative to upload_max_filesize?

Bigger. Give it headroom (10–25% is common) because multipart boundaries and additional fields increase total body size.

4) Why do I get 413 even though PHP is set correctly?

Because 413 is usually emitted by Nginx/Apache/proxy/WAF. PHP never sees the request. Check the rejecting layer’s logs and its request body size directive.

5) Is memory_limit required to be larger than upload_max_filesize?

Not strictly for the upload itself, because PHP stores uploads as temp files. But your application may read/process the file in memory. If it does, memory becomes the real limit.

6) Why do uploads fail only for some users?

Slow connections trigger timeouts; corporate proxies may have their own size caps; and mobile networks are great at turning “should be fine” into “why is this flaky.” Align timeouts and test with throttled networks.

7) Can I set upload limits per site on the same server?

Yes. Use separate PHP-FPM pools with per-pool php_admin_value or separate Nginx server blocks with different client_max_body_size. Be explicit and document it.

8) Should I rely on .user.ini to set upload limits?

Only if you need per-directory control and you accept the operational ambiguity. Central configuration (FPM pool or ini) is easier to audit and less surprising during incident response.

9) My uploads fail with “No such file or directory” in PHP. What gives?

Often it’s the temp dir: missing, wrong permissions, or full. Set upload_tmp_dir to a directory writable by the FPM user and ensure there’s space.

10) Should we accept huge uploads through PHP at all?

Sometimes yes (internal tools, low volume). For high-volume or very large files, consider direct uploads to object storage with signed URLs, then process asynchronously.

Next steps you can actually do today

  1. Measure the failure: reproduce with curl locally and through the public hostname. Identify whether the origin sees the request.
  2. Lock down the effective config: confirm FPM values via a web endpoint, and confirm Nginx config via nginx -T.
  3. Set aligned limits: Nginx/proxy cap ≥ PHP post_max_size > PHP upload_max_filesize.
  4. Fix temp storage: stop writing large temp files to a nearly-full tmpfs unless you enjoy intermittent failures.
  5. Decide on architecture: if uploads are business-critical and large, plan a direct-to-storage pattern and keep PHP out of the hot path.

If you do only one thing: stop guessing which ini file is active, and prove it from the web-served runtime. Upload bugs thrive in ambiguity. Kill the ambiguity.
