You click Add New in the Media Library. The spinner spins. Then WordPress shrugs: “HTTP error,” “Unable to create directory,” or a silent fail where nothing happens but your blood pressure rises.
This is one of those problems that feels like “WordPress being WordPress” until you remember it’s actually a stack: browser → CDN/WAF → web server → PHP-FPM → filesystem → image libraries → database. The trick is to find the first layer that’s lying to you.
Fast diagnosis playbook (do this in order)
1) Confirm the exact error at the edge and at the origin
- Browser devtools Network tab: look for 413, 403, 500, 502, 504.
- Origin logs: Nginx/Apache + PHP-FPM logs at the same timestamp.
- WordPress’s message is frequently vague. Your logs aren’t.
2) Prove the uploads directory is writable by the runtime user
- Check ownership and mode of wp-content/uploads.
- Try creating a file as the web server user. Don’t guess.
3) Check size limits across the chain (smallest wins)
- PHP: upload_max_filesize, post_max_size, memory_limit, max_execution_time.
- Web server: Nginx client_max_body_size; Apache LimitRequestBody.
- Proxies/CDN/WAF: 413, request body limits, security rules.
4) Check temp directory health
- /tmp permissions, space, inode availability, mount options.
- PHP’s upload_tmp_dir and session paths.
5) Check image processing libraries
- Missing Imagick/GD can turn valid uploads into “HTTP error” during thumbnail generation.
- Look for fatal errors in PHP logs.
6) Security enforcement (only after basics)
- SELinux/AppArmor denies can look like permissions issues, but chmod 777 won’t fix policy.
- ModSecurity can block multipart uploads with generic rules.
Rule of thumb: if uploads fail instantly, suspect limits/WAF. If the upload grinds for a while and then fails, suspect PHP processing, temp space, libraries, or timeouts.
What’s really happening when uploads fail
WordPress uploads aren’t just “copy file to disk.” A typical path looks like this:
- Browser submits a multipart POST to /wp-admin/async-upload.php (or to REST endpoints, depending on version and editor path).
- Edge/CDN/WAF inspects request size and content type, and may terminate early.
- Web server accepts request body; may enforce its own max body size and timeouts.
- PHP receives the file into a temp location ($_FILES), then moves it to wp-content/uploads/YYYY/MM (you can watch this staging step live; see below).
- WordPress creates attachment metadata, generates thumbnails, and may run Imagick/GD operations.
- Database gets rows for attachment post + metadata; filesystem gets the original and derived sizes.
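If you want to watch that staging-and-move behavior live, here is a minimal sketch using inotify-tools (an assumption: the package is installed, and the paths match the example host used in the tasks below):
cr0x@server:~$ sudo inotifywait -m -e create -e moved_to /tmp /var/www/site/wp-content/uploads/2025/12
During a healthy upload you should see a php* temp file appear under /tmp, then the final filename land in the month directory. If the first event happens and the second never does, the transfer succeeded and the failure is in the move or in post-processing.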
That’s a lot of moving parts. When one breaks, WordPress often reports it like a fortune cookie: vague but technically not wrong.
Two practical observations:
- If the file never lands in uploads, look earlier: limits, permissions, temp, WAF.
- If the original lands but thumbnails don’t, look at Imagick/GD, memory limits, or timeouts.
One paraphrased idea (reliability mindset): Eugene Wigner’s “unreasonable effectiveness” applies to ops too: instrumentation makes messy systems suddenly understandable.
Joke #1: The Media Library error message is like a pager alert that says “something happened.” Thanks, WordPress. Very actionable.
Interesting facts & context (the stuff that explains the weirdness)
- PHP’s upload pipeline is old-school: uploads are staged to a temp file first, then moved. Temp space is a real dependency, not an implementation detail.
- “HTTP error” became a catch-all: historically, WordPress sometimes surfaced generic upload failures when the real exception was in thumbnail generation, not the HTTP layer.
- Imagick’s defaults differ by distro: ImageMagick policy files can restrict memory, disk, or certain formats; the same WordPress install can behave differently across hosts.
- GD vs Imagick isn’t just preference: GD can be faster for small operations, Imagick often handles more formats and better quality but can be heavier and more sensitive to policy limits.
- Nginx’s request size limit is explicit: unlike some systems, Nginx’s client_max_body_size is a hard gate that can fail fast with 413.
- Apache can hide the limit in many places: LimitRequestBody may live in virtual host, directory context, or global config; “but I changed it” is not proof.
- Cloud and container filesystems change failure modes: overlay filesystems and ephemeral disks make “uploads work until redeploy” a surprisingly common incident pattern.
- EXIF orientation changed the game: some workflows rotate images during processing; missing EXIF support can lead to “works on my laptop” inconsistencies.
Practical tasks: commands, outputs, decisions
These are “do this now” tasks. Each one includes: a command, realistic output, and the decision you make. Run them on the WordPress host (or container) that actually handles PHP.
Task 1: Identify the web server and PHP runtime user
cr0x@server:~$ ps -eo user,comm | egrep 'nginx|apache2|httpd|php-fpm' | head
root nginx
www-data nginx
root php-fpm8.2
www-data php-fpm8.2
What it means: Nginx and PHP-FPM are running, and the worker user is www-data.
Decision: all filesystem write tests should be performed as www-data. If this is a cPanel host you might see nobody or per-vhost users; don’t assume.
Task 2: Confirm WordPress path and locate uploads directory
cr0x@server:~$ sudo find /var/www -maxdepth 4 -type f -name wp-config.php 2>/dev/null
/var/www/site/wp-config.php
What it means: WordPress root is likely /var/www/site.
Decision: focus checks on /var/www/site/wp-content/uploads unless a custom UPLOADS constant is set.
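If WP-CLI is available on the host (an assumption, not a given), you can also ask WordPress itself where it intends to write, which catches a custom UPLOADS constant or an upload_dir filter:
cr0x@server:~$ sudo -u www-data wp eval 'print_r(wp_upload_dir());' --path=/var/www/site
The path and basedir keys are the directories every later check should target; if the error key is non-empty, WordPress is already telling you it cannot create the directory.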
Task 3: Check ownership and permissions on uploads
cr0x@server:~$ ls -ld /var/www/site/wp-content/uploads
drwxr-xr-x 12 root root 4096 Dec 27 09:40 /var/www/site/wp-content/uploads
What it means: uploads is owned by root:root. That’s a classic “it worked during deploy as root, then fails in production” setup.
Decision: change ownership to the runtime user/group (or to a deploy group that includes it). Avoid 777 as a “fix.”
Task 4: Prove writability by creating a file as the runtime user
cr0x@server:~$ sudo -u www-data bash -lc 'touch /var/www/site/wp-content/uploads/.writetest && echo ok'
touch: cannot touch '/var/www/site/wp-content/uploads/.writetest': Permission denied
What it means: PHP can’t write there either.
Decision: fix ownership/ACLs; don’t touch PHP limits yet.
Task 5: Fix ownership safely (one-liner, then re-test)
cr0x@server:~$ sudo chown -R www-data:www-data /var/www/site/wp-content/uploads
cr0x@server:~$ sudo -u www-data bash -lc 'touch /var/www/site/wp-content/uploads/.writetest && echo ok'
ok
What it means: runtime can write now.
Decision: retry upload in WordPress. If still failing, move to limits/logs.
Task 6: Check disk space (yes, really)
cr0x@server:~$ df -h /var/www/site/wp-content/uploads /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p2 40G 39G 220M 99% /
tmpfs 2.0G 1.9G 120M 95% /tmp
What it means: both root filesystem and /tmp are nearly full. Uploads may fail to stage or finalize.
Decision: free space immediately or expand volumes; also clean temp build-ups. If you only fix uploads dir perms, it will still fail intermittently.
Task 7: Check inode exhaustion (the “disk is full but df lies” cousin)
cr0x@server:~$ df -i /var/www/site/wp-content/uploads /tmp
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p2 2621440 2621201 239 100% /
tmpfs 524288 523901 387 100% /tmp
What it means: you ran out of inodes. Small-file storms (cache, sessions, temp files) cause this.
Decision: clean up high-churn directories (cache/session/tmp). Long term: move caches off root FS, adjust retention, or resize filesystem with more inodes.
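To find where the inodes went, GNU du can count them per directory (requires coreutils 8.22 or newer; the starting path is an example):
cr0x@server:~$ sudo du --inodes -x --max-depth=1 / 2>/dev/null | sort -n | tail -n 5
The biggest counts are usually cache, session, or temp trees; clean those (or move them to their own filesystem) before touching anything WordPress-specific.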
Task 8: Inspect PHP limits actually in effect
cr0x@server:~$ php -i | egrep 'upload_max_filesize|post_max_size|memory_limit|max_execution_time|upload_tmp_dir'
upload_max_filesize => 2M => 2M
post_max_size => 8M => 8M
memory_limit => 128M => 128M
max_execution_time => 30 => 30
upload_tmp_dir => no value => no value
What it means: uploads above 2 MB will fail; also WordPress often needs more memory to process large JPEGs, and default temp dir is used.
Decision: set upload_max_filesize and post_max_size to match your needs, and consider increasing memory_limit if image processing fails.
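One low-risk way to apply that decision is a dedicated override file; a sketch assuming a Debian-style PHP 8.2 FPM layout (file name and values are examples, so size them for your actual content):
cr0x@server:~$ sudo tee /etc/php/8.2/fpm/conf.d/90-uploads.ini <<'EOF'
upload_max_filesize = 64M
post_max_size = 80M
memory_limit = 256M
max_execution_time = 120
EOF
cr0x@server:~$ sudo systemctl restart php8.2-fpm
Keep post_max_size larger than upload_max_filesize to cover multipart overhead, and remember Task 11: a php_admin_value set at the pool level still wins over anything in conf.d.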
Task 9: Verify Nginx request size limit
cr0x@server:~$ sudo nginx -T 2>/dev/null | egrep -n 'client_max_body_size'
45: client_max_body_size 2m;
What it means: Nginx blocks bodies over 2 MB before PHP sees them.
Decision: raise client_max_body_size in the correct context (http/server/location) and reload Nginx.
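A sketch of the change itself, with 64m as an example value that should agree with the PHP limits above (put it in the server block that actually serves the site, not the distro default):
server {
    server_name site.example;
    client_max_body_size 64m;   # body-size gate; must cover your largest expected upload
    # ... rest of the block unchanged ...
}
You can also scope it to the location block that proxies to PHP-FPM; either way, validate and reload as in Task 10.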
Task 10: Reload Nginx and confirm configuration is valid
cr0x@server:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
cr0x@server:~$ sudo systemctl reload nginx
What it means: changes are live without dropping connections.
Decision: retry upload with a file slightly above previous limit to confirm the gate moved.
Task 11: Check PHP-FPM pool config for per-pool overrides
cr0x@server:~$ sudo egrep -R 'php_admin_value\[upload_max_filesize\]|php_admin_value\[post_max_size\]|php_admin_value\[memory_limit\]' /etc/php/8.2/fpm/pool.d
/etc/php/8.2/fpm/pool.d/www.conf:php_admin_value[upload_max_filesize] = 2M
/etc/php/8.2/fpm/pool.d/www.conf:php_admin_value[post_max_size] = 8M
What it means: you may edit php.ini forever and nothing changes because the pool overrides it.
Decision: update the pool settings and restart PHP-FPM.
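The corrected pool entries might look like this (Debian-style path; example values chosen to match the php.ini and Nginx limits above):
cr0x@server:~$ sudo egrep 'upload_max_filesize|post_max_size' /etc/php/8.2/fpm/pool.d/www.conf
php_admin_value[upload_max_filesize] = 64M
php_admin_value[post_max_size] = 80M
php_admin_value cannot be overridden by ini_set() or .user.ini, which is exactly why editing php.ini alone changed nothing; restart PHP-FPM afterwards (Task 12).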
Task 12: Restart PHP-FPM and watch for errors
cr0x@server:~$ sudo systemctl restart php8.2-fpm
cr0x@server:~$ sudo systemctl status php8.2-fpm --no-pager -l | sed -n '1,12p'
● php8.2-fpm.service - The PHP 8.2 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php8.2-fpm.service; enabled)
Active: active (running) since Fri 2025-12-27 09:52:21 UTC; 2s ago
Docs: man:php-fpm8.2(8)
What it means: service is healthy post-change.
Decision: if it fails to start, your config change is wrong; revert and re-apply carefully.
Task 13: Tail web server error logs while reproducing the issue
cr0x@server:~$ sudo tail -f /var/log/nginx/error.log
2025/12/27 09:54:10 [error] 1241#1241: *392 client intended to send too large body: 7340032 bytes, client: 203.0.113.10, server: site.example, request: "POST /wp-admin/async-upload.php HTTP/1.1", host: "site.example"
What it means: it’s definitively Nginx request size, not WordPress.
Decision: increase client_max_body_size and confirm you edited the correct server block (not a default you don’t use).
Task 14: Tail PHP-FPM logs for fatal processing failures
cr0x@server:~$ sudo tail -n 30 /var/log/php8.2-fpm.log
[27-Dec-2025 09:55:37] WARNING: [pool www] child 2219 said into stderr: "PHP Fatal error: Uncaught Error: Call to undefined function imagecreatefromjpeg() in /var/www/site/wp-includes/class-wp-image-editor-gd.php:92"
What it means: GD functions are missing; thumbnail generation fails after upload, often shown as “HTTP error.”
Decision: install/enable the relevant PHP extension (like php-gd) and restart PHP-FPM.
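The install step is distro-specific; on the Debian/Ubuntu host used in these examples it would be roughly as follows (RHEL-family systems use dnf install php-gd and a different service name):
cr0x@server:~$ sudo apt-get install -y php8.2-gd
cr0x@server:~$ sudo systemctl restart php8.2-fpm
Then retry an upload while tailing the same log; the fatal error should be gone.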
Task 15: Verify installed PHP extensions (GD, Imagick)
cr0x@server:~$ php -m | egrep -i 'gd|imagick|exif'
exif
What it means: EXIF is present; GD and Imagick aren’t.
Decision: install one image library (GD is simplest; Imagick is powerful but needs ImageMagick too). Don’t run without either.
Task 16: Check SELinux enforcement and recent denies
cr0x@server:~$ getenforce
Enforcing
cr0x@server:~$ sudo ausearch -m avc -ts recent | tail -n 3
type=AVC msg=audit(1735293382.612:1187): avc: denied { write } for pid=2311 comm="php-fpm" name="uploads" dev="dm-0" ino=420112 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=dir permissive=0
What it means: SELinux blocked write access; permissions might be fine, labels aren’t.
Decision: fix contexts (e.g., set proper httpd_sys_rw_content_t on uploads) instead of weakening SELinux.
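A sketch of the label fix, assuming the path from the earlier tasks and that semanage is available (on RHEL-family systems it ships in policycoreutils-python-utils):
cr0x@server:~$ sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/site/wp-content/uploads(/.*)?'
cr0x@server:~$ sudo restorecon -Rv /var/www/site/wp-content/uploads
semanage records the rule so it survives relabels; restorecon applies it now. Re-run the write test from Task 4 to confirm.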
Permissions and ownership: the #1 cause
If you run one check, make it this: can the PHP runtime user create files in wp-content/uploads?
What “correct” looks like
- Uploads directory owned by the web runtime user/group (www-data on Debian/Ubuntu; apache on many RHEL-based systems), or writable via group ownership or ACLs (see the sketch below).
- Directory permissions typically 0755 or 0775; files typically 0644.
- If you use a deploy user and keep runtime read-only for code, then uploads should be a separate writable path with explicit permissions.
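Where a separate deploy user owns the tree, ACLs keep both users happy without resorting to 777. A minimal sketch, assuming www-data is the runtime user and the path from the tasks above:
cr0x@server:~$ sudo setfacl -R -m u:www-data:rwX /var/www/site/wp-content/uploads
cr0x@server:~$ sudo find /var/www/site/wp-content/uploads -type d -exec setfacl -d -m u:www-data:rwX {} +
The second command sets default ACLs on directories so new YYYY/MM subdirectories inherit write access; the capital X grants execute only where it makes sense (directories).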
What not to do
Don’t “fix” it with chmod -R 777 wp-content/uploads. You’re trading an outage for a security incident. In multi-tenant hosting, it’s a gift-wrapped lateral movement path.
When ownership flips back to root
This happens when:
- Deployment scripts run as root and copy an uploads directory tree with root ownership.
- Backups restore with wrong uid/gid mapping (especially across hosts).
- Containers rebuild volumes and change permissions on mount.
Policy recommendation: treat uploads as persistent data, not code. Manage it like data. Back it up, mount it, label it, monitor it.
Size limits: PHP, web server, proxies, and CDNs
Uploads fail at the smallest limit in the chain. You only need one miserly component to ruin everyone’s day.
PHP limits that matter
- upload_max_filesize: max size of a single uploaded file.
- post_max_size: max size of the entire POST body (must be >= upload_max_filesize, and realistically larger to cover multipart overhead).
- memory_limit: image processing can require multiples of the compressed file size; a 12 MB JPEG at 6000×4000 pixels needs roughly 96 MB just for uncompressed pixel data (24 million pixels × 4 bytes).
- max_execution_time: slow storage or CPU-bound transforms can hit timeouts.
- max_input_time: can matter for slow uploads.
Nginx and Apache gates
- Nginx client_max_body_size is the most common 413 cause.
- Apache LimitRequestBody and proxy settings can enforce limits; also check timeouts when PHP is behind a proxy module.
Proxies and CDNs
Even if your origin is perfect, a CDN/WAF can reject large POSTs or multipart bodies, or enforce plan-specific limits. If uploads fail only through the public hostname but succeed when hitting origin directly (test from a private network), your edge is the culprit.
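One way to test that isolation (a sketch: 198.51.100.7 stands in for your real origin IP, and without an authenticated session you will not get a successful upload, but a 413 on one path and not the other still tells you which layer enforces the limit):
cr0x@server:~$ head -c 8M /dev/urandom > /tmp/body-test.bin
cr0x@server:~$ curl -sk -o /dev/null -w '%{http_code}\n' --resolve site.example:443:198.51.100.7 -F 'async-upload=@/tmp/body-test.bin' https://site.example/wp-admin/async-upload.php
cr0x@server:~$ curl -sk -o /dev/null -w '%{http_code}\n' -F 'async-upload=@/tmp/body-test.bin' https://site.example/wp-admin/async-upload.php
The first request pins DNS to the origin; the second goes through the public edge. Different status codes for the same body size point straight at the CDN/WAF.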
Joke #2: Somewhere in the stack there’s always a 2MB limit from 2009, quietly living its best life.
Temp directories and inode/disk exhaustion
Uploads go to a temp directory first. If temp is full, mis-permissioned, mounted noexec in weird ways, or is an undersized tmpfs, you’ll see failures that look like permissions errors, HTTP errors, or timeouts.
Typical failure signatures
- Small files succeed, large files fail: temp space (or limit) is tight.
- Intermittent failures during peak traffic: temp inode exhaustion, session path, or concurrency spikes.
- Uploads succeed but processing fails: temp is fine; image libraries or memory are not.
Practical guidance
- Don’t mount /tmp as a tiny tmpfs unless you’ve sized it for uploads and bursts (see the example below).
- If you’re on containers, be explicit: mount a persistent volume for uploads and a sufficiently sized scratch space for temp.
- Watch inode usage. It’s the silent killer on cache-heavy sites.
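If you do run /tmp on tmpfs, size it explicitly instead of accepting the default. A sketch of an /etc/fstab entry (values are examples; derive them from real upload sizes and concurrency):
tmpfs  /tmp  tmpfs  size=4G,nr_inodes=1M,mode=1777  0  0
On systemd hosts that mount /tmp via tmp.mount, a drop-in override with the same options is the cleaner route. Either way, alert on df -h /tmp and df -i /tmp as in Tasks 6 and 7.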
Image libraries: Imagick vs GD, and why “HTTP error” is sometimes a lie
WordPress generally tries to create multiple derivative sizes on upload. That means it needs a working image editor backend. If neither Imagick nor GD is usable, uploads may “fail” after transfer, during metadata generation.
How to choose in production
- GD: simplest dependency chain, good baseline. Install the PHP extension and you’re usually done.
- Imagick: more capable, often better handling of some formats and operations, but requires ImageMagick plus policy and resource limits that can surprise you.
Common library failure modes
- Missing extension: GD/Imagick not installed for the PHP version actually serving requests.
- Wrong PHP SAPI: you installed php-gd for the CLI but FPM uses a different version; CLI tests pass, web fails.
- ImageMagick policy restrictions: operations denied, memory/disk restricted, format blocked.
- Memory exhaustion: fatal errors when creating large thumbnails.
When in doubt, tail PHP-FPM logs during a reproducible upload attempt. If you see fatal errors in image editor classes, stop blaming WordPress and fix the runtime.
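To see the modules of the PHP that actually serves requests rather than the CLI, you can ask the FPM binary itself; a sketch assuming the Debian-style binary from the earlier tasks (path and name vary by distro):
cr0x@server:~$ sudo /usr/sbin/php-fpm8.2 -m | egrep -i '^(gd|imagick|exif)$'
If gd and imagick are both missing here while the CLI shows them, you have the SAPI mismatch described above.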
Security layers: SELinux/AppArmor, WAF rules, ModSecurity
Security systems fail “correctly,” which means they fail in ways that look like your app is broken. That’s their job. Your job is to confirm it fast.
SELinux: permissions that aren’t permissions
On SELinux systems, file contexts matter as much as UNIX modes. You can set 0777 and still get denied. If you see AVC denies for uploads, fix labels on the content directory and keep SELinux enforcing.
AppArmor
AppArmor can restrict PHP-FPM or the web server from writing to specific paths. The symptom can be identical to wrong ownership. Check profiles and logs if you’re on Ubuntu with AppArmor policies enabled.
ModSecurity and WAF rules
Multipart requests are a favorite target of generic rules. False positives happen. If you get 403 on uploads but normal admin pages work, check WAF logs for blocked rules on async-upload.php or REST endpoints.
Common mistakes: symptom → root cause → fix
“Unable to create directory wp-content/uploads/2025/12. Is its parent directory writable by the server?”
- Root cause: wrong ownership on wp-content/uploads, missing execute bit on a parent directory, or SELinux denial.
- Fix: set correct ownership/mode; verify by writing as the runtime user; on SELinux, set the proper context on uploads.
“HTTP error” immediately on upload
- Root cause: 413 request too large from Nginx/Apache/CDN, or WAF rejection.
- Fix: raise request body limits end-to-end; confirm with logs; bypass edge temporarily to isolate.
Upload completes but image never appears in Media Library
- Root cause: PHP fatal during metadata generation; database write failure; permissions prevent final move from temp.
- Fix: tail PHP logs; check database connectivity; verify temp dir and uploads dir; check PHP extensions.
Only large images fail, small images succeed
- Root cause: upload_max_filesize, post_max_size, client_max_body_size, or temp space/memory limits.
- Fix: increase limits; verify temp disk; raise the memory limit for processing.
Uploads work on one server but not another (same code)
- Root cause: missing GD/Imagick, different ImageMagick policy, different SELinux/AppArmor posture, or different proxy config.
- Fix: compare php -m output, PHP ini values, Nginx/Apache config dumps, and security enforcement settings.
Uploads sometimes fail after deploy
- Root cause: deploy step resets uploads permissions or swaps volume mounts; containers lose ephemeral uploads.
- Fix: treat uploads as persistent volume; enforce ownership in deploy; add a health check that writes to uploads.
Uploads fail with 500 or 502 after a long wait
- Root cause: PHP-FPM timeout, upstream timeout, memory exhaustion, slow storage, or heavy Imagick operations.
- Fix: check upstream timeouts; raise PHP execution time; profile IO; adjust image sizes; consider offloading image processing.
Checklists / step-by-step plan
Checklist A: “I need this fixed in 10 minutes”
- Reproduce once with devtools open. Note status code and response.
- Tail Nginx/Apache error logs; tail PHP-FPM logs at the same time.
- Check wp-content/uploads ownership and run a write test as the runtime user.
- Check disk and inode usage for uploads and temp.
- Check active PHP + web server body limits.
- Confirm GD or Imagick is installed for the serving PHP version.
- If SELinux is enforcing, check AVC denies before doing anything reckless.
Checklist B: “Make it not happen again”
- Monitor: disk usage, inode usage, and /tmp utilization, with alerting.
- Deploy hygiene: ensure your pipeline never changes ownership of persistent uploads unexpectedly.
- Config as code: keep PHP-FPM pool overrides and Nginx limits in version control.
- Library parity: standardize extension sets across environments; bake them into images.
- Edge policy: document and test WAF/CDN body size limits; include upload endpoints in allowlists as appropriate.
- Operational test: add a synthetic check that uploads a small image and verifies derivatives are created (in a safe test path).
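A minimal sketch of that synthetic check, assuming WP-CLI is installed and using the paths from the earlier tasks (the test image path is a placeholder):
cr0x@server:~$ ID=$(sudo -u www-data wp media import /opt/healthcheck/test-image.jpg --path=/var/www/site --porcelain)
cr0x@server:~$ sudo -u www-data wp post delete "$ID" --force --path=/var/www/site
Fail the deploy if either command exits non-zero; the import exercises the temp directory, the uploads path, and thumbnail generation in one shot.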
Checklist C: storage-engineer sanity checks
- Confirm uploads live on durable storage, not ephemeral root in containers.
- Confirm filesystem supports your workload: lots of small files, frequent directory creation, occasional large writes.
- Confirm backup/restore retains uid/gid mapping (or you have a post-restore fixup).
- Confirm IO latency under load; slow disks cause timeouts that look like “WordPress errors.”
Three corporate mini-stories from production life
1) The incident caused by a wrong assumption
The migration looked straightforward: move a WordPress site from an aging VM to a newer host, keep the same directory layout, restore a backup, switch DNS. The team assumed that if the pages loaded, the hard part was done.
Within minutes, marketing tried to upload product images for a launch. Uploads failed with “Unable to create directory.” Someone did the usual ritual: restart PHP-FPM, clear caches, re-save permalinks. Nothing. The site “worked,” so everyone assumed the problem was WordPress.
The real problem was dull: the backup restore had been performed as root, creating wp-content/uploads as root:root. On the old host, a different setup had masked it with permissive ACLs. On the new host, the runtime was www-data and it could read but not write.
It took longer than it should have because the first response was to tune PHP limits. That’s the wrong assumption in action: “uploads failing means size.” The fix was a one-line chown, followed by adding a post-restore step that explicitly sets ownership on writable directories and verifies it by writing as the runtime user.
The lesson that stuck: don’t treat “site loads” as a pass condition. For WordPress, “can upload media” is a core production capability, not a nice-to-have.
2) The optimization that backfired
A performance-minded team wanted faster everything. They moved /tmp to tmpfs to reduce disk IO and speed up PHP sessions and uploads. On paper it was elegant: memory is faster than disk, and these are temporary files anyway.
For a while, it looked great. Page loads stabilized, and the server’s disk latency graphs looked cleaner. Then a content team uploaded a batch of high-resolution images—exactly the kind of thing that happens when someone’s preparing a campaign and has a deadline and no patience.
Uploads started failing randomly. Not all of them. Just enough to be infuriating. PHP logs showed sporadic “failed to write file to disk,” while the OS looked “fine” at a glance because overall disk had space.
The backfire was simple: the tmpfs was sized conservatively, and concurrent uploads plus thumbnail generation created bursts of temp usage. Tmpfs filled, inodes ran out, and PHP’s upload staging failed. The fix wasn’t to abandon tmpfs; it was to size it with headroom, monitor it, and move bulky temp workflows to a dedicated scratch volume when needed.
Optimizations that remove bottlenecks are great. Optimizations that create hidden bottlenecks are how you end up explaining to leadership why “faster” made the site unusable.
3) The boring but correct practice that saved the day
A different organization ran WordPress at scale with frequent content updates. They were not exciting people. They were the kind of people who label cables and enjoy change windows. Wonderful.
They had a standard “uploads health check” in their deployment pipeline. It did two things: verified the runtime user could write to the uploads path, and executed a small upload-through-PHP test that confirmed metadata generation ran without fatal errors. It took seconds.
One day, an OS patch introduced a subtle change: PHP-FPM restarted and loaded a different ini directory order, effectively dropping the GD extension in that environment. The site still served pages. Admin logins worked. But image uploads would have failed when the content team arrived.
The pipeline caught it immediately and blocked the deployment. Engineers installed the missing extension for the correct PHP version, restarted the service, and re-ran the health check. No outage. No scramble. No emergency “chmod 777.”
That’s the magic of boring correctness: it doesn’t prevent every failure, but it prevents the humiliating ones.
FAQ
1) Why does WordPress say “HTTP error” when the file permissions are wrong?
Because the upload endpoint can fail at different stages and WordPress sometimes collapses them into a generic message. Check web server and PHP logs for the real failure.
2) I increased upload_max_filesize but uploads still fail. What did I miss?
Usually one of: post_max_size is smaller, Nginx client_max_body_size is smaller, a PHP-FPM pool override is forcing smaller values, or the CDN/WAF has a lower cap.
3) Do I need both GD and Imagick?
No. You need at least one working image editor backend. GD is simpler to keep consistent. Imagick is fine if you manage ImageMagick policy and resource limits.
4) Upload works but thumbnails don’t generate. What should I check?
Image libraries and memory/time limits. Tail PHP-FPM logs during an upload. Missing gd/imagick or memory exhaustion is common.
5) Can a “disk full” issue present as a permissions error?
Yes. PHP may fail to write temp files and WordPress may report it poorly. Check df -h and df -i for uploads and temp paths.
6) What’s the safest permission model for uploads?
Keep code read-only for runtime if you can, and make wp-content/uploads writable by the runtime user/group (often via group ownership and 0775, or ACLs). Avoid world-writable.
7) Why does it only fail through the public domain, but works when I test locally?
Your edge layer (CDN/WAF/load balancer) is imposing limits or blocking multipart requests. Compare status codes and logs. Test by bypassing the edge to isolate.
8) How do I tell if SELinux is the problem?
If SELinux is enforcing and you see AVC denies involving php-fpm or the web server writing to uploads, it’s SELinux. Fix contexts; don’t “solve” it by disabling SELinux.
9) Why do containerized WordPress uploads disappear after redeploy?
Because uploads are written to the container filesystem, which is ephemeral. Mount a persistent volume for wp-content/uploads and treat it as data.
Conclusion: next steps that prevent repeats
If your WordPress can’t upload images, the fastest route is disciplined skepticism. Ignore the UI message. Trust the logs. Prove writability. Then chase limits. Then chase libraries. Only after that do you blame security policy.
Do these next
- Add an uploads health check (write test + derivative generation check) to deployments.
- Standardize runtime config (PHP limits, Nginx/Apache limits, PHP-FPM pool overrides) and keep it in version control.
- Monitor storage like you mean it: disk space, inode usage, and /tmp utilization, with alerts.
- Pick a library strategy (GD baseline or Imagick with explicit policy) and make it consistent across environments.
- Document edge limits so “it works in staging” stops being a surprise when the CDN gets involved.
Uploads are a production-critical pipeline. Treat them like one, and the Media Library will stop gaslighting your on-call rotation.