“WordPress is slow” is not a diagnosis. It’s a complaint, like “the car feels weird.” Sometimes it’s a plugin doing 400 database queries per page. Sometimes it’s the server wheezing because disk I/O is pegged. Sometimes it’s your cache working perfectly… for anonymous users, while logged-in traffic crawls like it’s paying by the query.
This guide is how you stop guessing. We’ll walk the stack from the outside in—network, web server, PHP-FPM, database, storage, plugins—using commands you can run today, and decisions you can make based on what you see.
Fast diagnosis playbook
If you only have 20 minutes before someone starts “just trying random plugins,” do this. The goal is to answer one question: where is the time going? The second goal is to prove it with evidence, not vibes.
1) Measure TTFB and separate cache hits from misses
- Check one public page unauthenticated. Then a logged-in page. Compare TTFB and total time.
- If public pages are fast and logged-in is slow, don’t touch Nginx configs yet. It’s usually PHP, DB, or plugin behavior behind bypassed caches.
2) Check saturation on the host (CPU, memory pressure, disk I/O)
- CPU high with low iowait: too much PHP work or too many concurrent requests.
- iowait high: storage bottleneck or DB doing too much random I/O.
- Swap activity: you’re already late; fix memory pressure first.
3) Check PHP-FPM queueing and slow logs
- If FPM is maxing out workers or queueing requests, you’ll see slow TTFB and “it gets worse under load.”
- Turn on PHP-FPM slowlog temporarily. It’s the closest thing to a confession.
4) Check MySQL for slow queries and lock contention
- Slow query log + processlist + InnoDB status tell you if the database is the brake.
- If you see lots of “Sending data” or long-running queries on wp_options, you’re in WordPress-land, not kernel-land.
5) Identify which plugin/theme/code path is expensive
- Use WP-CLI to disable suspect plugins quickly on staging or during a controlled window.
- Correlate: request path → PHP stack trace (slowlog) → query fingerprints (slow query log).
That’s it. Five steps. If you follow them, you’ll stop arguing about “Apache vs Nginx” and start fixing the real bottleneck.
A few facts and history (useful, not trivia night)
- WordPress started in 2003 as a fork of b2/cafelog, built for a slower web where server-side rendering and page caching were the norm.
- The MySQL query cache used to be a common WordPress trick, but it was deprecated (5.7.20) and removed in MySQL 8.0 because it caused contention under concurrency.
- WP-Cron is not a real cron; it’s a “run scheduled jobs on page hits” mechanism. Under low traffic it’s unreliable; under high traffic it can be noisy.
- WooCommerce changed WordPress performance culture: it turned “a blog” into “a transactional system,” where latency is revenue and background jobs matter.
- PHP 7 was a step-change in WordPress performance compared to PHP 5.x. Many “WordPress is slow” stories from the old days are really “PHP was slow.”
- Autoloaded options in wp_options are loaded on every request. A bloated autoload set can punish every page, cached or not.
- HTTP/2 reduced the pain of many small assets, but it did nothing for a slow backend TTFB. People still confuse “loads slowly” with “downloads slowly.”
- Object caching (Redis/Memcached) helps when the same expensive queries happen repeatedly, but it can also mask an underlying query problem until the cache churns.
Define “slow” in numbers, not vibes
You can’t tune what you can’t describe. “Slow” needs at least three numbers:
- TTFB (Time To First Byte): backend time plus network latency to first response byte. This is where PHP, DB, and upstream waiting show up.
- Total load time: includes images, JS, CSS. Can be “fast backend, heavy frontend.” Different problem.
- Percentiles under load: p50 is where things are on a good day; p95 is where your customers start leaving; p99 is where your incident channel wakes up.
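If you don't have an APM handy, percentiles are easy to compute from raw per-request timings (access logs, a curl loop). A nearest-rank sketch in awk, using made-up sample latencies:

```shell
# nearest-rank percentiles over per-request times in seconds (sample data is invented)
printf '0.2\n0.3\n0.3\n0.4\n2.1\n' | sort -n | awk '
  { a[NR] = $1 }
  END { print "p50:", a[pct(NR, 0.50)]; print "p95:", a[pct(NR, 0.95)] }
  function pct(n, p,  i) { i = int(n * p); if (i < n * p) i++; if (i < 1) i = 1; return i }'
# -> p50: 0.3
# -> p95: 2.1
```

Note how one slow outlier dominates p95 in a five-sample set; real measurement needs hundreds of samples per window to mean anything.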
Also decide: are we debugging public anonymous traffic or logged-in/admin? WordPress caching is usually biased toward anonymous pages. Logged-in paths tend to bypass caches and hit the full stack every time.
Start outside: isolate TTFB, network, CDN, and cache
Before you SSH into anything, prove whether you have a backend problem or a frontend delivery problem. The easiest trap is optimizing the database when the CDN is misconfigured and every request is a cache miss. The second easiest trap is optimizing the CDN when the backend is actually spending 4 seconds generating HTML.
Cache reality check
If you use a CDN or reverse proxy (Cloudflare, Fastly, Varnish, Nginx cache), your first question is: am I getting cache hits? Look for cache headers. If none exist, you’re operating blind. Add them or configure them. A cache that can’t be observed is just a rumor.
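Which header names you'll see depends on the CDN or proxy in front of you (cf-cache-status for Cloudflare, x-cache or x-varnish elsewhere), so treat the pattern below as a starting point, not gospel:

```shell
# Real check -- replace example.com with your site:
#   curl -sI https://example.com/ | grep -iE '^(x-cache|cf-cache-status|x-varnish|age):'
# Demo against canned headers so the filter itself is visible:
printf 'HTTP/2 200\ncf-cache-status: HIT\nage: 512\ncontent-type: text/html\n' |
  grep -iE '^(x-cache|cf-cache-status|x-varnish|age):'
# -> cf-cache-status: HIT
# -> age: 512
```

A rising Age header on repeated requests is the cheapest possible proof that you're hitting cache rather than origin.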
Differentiate “slow first byte” from “slow download”
TTFB high means backend or upstream waiting. Download slow means payload size or client-side bottlenecks. If your TTFB is 3 seconds and the page is 200 KB, it’s not the user’s Wi‑Fi. It’s you.
Joke #1: If your TTFB is measured in seconds, congratulations—you’ve built a very small batch job and accidentally exposed it as a website.
Server bottlenecks: CPU, memory, disk I/O, and saturation
When WordPress gets slow, the server almost always tells you why. You just need to ask in the right order. SRE rule: measure saturation first, because saturation creates “random” symptoms everywhere else.
CPU: when PHP becomes a space heater
High CPU with low iowait usually means PHP is busy: heavy plugins, expensive templates, too many concurrent requests, or a thundering herd from cache misses.
But be careful: “CPU at 100%” on a small VM could simply mean you’re running a production store on a machine sized for a personal blog. I respect the optimism.
Memory pressure: the performance killer with good PR
Low free memory isn’t inherently bad; Linux uses memory for cache. The bad sign is swap activity or a rising rate of major page faults. If the box is swapping, your latency gets spiky. If it’s swapping under load, your site starts “randomly” timing out.
Disk I/O: iowait is the smell, not the fire
High iowait means the CPU is waiting on storage. WordPress can trigger I/O pain through:
- MySQL doing random reads because indexes are missing or buffer pool is too small.
- Many small file reads for PHP files on slow network storage.
- Logging at absurd rates (access logs, debug logs, slow logs on busy sites without rotation).
- Backups or virus scanners saturating disks.
Kernel and filesystem cache: fast until it isn’t
Linux page cache makes repeated reads fast—until memory pressure forces eviction. If performance falls off a cliff after a deployment or cache flush, and then slowly “warms up,” that can be filesystem cache behavior or application/object cache behavior. Different fixes, similar symptoms.
Web server and PHP-FPM: the usual suspects
In a modern WordPress stack, the request usually flows: Nginx/Apache → PHP-FPM → WordPress → MySQL → back. Your bottleneck often shows up as queueing somewhere along that chain.
Web server: keep it boring
Nginx and Apache can both serve WordPress well. Most “web server performance” issues are not about which one you chose; they’re about misconfiguration, lack of caching, or TLS overhead on underpowered hardware.
PHP-FPM: where queueing becomes user-visible
PHP-FPM has a simple failure mode: too many concurrent requests, not enough workers, and each worker is busy for too long. That creates a queue. The queue creates latency. Then the load balancer retries. Then you get more traffic. Then everyone learns new words.
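The queue math here is just Little's law: average concurrency equals arrival rate times average service time. When that product exceeds your worker count, requests wait. The numbers below are illustrative, not recommendations:

```shell
# Little's law sketch: busy workers = requests/sec * avg seconds per request
awk 'BEGIN { rps = 40; svc = 0.45; printf "busy workers needed: %.0f\n", rps * svc }'
# -> busy workers needed: 18
```

The useful corollary: halving per-request time (the slowlog's job) buys you exactly as much headroom as doubling workers, without the extra RAM.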
PHP-FPM’s slow log is your friend. It points to the code path. It also ends arguments. If the slow log says 70% of time is spent in a plugin’s API call, you don’t need a philosophical debate about opcache settings.
OPcache: the low-risk win
OPcache is not optional on production WordPress. Without it, you’re recompiling PHP scripts constantly. That’s not “dynamic.” That’s waste. Make sure it’s enabled, sized sanely, and not constantly restarting.
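A reasonable starting shape for the ini settings (the values are assumptions to tune for your codebase size and deploy style, not universal recommendations):

```ini
opcache.enable=1
opcache.memory_consumption=256      ; MB; raise if you see cache-full restarts
opcache.max_accelerated_files=20000 ; WP core + plugins easily exceed small defaults
opcache.validate_timestamps=1       ; set to 0 only if deploys explicitly reset opcache
opcache.revalidate_freq=60
```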
Database: MySQL/MariaDB bottlenecks and query pathologies
WordPress uses MySQL like a workhorse, and sometimes like a rented mule. The database is often the bottleneck because it’s shared, under-provisioned, or asked to do pathological queries.
Classic WordPress database pain points
- wp_options autoload bloat: huge autoloaded options mean every request drags a suitcase of data from the DB into PHP memory.
- Non-indexed meta queries: wp_postmeta and wp_usermeta can become query graveyards.
- WooCommerce order queries: complex joins and meta lookups can crush slow storage.
- Locking: long transactions or ALTER operations can block reads/writes and produce spikes.
Slow query log: the best “why is it slow” document you’ll ever generate
Enable slow query logging with a low threshold temporarily (say 0.5–1s) to capture the worst offenders. Then group by fingerprint (query shape), not exact text. The same query repeated is the real cost center.
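pt-query-digest (Percona Toolkit) is the standard tool for fingerprinting, but even a crude normalizer shows the idea: replace literals so identical query shapes collapse together. The sample queries below are invented:

```shell
# collapse string and numeric literals so identical query shapes group together
printf "%s\n" \
  "SELECT * FROM wp_postmeta WHERE meta_key = '_price' AND meta_value BETWEEN '10' AND '50';" \
  "SELECT * FROM wp_postmeta WHERE meta_key = '_price' AND meta_value BETWEEN '20' AND '90';" |
  sed -E "s/'[^']*'/'?'/g; s/\b[0-9]+\b/N/g" | sort | uniq -c | sort -rn
```

Two textually different queries become one fingerprint with a count of 2; on a real slow log, the top few counts are your cost centers.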
InnoDB buffer pool: the memory lever
If the buffer pool is tiny relative to the working set, MySQL becomes an I/O generator. On dedicated DB hosts, buffer pool is often set to ~60–75% of RAM. On shared hosts, you need to be conservative or MySQL will fight the OS and lose.
“Hope is not a strategy.” The line gets repeated endlessly in reliability engineering circles, and it applies directly here: measure the database, don’t hope about it.
WordPress and plugins: proving who’s guilty
Plugins are code you didn’t write, running with the same privileges as the code you did. Treat them like vendors with shell access: assume they will do something surprising, eventually.
Expensive patterns you can actually detect
- Too many queries per request: especially in admin screens and WooCommerce.
- External HTTP calls on page load: marketing pixels, license checks, API calls to “phone home.”
- Uncached template logic: loops that trigger N+1 queries.
- Autoloaded junk: storing big arrays in options with autoload=yes.
- Search and filtering: poorly indexed search, wildcard LIKE queries, meta queries without constraints.
Don’t “optimize” by adding more plugins
Caching plugins can help, but stacking them is a hobby, not an engineering plan. One page cache, one object cache, one CDN strategy. Pick deliberately. Observe. Iterate.
Joke #2: Every time you install “Ultimate Speed Booster Pro,” a slow query gets its wings.
WP-Cron and background work: the hidden tax
WP-Cron runs scheduled tasks when someone hits the site. That means:
- On low-traffic sites, tasks might not run on time.
- On high-traffic sites, tasks may run too often, overlapping and spiking CPU/DB.
If your site gets slow “every few minutes,” WP-Cron is a prime suspect. So are backups, imports, cache warmers, image optimization tasks, and newsletter sync jobs.
The grown-up move is to disable WP-Cron triggering on page loads and run it via system cron at a controlled cadence.
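A minimal version of that move, assuming a standard install path and WP-CLI on the box (both are assumptions; adjust to your layout):

```shell
# 1) In wp-config.php (PHP), stop page loads from triggering WP-Cron:
#      define('DISABLE_WP_CRON', true);
# 2) System crontab entry: run due events every 5 minutes, flock prevents overlap:
#      */5 * * * * flock -n /tmp/wp-cron.lock -c 'cd /var/www/html && wp cron event run --due-now' >/dev/null 2>&1
```

The flock -n means an overrunning batch is skipped rather than stacked, which converts "mystery CPU spikes" into at worst "one job ran late."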
Practical tasks (commands + what the output means + the decision you make)
These are production-style checks. Run them in order. Each one tells you what to do next. That’s the point.
Task 1: Measure TTFB and total time from the server edge
cr0x@server:~$ curl -s -o /dev/null -w "namelookup:%{time_namelookup}\nconnect:%{time_connect}\nstarttransfer:%{time_starttransfer}\ntotal:%{time_total}\n" https://example.com/
namelookup:0.004
connect:0.021
starttransfer:1.842
total:1.913
Meaning: starttransfer is basically TTFB (plus network). Here it’s ~1.84s, so the backend is slow.
Decision: If starttransfer is high but connect is low, stop blaming DNS/CDN. Move to server/PHP/DB.
Task 2: Compare anonymous vs logged-in/admin response time
cr0x@server:~$ curl -s -o /dev/null -b /tmp/wp-session-cookies.txt -w "TTFB:%{time_starttransfer} total:%{time_total}\n" https://example.com/wp-admin/
TTFB:3.912 total:4.105
Meaning: with a real login session in the cookie jar (export one first; an unauthenticated request would only time the redirect to wp-login.php), wp-admin is much slower. Caches likely bypassed; PHP/DB work is heavier.
Decision: Prioritize PHP-FPM and DB inspection; don’t waste time tuning static asset caching.
Task 3: Check system load, CPU, and iowait quickly
cr0x@server:~$ uptime
14:22:18 up 36 days, 2:03, 1 user, load average: 8.72, 7.91, 6.88
cr0x@server:~$ mpstat -P ALL 1 3
Linux 6.5.0 (server) 12/27/2025 _x86_64_ (4 CPU)
Average: CPU %usr %nice %sys %iowait %irq %soft %steal %idle
Average: all 62.11 0.00 7.42 18.93 0.00 0.37 0.00 11.17
Meaning: load average is high relative to 4 CPUs, and iowait is ~19%. Disk is contributing.
Decision: Check disk latency and MySQL I/O patterns next; don’t just increase PHP workers.
Task 4: See if memory pressure or swap is involved
cr0x@server:~$ free -h
total used free shared buff/cache available
Mem: 8.0Gi 6.7Gi 220Mi 210Mi 1.1Gi 680Mi
Swap: 2.0Gi 1.2Gi 820Mi
cr0x@server:~$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 1 1249280 225384 9120 988224 10 24 612 430 510 890 62 7 12 19 0
Meaning: swap is used and there’s ongoing swap in/out (si/so). That causes latency spikes.
Decision: Reduce memory pressure: right-size VM, reduce PHP-FPM max children, fix MySQL memory, or add RAM before micro-optimizations.
Task 5: Identify top CPU and memory consumers
cr0x@server:~$ ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head
2841 php-fpm8.2 38.2 3.1
2910 php-fpm8.2 34.9 3.0
1732 mysqld 21.4 24.8
1120 nginx 3.1 0.4
Meaning: PHP-FPM is chewing CPU, MySQL is heavy on memory (could be fine) and non-trivial CPU.
Decision: Inspect PHP-FPM worker saturation and MySQL query behavior in parallel.
Task 6: Check disk latency and queue depth
cr0x@server:~$ iostat -x 1 3
Linux 6.5.0 (server) 12/27/2025 _x86_64_ (4 CPU)
Device r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await %util
nvme0n1 52.0 38.0 3.1 5.7 175.2 4.21 28.40 21.10 37.20 94.8
Meaning: await ~28ms and %util ~95% means the disk is saturated. MySQL and PHP file reads will suffer.
Decision: Reduce I/O: fix slow queries/indexes, increase MySQL buffer pool if memory allows, move DB to faster storage, or separate DB host.
Task 7: Check PHP-FPM pool status for queueing (if enabled)
cr0x@server:~$ curl -s 'http://127.0.0.1/status?full' | sed -n '1,25p'
pool: www
process manager: dynamic
start time: 27/Dec/2025:13:55:21 +0000
start since: 1602
accepted conn: 19482
listen queue: 37
max listen queue: 211
listen queue len: 511
idle processes: 0
active processes: 24
total processes: 24
max active processes: 24
max children reached: 19
slow requests: 83
Meaning: listen queue is non-zero and “max children reached” is high. Requests are waiting for workers.
Decision: Don’t blindly raise max children. First reduce per-request cost (slowlog, DB) and confirm memory headroom; otherwise you just amplify swap and make it worse.
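If you do raise max children, do the arithmetic first: available RAM after MySQL and the OS, divided by measured per-worker RSS. Both numbers below are placeholders; measure yours with ps:

```shell
# headroom check: how many FPM workers fit in the RAM you actually have spare
awk 'BEGIN { avail_mb = 2048; per_worker_mb = 96; printf "max_children ceiling: %d\n", avail_mb / per_worker_mb }'
# -> max_children ceiling: 21
```

Setting pm.max_children above that ceiling doesn't add capacity; it adds swapping, which is negative capacity.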
Task 8: Enable and read PHP-FPM slowlog (temporary, targeted)
cr0x@server:~$ sudo grep -nE 'slowlog|request_slowlog_timeout' /etc/php/8.2/fpm/pool.d/www.conf
261:request_slowlog_timeout = 2s
262:slowlog = /var/log/php8.2-fpm.slow.log
cr0x@server:~$ sudo tail -n 20 /var/log/php8.2-fpm.slow.log
[27-Dec-2025 14:20:11] [pool www] pid 2910
script_filename = /var/www/html/index.php
[0x00007f2a8c1a3f60] wp_remote_get() /var/www/html/wp-includes/http.php:271
[0x00007f2a8c1a3df0] some_plugin_license_check() /var/www/html/wp-content/plugins/some-plugin/core.php:812
Meaning: PHP is blocking on an outbound HTTP call inside a plugin.
Decision: Fix plugin behavior (disable, configure, cache results, move to async). Increasing CPU won’t fix waiting on the network.
Task 9: Confirm OPcache is enabled and not tiny
cr0x@server:~$ php -i | grep -E 'opcache.enable|opcache.memory_consumption|opcache.max_accelerated_files' | head -n 5
opcache.enable => On => On
opcache.memory_consumption => 256 => 256
opcache.max_accelerated_files => 20000 => 20000
Meaning: OPcache is on and sized reasonably for many WP files. One caveat: php -i reports the CLI SAPI, and FPM can differ (opcache.enable_cli is a separate setting), so confirm against an FPM-served phpinfo() or the pool configuration.
Decision: If OPcache is off, turn it on. If memory is too small, you’ll see frequent cache resets and extra CPU.
Task 10: Check MySQL currently running queries and lock symptoms
cr0x@server:~$ sudo mysql -e "SHOW FULL PROCESSLIST\G" | sed -n '1,60p'
*************************** 1. row ***************************
Id: 2198
User: wp
Host: 127.0.0.1:49822
db: wordpress
Command: Query
Time: 12
State: Sending data
Info: SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'
*************************** 2. row ***************************
Id: 2202
User: wp
Host: 127.0.0.1:49836
db: wordpress
Command: Query
Time: 9
State: statistics
Info: SELECT * FROM wp_postmeta WHERE meta_key = '_price' AND meta_value BETWEEN '10' AND '50'
Meaning: Long-running queries. The autoload query should be fast; if it isn’t, wp_options is bloated or the server is I/O bound. The postmeta query is a classic pain point.
Decision: Enable slow query log and inspect indexes/option autoload size. Consider WooCommerce-specific indexing strategies and reducing meta queries.
Task 11: Enable slow query logging (short window) and inspect it
cr0x@server:~$ sudo mysql -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 0.5; SET GLOBAL log_output = 'FILE';"
cr0x@server:~$ sudo tail -n 20 /var/log/mysql/mysql-slow.log
# Time: 2025-12-27T14:24:02.118532Z
# User@Host: wp[wp] @ localhost [] Id: 2251
# Query_time: 1.734 Lock_time: 0.000 Rows_sent: 1 Rows_examined: 874221
SET timestamp=1766845442;
SELECT * FROM wp_postmeta WHERE meta_key = '_price' AND meta_value BETWEEN '10' AND '50';
Meaning: Rows_examined is huge relative to rows returned. That’s a missing index / wrong schema pattern problem.
Decision: Stop tuning buffer sizes as your first move. Fix the query path: adjust plugin behavior, reduce meta filtering, add appropriate indexes where safe, or use dedicated lookup tables where the platform provides them (WooCommerce, for example, ships a product attribute lookup table).
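For the BETWEEN-on-meta_value pattern above, one commonly used (and commonly debated) mitigation is a prefix index. This is untested sketch DDL: validate with EXPLAIN on staging, remember meta_value is LONGTEXT so it requires a prefix length, and note that comparing numeric prices as strings stays semantically wrong regardless ('100' sorts between '10' and '50'):

```sql
-- hypothetical prefix index; verify with EXPLAIN and measure write overhead first
ALTER TABLE wp_postmeta ADD INDEX meta_key_value (meta_key(191), meta_value(32));
```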
Task 12: Check InnoDB buffer pool effectiveness
cr0x@server:~$ sudo mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
+---------------------------------------+-----------+
| Variable_name | Value |
+---------------------------------------+-----------+
| Innodb_buffer_pool_read_requests | 987654321 |
| Innodb_buffer_pool_reads | 3456789 |
+---------------------------------------+-----------+
Meaning: Innodb_buffer_pool_reads counts logical reads that missed the buffer pool and had to go to disk; Innodb_buffer_pool_read_requests counts all logical reads. If reads is high relative to requests, your working set doesn’t fit in memory.
Decision: If you have RAM headroom, increase innodb_buffer_pool_size. If not, reduce working set (clean autoloads, fix queries, archive old data).
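From those two counters, the miss rate is just reads divided by read_requests (values copied from the sample output above; on a warmed-up system, anything creeping past a fraction of a percent deserves attention):

```shell
# buffer pool miss rate from the SHOW GLOBAL STATUS counters shown above
awk 'BEGIN { requests = 987654321; reads = 3456789; printf "miss rate: %.3f%%\n", 100 * reads / requests }'
# -> miss rate: 0.350%
```

Track the trend, not the absolute number: counters accumulate since restart, so a delta over a busy hour is more honest than lifetime totals.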
Task 13: Find autoloaded options size (classic silent killer)
cr0x@server:~$ sudo mysql -D wordpress -e "SELECT ROUND(SUM(LENGTH(option_value))/1024/1024,2) AS autoload_mb FROM wp_options WHERE autoload='yes';"
+------------+
| autoload_mb|
+------------+
| 18.47 |
+------------+
Meaning: 18MB of autoloaded options is massive; this gets pulled into memory frequently and can slow every uncached request.
Decision: Identify the largest autoloaded options and fix the source (plugin settings, transients misuse). Aim for low single-digit MB or less.
Task 14: Identify biggest autoloaded options (so you know who to yell at)
cr0x@server:~$ sudo mysql -D wordpress -e "SELECT option_name, autoload, ROUND(LENGTH(option_value)/1024,1) AS kb FROM wp_options WHERE autoload='yes' ORDER BY LENGTH(option_value) DESC LIMIT 10;"
+-------------------------------+----------+------+
| option_name | autoload | kb |
+-------------------------------+----------+------+
| some_plugin_big_settings | yes | 5120 |
| rewrite_rules | yes | 980 |
| another_plugin_cache_blob | yes | 740 |
+-------------------------------+----------+------+
Meaning: A plugin is autoloading multi-megabyte blobs. That’s not “settings,” that’s a cry for help.
Decision: Reconfigure/replace the plugin, or change autoload to no for specific options (carefully, after validation). Also flush and rebuild rewrite rules if that entry is bloated.
Task 15: Check whether WP-Cron is hammering requests
cr0x@server:~$ wp cron event list --fields=hook,next_run,recurrence --format=table
+------------------------------+---------------------+------------+
| hook | next_run | recurrence |
+------------------------------+---------------------+------------+
| wp_version_check | 2025-12-27 14:25:00 | twice_daily|
| some_plugin_sync_job | 2025-12-27 14:23:00 | every_min |
| woocommerce_cleanup_sessions | 2025-12-27 14:26:00 | hourly |
+------------------------------+---------------------+------------+
Meaning: “every_min” jobs can be legitimate, but they often overlap and cause periodic spikes.
Decision: Move cron to system cron, reduce frequency, and ensure long jobs have locking to prevent overlap.
Task 16: Quickly test plugin impact (staging or controlled window)
cr0x@server:~$ wp plugin list --status=active --format=table
+---------------------+--------+-----------+---------+
| name | status | update | version |
+---------------------+--------+-----------+---------+
| woocommerce | active | available | 8.4.0 |
| some-plugin | active | none | 3.2.1 |
| seo-suite | active | none | 19.0 |
+---------------------+--------+-----------+---------+
cr0x@server:~$ wp plugin deactivate some-plugin
Plugin 'some-plugin' deactivated.
Meaning: You can validate whether the slowlog culprit actually drives latency.
Decision: If latency drops materially, keep it disabled and plan a safer replacement or vendor fix. If nothing changes, re-enable and keep hunting.
Three corporate mini-stories (how this goes wrong in real life)
Mini-story 1: The incident caused by a wrong assumption
A company ran a content-heavy WordPress site with a separate WooCommerce store. The content site started timing out during a campaign. Everyone assumed the CDN was the problem because “we changed cache rules last week.” It felt plausible. It was also wrong.
The on-call engineer did the boring thing: curl timing from multiple regions, then SSH, then iostat. TTFB was high even from inside the VPC. Disk %util was pinned, iowait high. MySQL was local on the same VM. The CDN was innocent; it was just faithfully delivering slow responses.
The wrong assumption was that “static pages should be cached, so the origin can’t be the problem.” In reality, logged-in editors were hammering uncached endpoints and triggering expensive option autoload loads on every request. The campaign increased admin activity: more drafts, revisions, previews, and plugin-driven API calls.
The fix wasn’t heroic: reduce autoload bloat, separate DB onto its own faster instance, and add observability around cache hit rates and PHP-FPM queue depth. The incident ended when they stopped arguing about the edge and started measuring the origin.
Mini-story 2: The optimization that backfired
Another team had slow checkout pages and decided to “solve it with caching.” They added a page cache layer and cranked cache TTL aggressively. Anonymous product pages looked great. The dashboard graphs improved. People celebrated.
Then customer support tickets spiked. Shoppers saw stale inventory and inconsistent pricing. Logged-in users were still slow, and now the system was harder to reason about because caches masked backend issues. The page cache was also incorrectly caching some personalized fragments. Nothing like shipping a performance improvement that also ships confusion.
When they finally enabled PHP-FPM slowlog, the culprit was a plugin calling external tax/shipping APIs synchronously during checkout, with no timeout discipline and no caching of results. The “optimization” didn’t touch that path at all; it just made some charts greener.
They backed off the aggressive caching, implemented sane timeouts, added asynchronous fallbacks where business rules allowed, and cached API responses with careful keys. Performance improved, correctness returned, and everyone learned the same lesson: caching is not a substitute for understanding.
Mini-story 3: The boring but correct practice that saved the day
A media org ran WordPress at steady high traffic. Nothing glamorous: news cycles, lots of reads, occasional spikes. Their secret weapon wasn’t a magic plugin. It was discipline.
They had slow query logging enabled with a reasonable threshold, rotated and shipped logs, and reviewed top query fingerprints weekly. Not during incidents. Weekly. They also tracked PHP-FPM “max children reached” and MySQL buffer pool hit rate as standard health metrics.
One Friday afternoon, latency started creeping up. No outage yet—just a subtle p95 climb. Because they had baselines, the on-call immediately saw buffer pool reads rising and disk await climbing. It correlated with a new feature rollout in a plugin that introduced a meta query on high-cardinality data.
They rolled back the plugin change, added an index in staging, verified the query plan, then re-deployed. Users barely noticed. The team went home. Observability didn’t make them faster typists; it made them less surprised.
Common mistakes: symptom → root cause → fix
1) Symptom: homepage is fast, wp-admin is painfully slow
Root cause: page caching helps anonymous users; admin bypasses cache and triggers heavy DB queries, external calls, or autoload bloat.
Fix: enable PHP-FPM slowlog; check autoload size; audit plugins that hook into admin screens; add object cache; reduce wp_options autoload.
2) Symptom: performance is fine until traffic spikes, then everything times out
Root cause: saturation and queueing: PHP-FPM max children reached, DB connection limits, or disk I/O pegged.
Fix: measure queue depth (FPM status), raise capacity carefully (more CPU/RAM/faster disk), add caching, and reduce per-request work.
3) Symptom: periodic slowness every few minutes
Root cause: WP-Cron or scheduled jobs overlapping, backups, log rotation misfires, or external sync tasks.
Fix: move WP-Cron to system cron; add locking; schedule heavy jobs off-peak; monitor cron duration.
4) Symptom: CPU low, but pages still take seconds to start loading
Root cause: blocking I/O: slow disk, slow DNS in outbound calls, external APIs, or DB waits.
Fix: check iowait/iostat; inspect PHP slowlog for wp_remote_* calls; add timeouts and caching for external calls.
5) Symptom: after installing a plugin, TTFB doubles
Root cause: plugin adds expensive hooks, autoloaded options, or heavy meta queries on every request.
Fix: disable and confirm; inspect slowlog and slow query log; replace plugin or change configuration; clean up autoload.
6) Symptom: database CPU high, lots of “Sending data” queries
Root cause: unindexed queries, big table scans, or high Rows_examined in slow log. Sometimes caused by search/filter features.
Fix: review slow query fingerprints; add/adjust indexes cautiously; reduce meta-query usage; consider alternative data modeling for product attributes.
7) Symptom: fast after restart, slow later
Root cause: memory leaks or cache growth (object cache, opcache fragmentation), or buffer pool eviction due to working set growth.
Fix: watch memory over time; confirm OPcache settings; ensure MySQL buffer pool sized appropriately; cap caches and purge intelligently.
Checklists / step-by-step plan
Checklist A: 30-minute triage (production-safe)
- Measure TTFB and total time for one public URL and one admin URL.
- Check host saturation: uptime, mpstat, vmstat, iostat.
- Inspect PHP-FPM status: queue length, max children reached, slow requests.
- Check MySQL processlist for long-running queries; capture InnoDB status if needed.
- If disk saturated: prioritize DB query fixes and storage upgrades over “more PHP workers.”
- If CPU saturated in PHP: enable slowlog and find the code path.
- Write down: what changed recently (plugin updates, theme changes, DB growth, traffic shift).
Checklist B: 1–2 day stabilization plan
- Enable slow query log (threshold 0.5–1s) during peak windows; collect top query fingerprints.
- Enable PHP-FPM slowlog at 2s temporarily; identify slow call stacks.
- Audit wp_options autoload size; reduce it aggressively but safely.
- Move WP-Cron to system cron; prevent overlapping jobs.
- Confirm OPcache sizing and PHP-FPM worker counts match RAM realities.
- Add object cache (Redis) if you have repeated query patterns and stable cache hit rates.
- Test changes on staging with production-like data volume. “Works on a fresh install” is not a performance test.
Checklist C: Structural improvements (the stuff that ends recurring incidents)
- Separate DB from web if resource contention is real and constant.
- Put the DB on fast, low-latency storage; avoid slow network-attached disks for write-heavy stores.
- Instrument cache hit/miss rates, PHP-FPM queue, and DB latency as first-class metrics.
- Establish a plugin policy: owners, update cadence, rollback plan, and performance budget.
- Regularly prune revisions, transients, sessions, and orphaned meta rows based on business needs.
- Load test major releases with realistic concurrency and logged-in flows, not just anonymous homepage hits.
FAQ
1) How do I know if the bottleneck is the server or the database?
Look at iowait, disk await, and MySQL slow queries. If iostat shows high await/%util and MySQL slow log shows high Rows_examined, the DB+disk combo is likely the brake.
2) My CPU is low, but WordPress is still slow. How?
Waiting doesn’t burn CPU. Disk waits, network waits (external APIs), and lock waits can produce high TTFB with low CPU. PHP-FPM slowlog often reveals blocking calls.
3) Should I increase PHP-FPM max_children to fix slowness?
Only if you have RAM headroom and your requests are already efficient. If you’re swapping or disk-bound, more workers increase contention and make latency worse.
4) Does Redis automatically make WordPress fast?
No. Redis helps when repeated expensive lookups occur and can be cached safely. It won’t fix unindexed queries that are unique every time, or plugins doing external calls.
5) Why is wp-admin slower than the frontend?
Admin screens typically bypass page caches, run more queries, load more code, and trigger plugin hooks. Also: logged-in cookies often prevent CDN caching by design.
6) What’s the fastest way to identify a bad plugin?
PHP-FPM slowlog for stack traces plus WP-CLI to selectively disable plugins (preferably on staging). Correlate with DB slow queries. Don’t rely on “it feels like it started after…” without evidence.
7) Is WooCommerce inherently slow?
Not inherently, but it’s easy to make slow because it stores a lot of structured data in meta tables and triggers complex queries. Checkout paths are also sensitive to external integrations.
8) Why do things get slow periodically, like clockwork?
WP-Cron, backups, logrotate, indexing jobs, product sync tasks, or cache purges. Correlate with cron logs and system activity (iostat spikes, MySQL spikes) at those times.
9) If I move to a bigger server, will it fix everything?
It can buy time, and sometimes that’s the right call. But if you have a pathological query scanning millions of rows, it will come back when data grows again—often at the worst time.
10) Should I use Nginx or Apache for performance?
Pick the one you can operate well. Most WordPress slowness is upstream (PHP/DB/plugins) or caching strategy. Web server choice is rarely the real bottleneck.
Next steps (what to do Monday morning)
- Measure TTFB for public and admin paths and write the numbers down. If you can’t quantify, you can’t improve.
- Check saturation first: CPU, memory pressure, disk latency. Fix swap and disk pegging before tuning plugin settings.
- Turn on the two truth-tellers (temporarily, carefully): PHP-FPM slowlog and MySQL slow query log.
- Fix the big rocks: autoload bloat, slow query fingerprints, synchronous external API calls, and WP-Cron chaos.
- Make it observable: FPM queue, cache hit rate, DB latency. If you can’t see it, you’ll relive it.
WordPress performance isn’t mysterious. It’s just layered. Peel the onion in the right order, and you’ll find the bottleneck without sacrificing a weekend to superstition.