WordPress Database Bloated: wp_options Autoload Cleanup Without Breaking Things

When WordPress gets “mysteriously slow,” the culprit is often not PHP, not the theme, not even your database server. It’s a single pattern in one table: wp_options stuffed with megabytes of autoloaded junk that WordPress dutifully drags into memory on every request.

You notice it as rising TTFB, random spikes during traffic bursts, admin pages that feel like dial-up, and database CPU that idles until it suddenly doesn’t. The best part? The site can still “work,” right up until it doesn’t. Let’s clean it safely, like adults running production systems.

What autoload really means (and why it hurts)

wp_options is WordPress’s “catch-all drawer.” Settings, plugin state, caches, feature flags, API tokens, cron schedules, and “temporary” values that live forever—most of them end up here.

The key field: autoload. If an option has autoload='yes', WordPress will try to load it early (via wp_load_alloptions()) so it can serve requests without lots of tiny queries. That’s a good idea when autoloaded data is small and stable. It is a bad idea when it’s 10–100+ MB of serialized arrays, widget configs, giant plugin caches, and the broken dreams of a marketing plugin that never met a database it didn’t want to over-share with.
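
If you write options yourself (or review plugin code), remember that the autoload flag is chosen at write time, not read time. A minimal sketch with WP-CLI, using a made-up option name my_plugin_cache, shows the knob explicitly; in plugin code the equivalent is the $autoload parameter of add_option() and update_option():

cr0x@server:~$ wp option add my_plugin_cache '{"built_at":1735230932}' --format=json --autoload=no
Success: Added 'my_plugin_cache' option.
cr0x@server:~$ wp option update my_plugin_cache '{"built_at":1735231000}' --format=json --autoload=no
Success: Updated 'my_plugin_cache' option.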

Here’s the failure mode: autoloaded options are fetched on most requests (front-end and admin). Even if you have an object cache, those values still need to be pulled into memory at least once per cache cycle and often get copied/serialized. The bigger it gets, the more CPU and memory you burn, and the more you amplify latency during cache misses, deploys, pod restarts, or failovers.

One dry truth: you don’t “optimize MySQL” out of this. You stop autoloading garbage.

Short joke #1: Autoload bloat is like a junk drawer: it starts with one spare battery and ends with three mystery keys and a tiny screwdriver you can’t throw away.

Autoload isn’t “cache”

Autoloaded options are not a smart cache with TTL semantics. They’re “load this every time because someone assumed it’s needed.” That assumption is frequently wrong, especially for plugins that store large computed structures or logs as options.
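
If the data really is a cache, transients at least carry TTL semantics and are easy to reason about. A minimal sketch (the key and value are made up; with a persistent object cache the transient lands in Redis/Memcached instead of wp_options):

cr0x@server:~$ wp transient set homepage_promos '{"banner":"spring-sale"}' 3600
Success: Transient added.
cr0x@server:~$ wp transient get homepage_promos
{"banner":"spring-sale"}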

Why it hits performance in weird, bursty ways

  • Cold starts: FPM restarts, container reschedules, new instances scaling up—autoloaded options need to be warmed. That’s when the site gets slowest.
  • Object cache churn: If your object cache evicts frequently, you reload autoloaded data more often than you think.
  • Replication lag: Large options updates create bigger binlogs/row changes, increasing lag under load.
  • Locking/IO: Updates to big options can mean big writes, and on shared storage or saturated disks, you pay in latency.

Fast diagnosis playbook

If a WordPress site is slow and you suspect wp_options autoload bloat, don’t wander. Check three things in this order:

1) Is autoloaded data huge?

Run a quick sum of autoloaded option sizes. If it’s above a few MB, you have a lead. If it’s tens of MB, you have your culprit.

2) Which options are doing the damage?

Find the top offenders by size. You’re looking for giant serialized arrays, plugin caches, and anything that clearly doesn’t need to load on every request.

3) Is the slowness tied to cache misses/restarts?

Correlate spikes with FPM restarts, container reschedules, or Redis/Memcached evictions. Autoload bloat amplifies cold-start pain.

Then decide: delete obvious trash (expired transients, plugin leftovers) and switch “big but needed sometimes” from autoloaded to non-autoloaded, ideally replacing it with a proper cache.
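
One query can answer checks 1 and 2 at once. A sketch, assuming the default wp_ prefix; note that WordPress 6.6+ may also store autoload values such as 'on', 'auto-on', and 'auto', so a wider filter avoids undercounting (output below is illustrative):

cr0x@server:~$ wp db query "SELECT COUNT(*) AS autoloaded_rows, ROUND(SUM(LENGTH(option_value))/1024/1024,2) AS total_mb, ROUND(MAX(LENGTH(option_value))/1024/1024,2) AS biggest_mb FROM wp_options WHERE autoload IN ('yes','on','auto-on','auto');"
+-----------------+----------+------------+
| autoloaded_rows | total_mb | biggest_mb |
+-----------------+----------+------------+
|             412 |    28.74 |      12.40 |
+-----------------+----------+------------+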

Interesting facts and history you can weaponize

  1. WordPress options predate many modern plugins: the options API was designed for small settings, not megabyte-scale blobs.
  2. Autoload was meant to reduce query count: in the early days, many small queries were a bigger evil than memory bloat.
  3. Serialized PHP arrays made it easy to store anything: also made it easy to store everything, forever, with no schema pressure.
  4. Transients were introduced as “temporary cache”: but without a persistent object cache, transients often live in wp_options and can accumulate.
  5. Many plugins use wp_options as a dumping ground: because it’s universally available, not because it’s appropriate.
  6. WooCommerce sites are frequent victims: not because WooCommerce is uniquely bad, but because high plugin density increases option churn.
  7. Autoload bloat can hide behind a CDN: pages still “load fast” from cache, until cache expires or a logged-in user hits dynamic endpoints.
  8. Database size isn’t the metric that matters: a 2 GB database can be fine; a 20 MB autoload payload on every request is not.
  9. Some hosts “optimize” by scheduling OPTIMIZE TABLE: which doesn’t fix autoload bloat and can add IO/locking pain when run at the wrong time.

Measure first: what “bloat” looks like in real numbers

Production work starts with measurement. Not vibes. The goal is to answer:

  • How big is the autoloaded payload?
  • Who are the top offenders?
  • Is it growing?
  • What is safe to delete vs. merely disable autoload?

Rules of thumb (opinionated, because you asked for it):

  • < 1 MB autoload total: probably fine. Don’t create busywork.
  • 1–5 MB: keep an eye on it; investigate a few top items.
  • 5–20 MB: you’re paying latency tax. Clean it.
  • > 20 MB: you’re in incident territory; cold starts will be ugly.

These thresholds vary with hardware and caching. But in practice, once autoload crosses “a few MB,” it becomes a reliability risk because it multiplies the cost of every cache miss event.
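
To answer “is it growing?”, record the number somewhere cheap and compare over time. A minimal sketch, assuming WP-CLI can find the install (cd to the WordPress root or pass --path) and that /var/log/wp-autoload-size.log is writable; both are conventions, not requirements, and the output line is illustrative:

cr0x@server:~$ MB=$(wp db query "SELECT ROUND(SUM(LENGTH(option_value))/1024/1024,2) FROM wp_options WHERE autoload='yes';" | grep -Eo '[0-9]+\.[0-9]+' | head -n1)
cr0x@server:~$ echo "$(date -Is) autoload_mb=${MB}" | sudo tee -a /var/log/wp-autoload-size.log
2025-03-03T04:00:01+00:00 autoload_mb=28.74

Drop that into a weekly cron job and you have the trend data the prevention section later leans on.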

And one reliability mantra from the field: Everything fails, all the time. — Werner Vogels (paraphrased). Autoload bloat turns those routine failures (restarts, evictions) into user-visible slowness.

Practical tasks: commands, outputs, decisions (12+)

These tasks are written for operators. They include the command, sample output, what it means, and what decision to make. Assume you have shell access and either WP-CLI or MySQL client access. Adjust DB names/prefixes as needed.

Task 1: Confirm table prefix and DB credentials (don’t guess)

cr0x@server:~$ cd /var/www/html && wp config get table_prefix
wp_

What it means: Your table prefix is wp_. If you assumed wp_ and it’s actually wpx9_, every SQL query you run will be wrong or dangerous.

Decision: Use the returned prefix in all subsequent queries. If WP-CLI can’t read config, stop and fix access first.

Task 2: Find autoload payload size (quick and decisive)

cr0x@server:~$ wp db query "SELECT ROUND(SUM(LENGTH(option_value))/1024/1024,2) AS autoload_mb FROM wp_options WHERE autoload='yes';"
+------------+
| autoload_mb|
+------------+
| 28.74      |
+------------+

What it means: ~29 MB is loaded as “alloptions.” That’s not “a little slow.” That’s a recurring tax on every cold request.

Decision: Proceed to identify top offenders; plan a controlled cleanup window.

Task 3: Identify top autoloaded options by size

cr0x@server:~$ wp db query "SELECT option_name, ROUND(LENGTH(option_value)/1024/1024,2) AS mb FROM wp_options WHERE autoload='yes' ORDER BY LENGTH(option_value) DESC LIMIT 20;"
+------------------------------+------+
| option_name                  | mb   |
+------------------------------+------+
| plugin_x_cache_blob          | 12.40|
| widget_custom_html           | 4.12 |
| rewrite_rules                | 2.33 |
| some_builder_global_settings | 1.98 |
| cron                         | 1.21 |
+------------------------------+------+

What it means: One or two options are dominating. That’s good news: you can get big wins without playing whack-a-mole.

Decision: For each top offender: decide delete vs. de-autoload vs. leave alone. Start with obvious cache blobs and leftover plugin options.

Task 4: Check if a suspicious option is a transient (usually safe to purge)

cr0x@server:~$ wp db query "SELECT option_name, autoload FROM wp_options WHERE option_name LIKE '\_transient\_%' LIMIT 5;"
+------------------------------+----------+
| option_name                  | autoload |
+------------------------------+----------+
| _transient_timeout_feed_123  | no       |
| _transient_feed_123          | no       |
| _transient_timeout_xxx       | no       |
| _transient_xxx               | no       |
| _transient_timeout_abc       | no       |
+------------------------------+----------+

What it means: Most transients are not autoloaded, but they still bloat the table and slow scans/backups. Some plugins mistakenly autoload transients or store “transient-like” data without the prefix.

Decision: Plan to delete expired transients. If many exist, automate cleanup.

Task 5: Count expired transients (cheap win)

cr0x@server:~$ wp db query "SELECT COUNT(*) AS expired FROM wp_options WHERE option_name LIKE '\_transient\_timeout\_%' AND option_value < UNIX_TIMESTAMP();"
+---------+
| expired |
+---------+
| 18742   |
+---------+

What it means: You have a lot of expired transients. WordPress doesn’t always clean them aggressively, and plugins can be sloppy.

Decision: Delete them (and their paired values) in a controlled way.

Task 6: Delete expired transients via WP-CLI (safe-ish and reversible by regeneration)

cr0x@server:~$ wp transient delete --expired
Success: Deleted 18742 expired transients.

What it means: You removed expired cache entries. If anything breaks, it’s a plugin bug relying on expired data existing (yes, that happens).

Decision: Re-check table size and autoload size. If performance improves, keep going.

Task 7: Re-measure autoload payload after transient cleanup

cr0x@server:~$ wp db query "SELECT ROUND(SUM(LENGTH(option_value))/1024/1024,2) AS autoload_mb FROM wp_options WHERE autoload='yes';"
+------------+
| autoload_mb|
+------------+
| 27.90      |
+------------+

What it means: Transients weren’t the main cause; the bulk is still autoloaded options like cache blobs.

Decision: Target the top offenders directly.

Task 8: Snapshot the option value before changing it (you want rollback)

cr0x@server:~$ wp option get plugin_x_cache_blob --format=json | head -c 200
{"version":"4.2.1","generated_at":1735230932,"data":{"items":[{"id":1,"name":"..."}]}}

What it means: You can see it’s a generated cache structure, not a core setting. It also smells like something that could be moved out of autoload.

Decision: Save the full value to a file (or take a DB backup) before you touch it. Then consider disabling autoload first, not deleting.
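
To keep that rollback copy, redirect the full value to a dated file before touching the row (the path and naming are just a convention):

cr0x@server:~$ wp option get plugin_x_cache_blob --format=json > plugin_x_cache_blob.$(date +%F).json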

Task 9: Disable autoload for a known cache blob (minimal functional risk)

cr0x@server:~$ wp db query "UPDATE wp_options SET autoload='no' WHERE option_name='plugin_x_cache_blob' LIMIT 1;"
Query OK, 1 row affected (0.02 sec)
Rows matched: 1  Changed: 1  Warnings: 0

What it means: That option still exists; WordPress won’t load it automatically. The plugin will fetch it when needed.

Decision: Immediately test key flows (front page, checkout, login, admin). If the plugin assumed autoload, you’ll find out fast.
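
Rolling back is the mirror image: if a key flow breaks, flip the flag straight back (same table and option name as above):

cr0x@server:~$ wp db query "UPDATE wp_options SET autoload='yes' WHERE option_name='plugin_x_cache_blob' LIMIT 1;"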

Task 10: Validate autoload payload after de-autoloading one big offender

cr0x@server:~$ wp db query "SELECT ROUND(SUM(LENGTH(option_value))/1024/1024,2) AS autoload_mb FROM wp_options WHERE autoload='yes';"
+------------+
| autoload_mb|
+------------+
| 15.50      |
+------------+

What it means: You just cut the autoload payload almost in half. That’s the kind of change that shows up in p95 latency.

Decision: Keep going through the top offenders list. Prefer “autoload=no” over “DELETE” unless you’re sure.

Task 11: Find orphaned options from deactivated plugins (cleanup candidates)

cr0x@server:~$ wp plugin list --status=inactive
+----------------------+----------+-----------+---------+
| name                 | status   | update    | version |
+----------------------+----------+-----------+---------+
| old-gallery-plugin   | inactive | none      | 1.9.0   |
| legacy-seo-tool      | inactive | available | 2.3.1   |
+----------------------+----------+-----------+---------+

What it means: Inactive plugins often leave options behind. Some are harmless. Some are huge. Inactive + huge + autoload=yes is a gift.

Decision: Search for option prefixes associated with inactive plugins, then delete those options after backup.

Task 12: Locate options by name pattern and size before deleting

cr0x@server:~$ wp db query "SELECT option_name, autoload, ROUND(LENGTH(option_value)/1024/1024,2) AS mb FROM wp_options WHERE option_name LIKE 'old_gallery_%' ORDER BY LENGTH(option_value) DESC LIMIT 10;"
+---------------------------+----------+------+
| option_name               | autoload | mb   |
+---------------------------+----------+------+
| old_gallery_cache         | yes      | 3.70 |
| old_gallery_settings      | yes      | 0.05 |
+---------------------------+----------+------+

What it means: This is likely dead weight if the plugin is inactive and you’re not planning to re-enable it.

Decision: Export the rows, then delete. If nervous, set autoload=no first and watch for a week.

Task 13: Back up specific rows (surgical rollback)

cr0x@server:~$ wp db query "SELECT * FROM wp_options WHERE option_name IN ('old_gallery_cache','old_gallery_settings')\G"
*************************** 1. row ***************************
option_id: 8812
option_name: old_gallery_cache
option_value: a:3:{s:7:"version";s:5:"1.9.0";...}
autoload: yes
*************************** 2. row ***************************
option_id: 8813
option_name: old_gallery_settings
option_value: a:2:{s:5:"theme";s:6:"light";...}
autoload: yes

What it means: You’ve captured the exact values. If deletion causes issues, you can reinsert them.

Decision: Proceed to delete only after you have either this output saved or a DB snapshot.

Task 14: Delete known-orphan options (commit)

cr0x@server:~$ wp db query "DELETE FROM wp_options WHERE option_name IN ('old_gallery_cache','old_gallery_settings') LIMIT 2;"
Query OK, 2 rows affected (0.01 sec)

What it means: Rows are gone. If the plugin is truly unused, no impact. If it was still referenced, you’ll see warnings/errors quickly.

Decision: Validate site functionality; monitor error logs and slow queries.

Task 15: Inspect rewrite_rules size (common, sometimes legitimate)

cr0x@server:~$ wp option get rewrite_rules --format=json | head -c 120
{"^wp-json/?$":"index.php?rest_route=/","^product/(.+?)/?$":"index.php?product=$matches[1]","^..."}

What it means: rewrite_rules is normally autoloaded and can grow with complex routing (e-commerce, multilingual, etc.). Don’t blindly disable it; that can slow routing or cause odd behavior.

Decision: If it’s enormous, you likely have a plugin generating excessive rewrite rules. Fix the source, not just the symptom. Consider flushing rewrite rules safely during maintenance.

Task 16: Flush rewrite rules (only when you know why)

cr0x@server:~$ wp rewrite flush --hard
Success: Rewrite rules flushed.

What it means: WordPress regenerated rewrite rules. This can shrink the option if stale rules accumulated, but it can also regenerate the same size if the generator is still noisy.

Decision: If it shrinks, great. If it grows back, hunt the plugin/theme generating too many rules.

Task 17: Check whether object cache is enabled (it changes the blast radius)

cr0x@server:~$ wp plugin list --status=active | grep -E "redis|memcached|object"
redis-cache             active   none      2.5.3

What it means: Persistent object caching is likely in play. That helps, but it doesn’t excuse a 30 MB autoload payload—especially during cache flushes and evictions.

Decision: Keep cleaning. Also check cache eviction and memory sizing to reduce churn.
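
To check eviction and memory sizing, the cache server’s own counters are the quickest read. A sketch assuming Redis on localhost with default settings; the numbers are illustrative:

cr0x@server:~$ redis-cli info stats | grep -E "evicted_keys|keyspace_misses"
evicted_keys:184223
keyspace_misses:902113
cr0x@server:~$ redis-cli info memory | grep -E "used_memory_human|maxmemory_human"
used_memory_human:498.32M
maxmemory_human:512.00M

Evictions climbing while used memory hugs maxmemory means the cache is too small for the working set, so the autoload payload gets reloaded more often than you think.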

Task 18: Measure request-level impact via a simple before/after TTFB sample

cr0x@server:~$ for i in {1..5}; do curl -s -o /dev/null -w "%{time_starttransfer}\n" https://example.com/; done
0.842
0.790
0.811
0.775
0.798

What it means: This is not a lab-grade benchmark, but it catches big wins. Repeat after changes (and after cache clears) to see if cold responses improve.

Decision: If TTFB drops meaningfully and stays stable, you’re fixing the right thing. If it doesn’t, your bottleneck is elsewhere—continue diagnosis (slow queries, CPU, PHP, external API calls).

Safe changes: what you can delete vs. what you should tame

Tier 1: Usually safe to delete (with backups)

  • Expired transients: They’re expired. They should not be required.
  • Orphaned plugin options: For plugins you have removed or will never re-enable.
  • Old cron locks: delete only if they’re stuck (handle carefully), and first confirm the site isn’t mid-job.

Even “safe” needs a backup. Deleting rows is permanent, and you’re one tired on-call away from deleting the wrong prefix.

Tier 2: Prefer switching autoload from yes → no

  • Generated caches stored as options: If they’re big but the plugin can regenerate them.
  • Large UI configs: Widget or builder settings that aren’t needed on every front-end request.
  • Analytics and log-like blobs: These shouldn’t be in options at all, but de-autoloading is a safer first move.

Why is this safer? Because the data is still there. You’re changing “when it loads,” not “whether it exists.” Most plugins will just call get_option() when needed and continue happily.

Tier 3: Don’t touch casually

  • siteurl, home, active_plugins: self-explanatory. If you break these, you’re earning your downtime.
  • cron: large is suspicious, but deleting it can drop scheduled jobs. Better to fix whatever is spamming cron events.
  • rewrite_rules: autoloaded for a reason; big indicates upstream complexity.

Short joke #2: Deleting active_plugins to speed up WordPress is a performance optimization with the side effect of “no website.”

The real goal: reduce “autoloaded hotset”

Think like a cache engineer. Autoload is a hotset: the data you pull into memory early. The hotset should be small, stable, and actually hot. Everything else should be lazy-loaded or cached properly (object cache, transient with TTL, or a dedicated table).
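
A quick way to see the hotset versus everything else is to group by the autoload flag (same wp_ prefix assumption as earlier; output illustrative):

cr0x@server:~$ wp db query "SELECT autoload, COUNT(*) AS cnt, ROUND(SUM(LENGTH(option_value))/1024/1024,2) AS mb FROM wp_options GROUP BY autoload;"
+----------+------+-------+
| autoload | cnt  | mb    |
+----------+------+-------+
| no       | 9120 | 41.02 |
| yes      |  412 | 15.50 |
+----------+------+-------+

The goal is for the “yes” line to stay small, stable, and boring.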

Three corporate-world mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption

A mid-sized content business ran WordPress behind a reverse proxy and a CDN. The site looked fast in synthetic checks because the homepage was always cached. The leadership team concluded the origin didn’t matter much. It was “just admin” that felt slow, which didn’t show up on dashboards.

Then they launched a membership feature. Logged-in users bypassed most CDN caching, and the origin became the product. On launch day, the site didn’t crash. It merely became a slow-motion disaster: profile pages took seconds; checkout calls timed out; support tickets arrived before the post-mortem template did.

The wrong assumption was subtle: “Database is fine because CPU is low.” In reality, autoloaded options had grown into a multi-megabyte payload. Each new PHP worker cold-start had to pull the whole thing, and any cache miss hit the database with a fat query followed by fat deserialization.

The fix was boring and effective: measure autoload size, turn off autoload for the biggest plugin cache entries, and delete orphaned options from two retired plugins. The biggest improvement wasn’t average latency; it was p95 and p99 during bursts and deploys. They stopped “feeling random” because they weren’t random.

Mini-story 2: The optimization that backfired

An enterprise marketing team wanted faster page loads, so they added a plugin that promised “database query reduction.” It did reduce query count—by aggressively caching computed data into a single option and setting it to autoload. The plugin authors probably ran it on a small brochure site and declared victory.

In production, the option grew with every campaign variation and localization. The payload ballooned. The query count went down. The memory footprint went up. Time-to-first-byte got worse, especially on cache flushes after deploys.

The optimization backfired because it optimized the wrong metric. The team celebrated fewer queries in a debugging panel while customers watched spinning loaders. In ops terms: they optimized the dashboard, not the system.

They eventually replaced the plugin’s “cache in options” approach with a persistent object cache and set that option to autoload=no. Query count went up slightly. Latency dropped. Everyone pretended that was the original plan.

Mini-story 3: The boring but correct practice that saved the day

A financial services group ran WordPress as a content platform adjacent to transactional systems. Their biggest fear wasn’t a slow blog post; it was an incident that distracted the on-call team during a real outage elsewhere.

They implemented two mundane practices: (1) a weekly job that recorded autoload size and top 20 offenders to a time series, and (2) a change-control rule that any plugin installation must pass a staging soak with autoload growth checks.

Months later, a plugin update introduced a new autoloaded option that started growing quickly. The weekly trend caught it before it became user-visible. They rolled back, opened a vendor ticket, and pinned the version. No incident call. No late-night heroics. Just a quiet correction that nobody outside SRE ever heard about.

That’s the point. The best incident is the one that never earns a Slack channel.

Common mistakes: symptom → root cause → fix

1) Admin is slow, front-end “seems fine”

Symptom: wp-admin pages hang; editors complain; front-end looks okay (thanks, CDN).

Root cause: Autoload bloat punishes uncached, authenticated traffic. Admin pages also load more plugins and settings.

Fix: Measure autoload MB; disable autoload for large plugin caches; clean orphaned options. Validate with logged-in curl tests or real browser timings.
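
For the logged-in check, reuse the TTFB loop from Task 18 and pass a real session cookie copied from a browser (the cookie name and value below are placeholders; WordPress login cookies follow the wordpress_logged_in_<hash> pattern):

cr0x@server:~$ for i in {1..5}; do curl -s -o /dev/null -w "%{time_starttransfer}\n" -H "Cookie: wordpress_logged_in_PLACEHOLDER=PASTE_VALUE_HERE" https://example.com/wp-admin/; done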

2) Random latency spikes after deploys or scaling events

Symptom: Right after a deployment or pod restart, latency spikes then settles.

Root cause: Cold workers + empty object cache mean the autoload payload must be fetched and deserialized repeatedly.

Fix: Reduce autoload payload; ensure persistent object cache sizing; consider warm-up scripts after deploy that hit key endpoints.
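
A warm-up script doesn’t need to be clever; hitting a handful of representative URLs right after deploy is enough. A sketch (the URL list is a placeholder for your own hot paths, and the output is illustrative):

cr0x@server:~$ for url in / /shop/ /blog/ /wp-login.php; do curl -s -o /dev/null -w "${url} %{http_code} %{time_total}s\n" "https://example.com${url}"; done
/ 200 1.204s
/shop/ 200 0.981s
/blog/ 200 0.654s
/wp-login.php 200 0.412s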

3) Database CPU is low but queries “feel slow”

Symptom: No obvious DB CPU saturation; still slow.

Root cause: Not all pain is CPU. Big option values mean bigger packets, more memory copies, and PHP unserialize overhead.

Fix: Reduce payload size; inspect top offenders; check PHP worker memory/CPU during cold requests.

4) You deleted an option and a plugin “mysteriously” broke

Symptom: Feature disappears; settings reset; fatal errors.

Root cause: You deleted state that wasn’t regenerable, or the plugin author assumed the option always exists.

Fix: Restore from backup; prefer autoload=no changes first; for deletions, remove only known transients and confirmed orphaned options.

5) OPTIMIZE TABLE made things worse

Symptom: IO spikes, replication lag, or slowdowns during “maintenance.”

Root cause: OPTIMIZE doesn’t address autoload semantics; it can rebuild tables and generate heavy IO, especially on large datasets.

Fix: Don’t use OPTIMIZE as a first-line fix. Reduce autoload payload and garbage rows first. Schedule table maintenance intentionally and sparingly.

6) Autoload size drops, but performance doesn’t improve

Symptom: You cut autoload from 20 MB to 5 MB; users still complain.

Root cause: The bottleneck is elsewhere: slow external API calls, slow filesystem, PHP CPU bound, missing indexes on postmeta, or a saturated DB connection pool.

Fix: Run slow query log; profile PHP; check disk latency; validate object cache health; measure end-to-end with real traffic patterns.
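
If you need the slow query log in a hurry, it can be enabled at runtime. This sketch assumes MySQL/MariaDB admin privileges and a writable log path; the settings revert on restart unless you also persist them in the server config:

cr0x@server:~$ mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0.5; SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';"
cr0x@server:~$ sudo tail -f /var/log/mysql/slow.log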

Checklists / step-by-step plan

Phase 0: Safety rails (do this before touching data)

  • Confirm table prefix via WP-CLI (Task 1).
  • Take a database snapshot or dump (full DB if possible; at minimum wp_options — see the export sketch after this list).
  • Have a rollback plan: restore dump, or reinsert saved rows.
  • Decide your maintenance posture: live changes with tight monitoring, or a short maintenance window.
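
The wp_options-only dump mentioned above is one short command; the filename is just a convention:

cr0x@server:~$ wp db export wp_options-$(date +%F).sql --tables=wp_options
Success: Exported to 'wp_options-2025-03-03.sql'.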

Phase 1: Baseline and target selection

  • Measure autoload MB (Task 2).
  • List top 20 autoloaded options by size (Task 3).
  • Classify each top option into: delete, de-autoload, leave.
  • Write down expected impact and risk for each change. If you can’t explain what an option is, don’t delete it.

Phase 2: Low-risk cleanup (wins that almost never hurt)

  • Delete expired transients (Tasks 5–6).
  • Remove orphaned options from plugins you have removed (Tasks 11–14).
  • Re-measure autoload MB and top offenders after each change batch.

Phase 3: High-impact tuning (where the real gains are)

  • For each giant non-core option: set autoload=no first (Task 9).
  • Test key paths immediately after each change (homepage, login, checkout, search, wp-admin).
  • Watch PHP error logs and application logs for notices/warnings/fatals.
  • Only delete large options when you’re confident they are regenerable or truly orphaned.

Phase 4: Verify improvement the right way

  • Measure autoload MB final state.
  • Capture before/after TTFB samples (Task 18) and compare p95/p99 in your APM if available.
  • Simulate a cold start: restart PHP-FPM and clear object cache in staging, then re-test (don’t do this casually in prod); a sketch follows below.
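
A cold-start rehearsal in staging can be as blunt as the following; the PHP-FPM service name is an assumption (match your PHP version), and wp cache flush clears the persistent object cache:

cr0x@server:~$ sudo systemctl restart php8.2-fpm && wp cache flush
Success: The cache was flushed.
cr0x@server:~$ for i in {1..3}; do curl -s -o /dev/null -w "%{time_starttransfer}\n" https://staging.example.com/; done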

Phase 5: Make it stick

  • Add monitoring for autoload size and top offenders.
  • Gate plugin installs/updates with an autoload growth check in staging.
  • Educate: autoload is not a trash compactor.

Prevention: stop autoload bloat from coming back

1) Treat plugin installs like code changes

If your org has a change process for code, but not for plugins, congratulations: your plugins are now your unreviewed code. Many of the worst autoload offenders come from plugins that “cache” by stuffing massive arrays into options.

Practical policy: every plugin update gets a staging run where you record autoload MB before and after typical workflows (admin save, product import, cache warm-up). If autoload grows materially, you block or mitigate.

2) Prefer persistent object caching, but don’t let it hide the rot

Redis/Memcached reduces repetitive DB hits. It does not make it okay to autoload 30 MB. It just changes when you pay.

What you want is a small autoload hotset plus a cache for computed data with sane TTL and eviction behavior.

3) Put size limits in your operational playbook

Decide a threshold that triggers action. For many production WordPress sites, a reasonable starting line is (a minimal check sketch follows the list):

  • Warn: autoload > 5 MB
  • Page someone (during business hours): autoload > 10–15 MB
  • Incident criteria: autoload > 20 MB plus correlated latency spikes
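
A minimal check that maps the measurement onto those levels might look like the following; the thresholds, paths, and what you do with the result (log line, alert, ticket) are yours to adjust, and the output is illustrative:

cr0x@server:~$ MB=$(cd /var/www/html && wp db query "SELECT ROUND(SUM(LENGTH(option_value))/1024/1024,2) FROM wp_options WHERE autoload='yes';" | grep -Eo '[0-9]+\.[0-9]+' | head -n1)
cr0x@server:~$ awk -v mb="$MB" 'BEGIN { level = (mb > 20) ? "INCIDENT" : (mb > 10) ? "PAGE" : (mb > 5) ? "WARN" : "OK"; print level, mb " MB autoloaded" }'
WARN 7.83 MB autoloaded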

4) Ask “should this be in options?”

Some data types don’t belong in options:

  • Logs (use files, a logging service, or a custom table)
  • Large computed indexes (use transients with object cache or custom tables)
  • Per-user/per-session state (use user meta, sessions, or dedicated storage)

5) Watch for cron and action scheduler growth

The cron option can get big if events accumulate or a plugin schedules too many hooks. WooCommerce also has action scheduling tables (not wp_options) that can become their own cleanup project. Don’t mix them up; diagnose separately.

FAQ

1) How big should autoloaded options be?

Ideally under 1 MB. Under 5 MB is often fine. Above 10 MB is usually worth fixing. Above 20 MB is a reliability risk during cold starts.

2) Will disabling autoload break my site?

It can, but it’s generally safer than deleting. Disabling autoload means the option still exists; code can fetch it when needed. The risk is plugins that incorrectly assume the option is preloaded. Mitigate by changing one option at a time and testing critical flows.

3) Is it safe to delete transients?

Expired transients are safe to delete in normal circumstances. Non-expired transients are usually safe too (they should be regenerable), but some plugins misuse them as persistent storage. If you want minimal risk, delete expired transients first.

4) Why is rewrite_rules so large?

Complex routing (products, languages, custom post types) increases rewrite rules. Some plugins generate excessive rules or fail to clean up. If it’s huge, investigate what’s adding routes; flushing rewrite rules may help temporarily, but fixing the generator is the real solution.

5) Do I need to run OPTIMIZE TABLE after cleanup?

Not automatically. InnoDB doesn’t always benefit the way people expect, and OPTIMIZE can be disruptive. First reduce the data and observe performance. If you need to reclaim disk space or defragment significantly, schedule it deliberately.

6) I have Redis object cache. Why do I still care about autoload?

Because caches miss, restart, evict, and flush. Autoload bloat makes those normal events expensive. Also, big autoload data still needs to be unserialized and handled in PHP.

7) What if the biggest autoload option is from a must-have plugin?

Set it to autoload=no and test. If the plugin still behaves, you win. If it breaks, you have three options: tune the plugin settings to reduce stored data, replace the plugin, or accept the cost and mitigate with caching and capacity.

8) How do I know whether an option is safe to delete?

If it’s clearly a cache (often named like cache, transient-like, or includes timestamps) and can be regenerated, deletion is usually safe. If it’s configuration, license keys, or core settings, don’t delete. When unsure: back it up and set autoload=no first.

9) Can autoload bloat cause out-of-memory errors in PHP?

Yes. Large autoload payloads increase memory usage per request, especially when multiple copies exist during unserialize/processing. If you see OOM kills or PHP fatal memory errors correlated with cache misses, autoload size is a prime suspect.

10) How often should I audit wp_options?

Monthly for stable sites, weekly for busy e-commerce or plugin-heavy installations. If you frequently deploy or update plugins, monitor continuously and alert on growth.

Next steps you can do today

Do three things, in order:

  1. Measure autoload MB (Task 2) and write it down. If you don’t track it, you can’t stop regressions.
  2. Take the top offender (Task 3) and set autoload=no (Task 9). Test core flows immediately.
  3. Clean the easy garbage: expired transients and orphaned plugin options (Tasks 5–6, 11–14).

After that, treat autoload like a performance budget. Keep it small. Keep it boring. Your future self—probably the one on-call at 2 a.m.—will appreciate it.
