WordPress “Database Needs Repair”: What It Means and How to Fix It Safely

The banner shows up at the worst time: traffic is spiking, an editor is trying to publish, and WordPress politely informs you the
“database needs repair.” Translation: something about your site’s data layer is unhealthy enough that WordPress can’t trust it.

If you treat this like a “click the button” problem, you can absolutely turn a recoverable hiccup into real data loss.
If you treat it like a production incident—verify, snapshot, repair in the right order—you’ll usually be back in minutes.

What WordPress is actually saying

WordPress throws “database needs repair” when it believes one or more tables are damaged or inconsistent enough that normal queries
are failing. The immediate trigger is usually an error returned by MySQL/MariaDB during startup or during a routine query: a table
marked as crashed, a missing index, a corrupted page, an unexpected EOF, or a mismatch between what WordPress expects and what it
can read.

Under the hood, WordPress uses a small check in its database layer and, if it sees certain failures, it suggests a repair flow
using a built-in script (wp-admin/maint/repair.php). This can run SQL operations like REPAIR TABLE and
OPTIMIZE TABLE—which is fine for some storage engines and almost irrelevant for others.

Here’s the key operational truth: WordPress can detect that the database is unhappy, but it cannot diagnose whether you have
logical corruption (bad data), physical corruption (broken pages on disk), or availability issues
(timeouts, disk full, crash recovery looping). You must determine which one you’re holding.

One good quote to keep in your head during incidents: “Hope is not a strategy.” — Vince Lombardi. It’s blunt, but it fits.

Fast diagnosis playbook (first/second/third)

When the site is down, you don’t have time to admire the ruins. You need to identify the bottleneck quickly: database engine health,
table-level damage, or the storage beneath it.

First: confirm it’s not just a connectivity or credential issue

  • Check web server error logs for mysqli_real_connect failures, authentication errors, DNS resolution problems,
    or Too many connections.
  • Confirm DB_NAME, DB_USER, DB_PASSWORD, DB_HOST in wp-config.php
    match reality (and weren’t rotated without telling WordPress).
  • Try a simple DB ping from the WordPress host. If you can’t connect, “repair” won’t even start.

Second: check database error logs for the real complaint

  • Look for crash recovery messages, InnoDB corruption warnings, “table is marked as crashed,” “page checksum mismatch,” or “disk full.”
  • If InnoDB is crash-recovering in a loop, do not spam restart. You can dig the hole deeper.

Third: establish whether the underlying storage is lying to you

  • Check disk space and inode exhaustion. A “repair needed” is sometimes “we couldn’t write the last transaction.”
  • Check kernel logs for I/O errors, filesystem remounts, RAID degradation, or a cloud volume having a bad day.
  • If the storage layer is unstable, repairing tables is like repainting a house while it’s on fire.

Interesting facts and context (the stuff that matters)

  1. WordPress historically supported MyISAM heavily; modern WordPress installs typically run on InnoDB by default in MySQL/MariaDB.
    That matters because “repair” is a MyISAM-centric concept.
  2. The built-in WordPress repair script predates the era when managed hosts made DB access “someone else’s problem”—and it assumes you
    can run repair operations from the application tier.
  3. InnoDB uses a redo log and crash recovery; after a power loss, it often self-heals without any table-level repair, if the disk is fine.
  4. MyISAM tables can be “marked as crashed” after an unclean shutdown because the engine doesn’t have transactional crash recovery
    like InnoDB does.
  5. OPTIMIZE TABLE can lock tables for substantial time depending on engine and MySQL/MariaDB version—an “easy fix” can become
    an outage multiplier.
  6. A WordPress site can appear broken due to corruption in just one table: wp_options. That table is a hot spot for plugins
    and often the first to show pain.
  7. The error message is sometimes misleading: a query failing with “incorrect string value” or “illegal mix of collations” can bubble up
    in ways that look like “corruption,” but it’s really a charset/collation mismatch after an upgrade or restore.
  8. InnoDB “corruption” warnings sometimes come from underlying storage returning stale or wrong data (misbehaving cache, controller,
    or flaky virtual disk). Databases are great at detecting lies, not at fixing them.

Common causes and failure modes

1) Unclean shutdowns and crash recovery

Power loss, kernel panic, OOM kills, forced restarts by a hosting platform—databases hate surprises. InnoDB is designed to recover,
but recovery itself needs functional disk I/O and enough free space for logs and temporary work. MyISAM is more fragile; it may mark
tables as crashed and require explicit repairs.

2) Disk full (or inode full) masquerading as “corruption”

When the filesystem hits 100%, writes fail. InnoDB can get stuck with partially written operations or unable to extend its tablespace.
WordPress then sees query failures and suggests repair.

3) Actual on-disk corruption

This is the real one: bad sectors, broken RAID, misconfigured network block storage, or a filesystem that took damage. InnoDB will
complain about checksum mismatches or “page corruption.” At that point, “repair” from WordPress is mostly theater. You need backups,
recovery tactics, and sometimes a controlled extraction.

4) Plugin behavior and oversized rows

Some plugins treat the database like a junk drawer: huge serialized blobs in wp_options, endless transients, or log tables
that balloon until queries time out. You can see errors that look like the database is broken, but it’s just drowning.

5) “Optimization” operations gone wrong

OPTIMIZE TABLE, schema changes, and bulk deletions can trigger massive I/O, temporary disk usage, and long locks.
In shared environments, you can starve the database long enough that WordPress falls apart and claims it needs repair.

Joke #1: “Database needs repair” is WordPress’s way of saying, “I’m fine, but the thing I depend on is having a character-building moment.”

Hands-on tasks: commands, expected output, and decisions

Below are practical tasks you can run on a typical Linux host with MySQL/MariaDB and WordPress. Each task includes commands, example
output, what the output means, and the decision you make next. Copy/paste carefully; production systems are allergic to guesswork.

Task 1: Confirm WordPress can resolve and reach the DB host

cr0x@server:~$ getent hosts db.internal
10.20.30.40    db.internal

What it means: Name resolution works and you have an IP.

Decision: If this fails, fix DNS/hosts/VPC routing before touching WordPress repair tools.

Task 2: Basic TCP connectivity to the DB port

cr0x@server:~$ nc -vz db.internal 3306
Connection to db.internal 3306 port [tcp/mysql] succeeded!

What it means: Network path and security groups/firewall allow DB connections.

Decision: If blocked, stop. Table repairs won’t help a closed port.

Task 3: Validate DB credentials from wp-config.php

cr0x@server:~$ grep -E "DB_NAME|DB_USER|DB_HOST" /var/www/html/wp-config.php
define( 'DB_NAME', 'wordpressdb' );
define( 'DB_USER', 'wpuser' );
define( 'DB_HOST', 'db.internal' );

What it means: You’ve extracted the configured connection targets.

Decision: If these don’t match expected values (recent secret rotation, migrated DB), fix config first.

Task 4: Attempt a simple MySQL login and query

cr0x@server:~$ mysql -h db.internal -u wpuser -p -e "SELECT 1;"
Enter password: 
1
1

What it means: Authentication works and the server is responding.

Decision: If you get “Access denied,” repair is irrelevant—fix credentials/privileges.

Task 5: Check free disk and inode headroom on the DB host

cr0x@server:~$ df -h /var/lib/mysql
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p3  200G  198G  2.0G  99% /var/lib/mysql

What it means: You’re one temp table away from a bad time.

Decision: Free space first (logs, old backups, binlogs) before any repair/optimize operation.
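
If binary logs are what's eating the disk, reclaim space from MySQL itself rather than guessing at files. A minimal sketch, assuming binary logging is enabled and no replica or point-in-time-recovery process still needs the older logs (verify that first):

cr0x@server:~$ mysql -h db.internal -u root -p -e "SHOW BINARY LOGS;"
cr0x@server:~$ mysql -h db.internal -u root -p -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;"

The three-day cutoff is an arbitrary example; keep whatever your replication and recovery strategy actually requires.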

cr0x@server:~$ df -i /var/lib/mysql
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/nvme0n1p3  13M     13M     2K   100% /var/lib/mysql

What it means: Inode exhaustion can break writes even if “space” looks fine.

Decision: Clean up excessive small files (tmp, logs). If the DB can’t create files, repairs can fail mid-flight.
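
If inodes are the problem, find where they went before deleting anything. A quick sketch using GNU du (the --inodes flag needs a reasonably recent coreutils); the directories it ranks highest are your cleanup targets:

cr0x@server:~$ sudo du --inodes -x /var 2>/dev/null | sort -n | tail -n 5

PHP session directories, mail spools, and runaway log rotation are the usual suspects.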

Task 6: Read MySQL/MariaDB error logs for the actual failure

cr0x@server:~$ sudo tail -n 60 /var/log/mysql/error.log
2025-12-27T07:10:12.002345Z 0 [Warning] InnoDB: Database page corruption on disk or a failed file read of page [page id: space=123, page number=456]
2025-12-27T07:10:12.002400Z 0 [ERROR] InnoDB: Page checksum mismatch on read.
2025-12-27T07:10:12.002430Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Data structure corruption

What it means: This is not “run REPAIR TABLE.” This is storage integrity territory.

Decision: Stop making writes. Take a snapshot/backup. Plan for recovery/extraction from backups or controlled salvage.

Task 7: Check for MyISAM tables marked as crashed

cr0x@server:~$ mysql -h db.internal -u root -p -e "SELECT TABLE_SCHEMA,TABLE_NAME,ENGINE,TABLE_ROWS,CREATE_OPTIONS FROM information_schema.TABLES WHERE TABLE_SCHEMA='wordpressdb' AND ENGINE='MyISAM';"
Enter password:
+--------------+-------------------+--------+------------+----------------+
| TABLE_SCHEMA | TABLE_NAME        | ENGINE | TABLE_ROWS | CREATE_OPTIONS |
+--------------+-------------------+--------+------------+----------------+
| wordpressdb  | wp_search_cache   | MyISAM |      12034 |                |
+--------------+-------------------+--------+------------+----------------+

What it means: You have at least one MyISAM table. Those can legitimately require REPAIR TABLE.

Decision: Identify whether the error message specifically references these tables. If yes, repair them—after backing up.

Task 8: Run a table check to locate corruption (fast, low risk)

cr0x@server:~$ mysqlcheck -h db.internal -u root -p --databases wordpressdb
Enter password:
wordpressdb.wp_posts                                 OK
wordpressdb.wp_options                               OK
wordpressdb.wp_search_cache                          error    : Table is marked as crashed and should be repaired
wordpressdb.wp_search_cache                          status   : Operation failed

What it means: One table is explicitly marked as crashed.

Decision: Repair only the affected table(s). Don’t “optimize everything” out of boredom.

Task 9: Back up before repair (logical dump)

cr0x@server:~$ mysqldump -h db.internal -u root -p --single-transaction --routines --triggers wordpressdb | gzip -1 > /tmp/wordpressdb.sql.gz
Enter password:

What it means: You’ve taken a consistent logical backup (best-effort; may fail if tables are unreadable).

Decision: If the dump fails on a specific table, note it. Consider dumping table-by-table to salvage what you can.
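
If the full dump dies partway, a per-table loop tells you exactly which tables are salvageable and which are not. A minimal sketch, assuming the root credentials live in ~/.my.cnf so the loop doesn't prompt for a password on every table:

cr0x@server:~$ mysql -h db.internal -N -e "SHOW TABLES FROM wordpressdb;" > /tmp/tables.txt
cr0x@server:~$ while read -r t; do mysqldump -h db.internal wordpressdb "$t" > "/tmp/salvage_${t}.sql" || echo "$t" >> /tmp/salvage_failed.txt; done < /tmp/tables.txt

Anything listed in /tmp/salvage_failed.txt is your damage report; everything else you now have a copy of.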

Task 10: Repair a MyISAM table directly (targeted)

cr0x@server:~$ mysql -h db.internal -u root -p -e "REPAIR TABLE wordpressdb.wp_search_cache;"
Enter password:
+------------------------------+--------+----------+----------+
| Table                        | Op     | Msg_type | Msg_text |
+------------------------------+--------+----------+----------+
| wordpressdb.wp_search_cache  | repair | status   | OK       |
+------------------------------+--------+----------+----------+

What it means: The table repair succeeded.

Decision: Re-run mysqlcheck. If it’s clean, bring the site back and monitor logs for recurrence.

Task 11: If WordPress is available, use WP-CLI to check/repair (safer than web repair exposure)

cr0x@server:~$ cd /var/www/html
cr0x@server:~$ wp db check
Success: Database checked.
cr0x@server:~$ wp db repair
Success: Database repaired.

What it means: WP-CLI successfully executed the DB maintenance steps WordPress knows about.

Decision: If WP-CLI fails with SQL errors, go back to MySQL logs and targeted checks. Don’t keep retrying blindly.

Task 12: Inspect the worst offender table size and growth (often wp_options)

cr0x@server:~$ mysql -h db.internal -u root -p -e "SELECT table_name, ROUND((data_length+index_length)/1024/1024,1) AS mb FROM information_schema.TABLES WHERE table_schema='wordpressdb' ORDER BY (data_length+index_length) DESC LIMIT 10;"
Enter password:
+-------------------------+-------+
| table_name              | mb    |
+-------------------------+-------+
| wp_options              | 512.4 |
| wp_posts                | 240.7 |
| wp_postmeta             | 210.3 |
| wp_actionscheduler_logs | 180.9 |
+-------------------------+-------+

What it means: Your options table is huge, which often correlates with slow queries, timeouts, and “repair” symptoms.

Decision: Investigate autoloaded options and plugin behavior. This is remediation, not repair.

Task 13: Identify autoload bloat in wp_options

cr0x@server:~$ mysql -h db.internal -u root -p wordpressdb -e "SELECT COUNT(*) AS autoloaded_rows, ROUND(SUM(LENGTH(option_value))/1024/1024,2) AS autoload_mb FROM wp_options WHERE autoload='yes';"
Enter password:
+-----------------+-------------+
| autoloaded_rows | autoload_mb |
+-----------------+-------------+
|           12456 |       88.40 |
+-----------------+-------------+

What it means: WordPress loads autoloaded options on many requests. 88 MB is not “fine.” It’s a denial-of-service you’re hosting yourself.

Decision: Reduce autoloaded options (plugin settings, transients) and add caching. Don’t run table “repair” and call it solved.
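
To see which options are actually responsible, sort the autoloaded rows by size; the result usually names the guilty plugin immediately. A small sketch (the option names in your output will obviously be your own):

cr0x@server:~$ mysql -h db.internal -u root -p wordpressdb -e "SELECT option_name, ROUND(LENGTH(option_value)/1024,1) AS kb FROM wp_options WHERE autoload='yes' ORDER BY LENGTH(option_value) DESC LIMIT 10;"

Whatever dominates that list points you at the plugin or transient pattern to fix or remove.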

Task 14: Check for long-running queries or lock waits (repair attempts can worsen this)

cr0x@server:~$ mysql -h db.internal -u root -p -e "SHOW FULL PROCESSLIST\G"
Enter password:
*************************** 1. row ***************************
     Id: 8123
   User: wpuser
   Host: 10.20.40.55:52111
     db: wordpressdb
Command: Query
   Time: 128
  State: Waiting for table metadata lock
   Info: OPTIMIZE TABLE wp_postmeta

What it means: Someone is optimizing and blocking others. This can make WordPress look “broken.”

Decision: Stop/kill the blocking operation if appropriate, then schedule maintenance during a window with a real plan.
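
If you decide the blocking operation has to stop now, kill it by the Id from the processlist (8123 in the example above). KILL QUERY stops just the statement; KILL terminates the whole connection:

cr0x@server:~$ mysql -h db.internal -u root -p -e "KILL QUERY 8123;"

Be aware that killing a long table rewrite can itself take time to roll back; it's not free, just usually cheaper than the alternative.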

Task 15: Verify InnoDB engine health quickly

cr0x@server:~$ mysql -h db.internal -u root -p -e "SHOW ENGINE INNODB STATUS\G" | sed -n '1,120p'
Enter password:
------------
TRANSACTIONS
------------
Trx id counter 987654321
Purge done for trx's n:o < 987650000 undo n:o < 0 state: running but idle
History list length 12

What it means: InnoDB is running and not obviously stuck with a massive history list or active rollback.

Decision: If status shows crash recovery, repeated corruption, or “waiting for” I/O, focus on storage and recovery options.

Task 16: Basic storage error signals (don’t skip this)

cr0x@server:~$ sudo dmesg -T | tail -n 20
[Sat Dec 27 07:08:10 2025] blk_update_request: I/O error, dev nvme0n1, sector 123456789
[Sat Dec 27 07:08:10 2025] EXT4-fs error (device nvme0n1p3): ext4_find_entry:1463: inode #262144: comm mysqld: reading directory lblock 0

What it means: Your database is reading garbage because the disk is failing or the filesystem is damaged.

Decision: Stop writes, snapshot if possible, migrate to healthy storage, and restore from backups. “Repair tables” is not the fix.

Using WP_ALLOW_REPAIR safely (and not turning it into an open bar)

WordPress includes a built-in repair endpoint at wp-admin/maint/repair.php. It’s disabled by default and can be enabled
by setting define('WP_ALLOW_REPAIR', true); in wp-config.php.

The good: it’s simple and can repair certain table issues, mostly around MyISAM. The bad: it is an HTTP-accessible maintenance page,
and WordPress explicitly warns that it does not require login when enabled. That means if you leave it on, you’re inviting strangers
to poke at your database maintenance functions. They may not steal data, but they can absolutely create load and lock tables.

Safe usage pattern

  • Prefer WP-CLI (wp db check / wp db repair) if you have shell access.
  • If you must use the web repair page, enable WP_ALLOW_REPAIR briefly, run the repair, then remove it immediately.
  • Restrict access at the web server level (IP allowlist) while it’s enabled.

Example: temporarily enable repair

cr0x@server:~$ sudo sed -i "s/^.*WP_ALLOW_REPAIR.*$/define('WP_ALLOW_REPAIR', true);/g" /var/www/html/wp-config.php

What it means: You’ve enabled repair mode (this assumes a line already existed; be careful with automated edits).

Decision: Immediately add an IP restriction or take the site behind a maintenance gate while you run the repair.

Example: lock repair endpoint to your IP (Nginx)

cr0x@server:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

What it means: Config syntax is valid before you reload.

Decision: Add a location block for /wp-admin/maint/repair.php and restrict it, then reload Nginx.
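
A minimal sketch of such a block, assuming PHP-FPM serves PHP for this site (203.0.113.10 is a placeholder; adjust the allowed IP, the include, and the fastcgi_pass socket to match how this vhost already runs PHP):

location = /wp-admin/maint/repair.php {
    allow 203.0.113.10;   # your admin IP
    deny  all;            # everyone else gets 403
    include snippets/fastcgi-php.conf;            # however this vhost already handles PHP
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;   # match your existing PHP-FPM socket
}

Put it in the same server block that serves WordPress, run nginx -t again, then reload.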

Joke #2: Leaving WP_ALLOW_REPAIR enabled is like leaving your car running outside the office—technically convenient, socially ambitious.

InnoDB vs MyISAM repairs: very different beasts

MyISAM: REPAIR TABLE is real

If the affected table is MyISAM, REPAIR TABLE can rebuild indexes and fix certain structural issues. It may still fail if the
data file is badly damaged, but it’s a legitimate first response after you’ve backed up what you can.

MyISAM corruption commonly appears as “Table is marked as crashed” or “Can’t open file.” Repairs can be fast for small tables and brutal
for large ones. Also, MyISAM uses table-level locks; repairs can block reads/writes.

InnoDB: “repair” often means “recover”

With InnoDB, REPAIR TABLE doesn’t do what people think. InnoDB relies on crash recovery, redo logs, and checksums. When InnoDB
reports corruption, your best “repair” is usually:

  • Confirm storage health (I/O errors, filesystem health, cloud volume events).
  • Restore from a known-good backup/snapshot if available.
  • If you must salvage, use controlled extraction: dump what reads cleanly, isolate damaged tables, and rebuild from schema + surviving data.

Don’t confuse “optimize” with “repair”

Many WordPress guides treat OPTIMIZE TABLE like a magic broom. In practice, it can:
lock tables, rewrite them, require huge temporary space, and spike I/O hard enough to surface latent storage issues.
It’s a maintenance operation, not emergency medicine.

Storage and hosting angles (where corruption often starts)

As an SRE, I’ve learned to assume the application is the messenger and storage is the crime scene. Database corruption is frequently
downstream of infrastructure events you didn’t notice: a noisy neighbor saturating IOPS, a node reboot, a filesystem remounting read-only,
or a volume that briefly returned errors and then pretended nothing happened.

Look for these storage patterns

  • Disk full: easy to confirm, easy to fix, surprisingly common.
  • Read-only filesystem: sudden failures across the stack; MySQL may stop or refuse writes.
  • I/O errors: kernel logs show them; DB logs show checksum mismatches; WordPress shows “repair.”
  • Latency spikes: can masquerade as corruption when queries time out or clients disconnect mid-operation.

Practical storage sanity checks (Linux)

cr0x@server:~$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,FSTYPE
NAME         SIZE TYPE MOUNTPOINT     FSTYPE
nvme0n1      500G disk
└─nvme0n1p3  200G part /var/lib/mysql ext4

What it means: Confirms what device backs the MySQL datadir.

Decision: If this is a network volume or ephemeral disk, adjust your expectations and your backup posture.

cr0x@server:~$ mount | grep /var/lib/mysql
/dev/nvme0n1p3 on /var/lib/mysql type ext4 (rw,relatime,errors=remount-ro)

What it means: Filesystem will remount read-only on errors (common default).

Decision: If you see it remounted read-only, you’re not repairing anything until you fix the filesystem/disk.

Three corporate mini-stories from the trenches

Incident #1: The outage caused by a wrong assumption

A mid-sized company ran WordPress for a marketing site that suddenly started showing “database needs repair.” The on-call engineer saw
the message, remembered the WordPress repair endpoint, enabled WP_ALLOW_REPAIR, clicked “Repair and Optimize,” and waited.

The database got slower. Then it got unavailable. The site that was partially degraded became fully down, and the monitoring started
screaming about connection errors. The engineer assumed “it’s repairing, give it time.” That assumption turned out to be expensive.

The root cause wasn’t corruption at all. The database disk was 99% full. The optimize step tried to rebuild a couple of large tables,
created temporary files, and hit ENOSPC mid-operation. MySQL started failing more queries, and WordPress—trying to be helpful—kept
surfacing the same repair advice.

The fix was boring: free disk space, restart MySQL cleanly, and rerun a targeted check. They didn’t need optimization; they needed
capacity management. The post-incident change was even more boring: an alert at 85% disk usage and a runbook that said “free space
before repairs.” Not glamorous, but it stopped the repeat incident.

Incident #2: An optimization that backfired

Another team had a WordPress install with years of plugin churn. Someone noticed the database size growing and decided to “clean it up”
by running nightly optimizations across all tables. It looked responsible, like flossing. The first few nights were quiet.

Then traffic increased and the site started throwing intermittent errors at peak. Editors complained that saving drafts sometimes hung.
The database server showed high I/O utilization at odd hours, and daytime latency was getting worse. People blamed the hosting platform,
which is what people do when graphs look scary.

The issue: the optimize job was rewriting large InnoDB tables nightly, generating heavy background I/O, churning the buffer pool, and
occasionally holding metadata locks that collided with real traffic. In other words: the “maintenance” was competing with production,
and production lost.

They stopped the nightly optimize, then did targeted work: removed a plugin that was storing large autoloaded blobs, cleaned up
transients, and added a proper object cache. The database got smaller naturally over time because they stopped feeding it junk.
Optimization didn’t fix the underlying behavior; it only made the system sweat harder.

Incident #3: The boring but correct practice that saved the day

A large enterprise ran multiple WordPress properties. Nothing fancy—until an infrastructure patch triggered an unexpected reboot of a DB
VM during a busy content push. WordPress started showing database repair prompts, and a few pages returned errors.

The team didn’t touch the repair endpoint. They followed their runbook: freeze writes (maintenance mode), snapshot the volume,
take a fresh logical dump, and only then run checks. It felt slow in the moment. It was the right kind of slow.

The checks showed one small MyISAM legacy table marked as crashed (a leftover from an ancient plugin), and everything else was fine.
They repaired that table, validated the site, and brought it back. The snapshot meant that if the repair had made things worse, they
could roll back instantly.

The quiet hero was the practice nobody brags about in status updates: always take a recoverable point-in-time copy before you “fix”
anything. It turned a risky situation into a controlled maintenance task, and it kept the incident from becoming a recovery saga.

Common mistakes: symptoms → root cause → fix

1) Symptom: “Database needs repair” appears right after a migration

Root cause: Incomplete import, missing tables, wrong table prefix, or charset/collation mismatch causing queries to fail.

Fix: Verify table prefix in wp-config.php, confirm table count and presence of core tables, and check DB charset/collation. Re-import cleanly from dump.
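
Two quick queries cover most of that verification: one for the database's default charset/collation, one for the table count (a stock single-site WordPress install has 12 core tables; plugins add more). A sketch, assuming the database is named wordpressdb:

cr0x@server:~$ mysql -h db.internal -u root -p -e "SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME FROM information_schema.SCHEMATA WHERE SCHEMA_NAME='wordpressdb';"
cr0x@server:~$ mysql -h db.internal -u root -p -e "SELECT COUNT(*) AS tables FROM information_schema.TABLES WHERE TABLE_SCHEMA='wordpressdb';"

If the charset differs from the source environment (latin1 vs utf8mb4 is the classic), plan a proper conversion instead of repairs.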

2) Symptom: Repair runs, but the message comes back hours later

Root cause: Underlying storage issues, recurring unclean shutdowns, or a MyISAM table repeatedly corrupted by crashes.

Fix: Eliminate unclean shutdowns; move legacy MyISAM tables to InnoDB where possible; check kernel/disk logs and hosting events.
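
Converting a legacy table is a single statement, but it rewrites the table, so do it in a maintenance window, with a backup and enough free disk for the copy. A sketch using the crash-prone table from earlier:

cr0x@server:~$ mysql -h db.internal -u root -p -e "ALTER TABLE wordpressdb.wp_search_cache ENGINE=InnoDB;"

Verify afterwards with the same information_schema query from Task 7; the ENGINE column should now read InnoDB.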

3) Symptom: Repair page hangs or times out

Root cause: Large tables, metadata locks, or the DB is saturated (I/O bound or CPU bound). Sometimes PHP max execution time ends it mid-run.

Fix: Prefer command-line tools (mysqlcheck, WP-CLI). Repair one table at a time. Schedule heavy operations off-peak.

4) Symptom: “Error establishing a database connection” plus repair prompt confusion

Root cause: Connection limits, DB down, DNS/network issues, or credential mismatch. Not corruption.

Fix: Fix connectivity and credentials. Check max_connections and application connection pooling behavior.
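
Checking the connection ceiling against actual usage takes one line. A quick sketch:

cr0x@server:~$ mysql -h db.internal -u root -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW GLOBAL STATUS LIKE 'Max_used_connections';"

If Max_used_connections is sitting at or near max_connections, the "repair" prompt is a symptom of connection exhaustion, not table damage.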

5) Symptom: MySQL restarts repeatedly after the repair warning

Root cause: InnoDB crash recovery failing due to corruption, disk full, or filesystem errors.

Fix: Stop thrashing. Preserve state. Check error logs and disk health. Restore from backup/snapshot or perform controlled salvage.

6) Symptom: Admin loads, but front-end throws random 500s and “repair” notices

Root cause: Autoload bloat, runaway plugin tables, slow queries timing out, or lock contention.

Fix: Identify biggest tables, investigate autoloaded options, remove/rotate log tables, add caching, and tune indexes where appropriate.

7) Symptom: You fixed it once and now it’s worse

Root cause: You ran optimize/repair operations on a failing disk or with no free space, making partial rewrites and escalating damage.

Fix: Restore to pre-repair snapshot/backup, stabilize storage and capacity, then re-attempt targeted repairs with a rollback plan.

Checklists / step-by-step plan

Emergency plan (site degraded or down)

  1. Stabilize: put WordPress into maintenance mode or otherwise reduce writes (disable cron, pause heavy jobs).
  2. Confirm connectivity: verify DB host reachable and credentials valid.
  3. Check capacity: confirm disk space and inodes on DB host; free space if needed.
  4. Read DB logs: identify whether this is MyISAM crash, InnoDB crash recovery, or disk I/O errors.
  5. Take a recoverable copy: snapshot volume if available; then attempt a mysqldump.
  6. Locate the failing tables: run mysqlcheck or targeted CHECK TABLE.
  7. Repair only what’s broken: use REPAIR TABLE for MyISAM; avoid broad optimize operations.
  8. Validate: re-run checks; load test key pages; watch error logs.
  9. Remove repair exposure: if you enabled WP_ALLOW_REPAIR, remove it immediately.
  10. Monitor: watch disk I/O, MySQL error log, and WordPress errors for 24–48 hours.

Stability plan (after you’ve recovered)

  1. Convert legacy tables: migrate MyISAM tables to InnoDB where feasible.
  2. Fix plugin data hygiene: reduce autoload bloat, rotate log tables, remove abandoned plugins.
  3. Implement backups you can restore: both snapshots and logical dumps, tested regularly.
  4. Add guardrails: disk space alerts, slow query logging, and a maintenance window policy for table rewrites.
  5. Rehearse recovery: do a restore drill. Once. You’ll sleep better forever.

FAQ

Does “database needs repair” always mean corruption?

No. It means WordPress encountered database errors consistent with damaged tables or inconsistent metadata. Disk full, lock contention,
timeouts, and credential issues can all mimic “repair-needed” symptoms.

Is it safe to click “Repair Database” in WordPress?

Sometimes. It’s relatively safe for MyISAM table issues and minor inconsistencies. It is not a substitute for backups, and it won’t fix
underlying storage corruption. If you can, take a snapshot/dump first.

Should I run “Repair and Optimize”?

In emergencies, usually no. Optimize can be heavy, lock-prone, and disk-hungry. Repair the specific broken tables first; optimize later
during a window, with free space and a rollback plan.

Why does this keep happening after I repair?

Repeated repairs typically point to recurring unclean shutdowns, a failing disk, or a fragile MyISAM table that’s still in use.
Another common cause is a plugin generating abusive writes and crashing the system under load.

Can WP-CLI fix this without enabling WP_ALLOW_REPAIR?

Yes. WP-CLI’s wp db check and wp db repair are the same idea but executed via CLI, which is usually safer
operationally than exposing a web endpoint.

What if the database won’t start at all?

Then WordPress can’t repair anything. Go to the database layer: check disk space, filesystem health, and MySQL error logs. If InnoDB
reports persistent corruption, restore from backups/snapshots or perform controlled data extraction.

Will converting tables to InnoDB prevent this?

It reduces one class of issues (MyISAM crash-marked tables) because InnoDB has crash recovery. It does not protect you from disk
corruption, disk full, bad migrations, or badly behaved plugins.

Can I repair from phpMyAdmin?

You can, but it’s not my first choice. Web DB tools are convenient and also great at timing out mid-operation. For repairs and checks,
CLI tools (mysqlcheck, mysql, WP-CLI) are more controllable and easier to audit.

What’s the safest backup to take before repair?

Best is a storage snapshot of the DB volume (fast rollback) plus a logical dump (mysqldump --single-transaction) for
portability. If corruption is suspected, expect dumps to fail on specific tables and salvage what you can.

Do I need to disable WordPress cron during repairs?

If the site is unstable, yes. Cron and background tasks can keep writing while you’re trying to stabilize, which complicates recovery.
Pause the noise, fix the database, then re-enable background work.
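
The quickest way to quiet WP-cron during an incident is a temporary constant in wp-config.php (remove it, or replace it with a real system cron job, once things are stable):

define('DISABLE_WP_CRON', true);

This stops WordPress from spawning wp-cron.php on page loads; scheduled tasks simply wait until you re-enable it.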

Next steps you should actually take

When WordPress says “database needs repair,” don’t treat it like a UI prompt. Treat it like a signal that the data layer is throwing
errors and your job is to identify which class: connectivity, capacity, engine-level crash recovery, or real corruption.

Your practical next steps:

  1. Run the fast diagnosis playbook: connectivity → DB logs → storage health.
  2. Back up (snapshot + dump) before any repair attempt.
  3. Use mysqlcheck or WP-CLI to identify the specific failing tables.
  4. Repair only what’s broken; avoid “optimize everything” during an incident.
  5. After recovery, fix the root cause: disk headroom, plugin data hygiene, crash patterns, and tested backups.

If you do this right, the “repair” message becomes a short maintenance event, not a weekend hobby.
