MySQL vs MariaDB: WooCommerce checkout lag—one setting fixes it, the other just masks it

Checkout should be boring. Click “Place order,” payment happens, receipt lands, everyone goes home. When checkout takes 6–20 seconds (or times out) it’s rarely “WordPress being WordPress.” It’s usually your database doing exactly what you told it to do—just not what you wanted.

I’ve watched teams spend weeks swapping MySQL for MariaDB (or back), adding CPU, caching pages that can’t be cached, even blaming payment gateways. Then a single InnoDB setting changes and the lag evaporates. The catch: there’s another setting people love to change that only masks the problem—until the day it turns into corrupted expectations and late-night restores.

What checkout lag really looks like in production

WooCommerce checkout “lag” is rarely one thing. It’s a bundle of small synchronous steps that turn into a big ugly wall-clock delay:

  • Cart totals and taxes recalculated.
  • Order row inserted (and sometimes re-updated several times).
  • Stock reductions and order item inserts.
  • Customer/session meta updates.
  • Payment plugin writes logs, adds metadata, maybe does remote calls.
  • Transactional emails queued or sent.

In the database, that becomes a short burst of writes that must be consistent. If your DB can’t absorb that burst—because the redo log is too small, fsync latency is high, locks pile up, or the buffer pool is undersized—you’ll see:

  • High TTFB on /checkout or /?wc-ajax=checkout.
  • Spikes in “Threads running.”
  • Lock waits in InnoDB.
  • CPU not necessarily pegged (the DB is waiting on I/O).
  • Payments succeed but the order confirmation page stalls (classic: web waits on DB commit, gateway already took money).

Operational framing: checkout is your write path. Your write path is your truth serum. Caches can hide a lot; they can’t hide commit latency forever.

The two settings: one fixes, one masks

The setting that actually fixes checkout lag (most of the time): redo log capacity

When checkout write bursts hit InnoDB, the redo log is the shock absorber. If it’s too small, InnoDB is forced into frequent checkpoints, flushing dirty pages aggressively. That turns your bursty “write and move on” pattern into “write, stall, flush, stall, checkpoint, stall.” Checkout becomes a queue.

MySQL 8 approach (preferred): tune innodb_redo_log_capacity (available from 8.0.30; earlier 8.0 releases still use the old log-file settings).

MariaDB and older MySQL approach: tune innodb_log_file_size (and innodb_log_files_in_group if applicable), understanding the operational implications.

Why this is the “fix” setting: it reduces forced checkpoint pressure during write bursts. It doesn’t cheat durability; it just lets InnoDB batch work intelligently.
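
As a reference point, here's a minimal my.cnf sketch for both families. The sizes are illustrative starting points, not recommendations; size against your measured write rate (see the FAQ on redo sizing below).

# MySQL 8.0.30+ ([mysqld] section): a single knob for total redo capacity
innodb_redo_log_capacity = 2G

# MariaDB / MySQL before 8.0.30: redo size comes from innodb_log_file_size
# (multiplied by innodb_log_files_in_group on versions that still use it);
# changing it on older versions requires a clean restart
innodb_log_file_size = 1G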

The setting people change that mostly masks the problem: fsync/durability toggle

The placebo-with-teeth is:

  • innodb_flush_log_at_trx_commit (set to 2 or 0)
  • sometimes paired with sync_binlog relaxed away from 1 (0, or a large N), so the binary log is no longer fsynced at every commit

Yes, it can make checkout faster because you stop paying the full fsync cost per transaction. But it’s not a performance fix; it’s a durability trade. On a crash or power loss, you can lose committed transactions (or replicate inconsistently if binlog durability is also relaxed). That’s not “tuning,” that’s choosing which kind of incident you prefer.

Use-case where it’s acceptable: ephemeral environments, non-critical stores, or cases where you’ve explicitly designed around it (battery-backed cache, reliable power, and you’re comfortable with a small window of loss).

Use-case where it will bite you: anything with real money, real inventory, chargebacks, or regulatory expectations. Which is most WooCommerce stores that pay salaries.
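
If you want the durable baseline pinned explicitly instead of inherited from defaults, the [mysqld] lines are short. A sketch, not a tuning recommendation:

[mysqld]
# Flush and fsync the redo log at every commit; fsync the binary log at every commit
innodb_flush_log_at_trx_commit = 1
sync_binlog                    = 1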

Joke #1: Setting innodb_flush_log_at_trx_commit=0 is like removing your seatbelt because it wrinkles your shirt.

What “one fixes, one masks” looks like on graphs

When redo log capacity is too small, you’ll often see:

  • Spikes in disk writes and fsync wait.
  • InnoDB checkpoint age hitting a ceiling.
  • Checkout latency correlating with background flushing.

When you “mask” with innodb_flush_log_at_trx_commit=2, you’ll see p95 latency improve—but you haven’t addressed flushing pressure. You’ve just delayed it and reduced fsync frequency. Under sustained load, the stalls return in a different shape (or show up as replication lag, recovery time, and unpleasant conversations).

MySQL vs MariaDB: where the differences actually matter

For WooCommerce, the question isn’t “which is faster in general.” It’s “which fails more predictably under my workload, with my plugins, on my storage.” MySQL and MariaDB share ancestry but diverged enough that “drop-in replacement” is marketing, not engineering.

1) Redo logging and checkpoint behavior: the boring hotspot

MySQL 8.0.30 introduced innodb_redo_log_capacity, which is operationally friendlier than the old “stop server, move ib_logfile*, restart” dance. MariaDB still commonly uses innodb_log_file_size (depending on version), and the operational path can be more manual.

In practice, this means MySQL 8 makes it easier to do the right thing: size redo logs for write bursts without planning a maintenance window that everyone “forgets” to schedule.
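
On MySQL 8.0.30+ the change is one online statement, and SET PERSIST keeps it across restarts. (Newer MariaDB releases have made innodb_log_file_size dynamically resizable as well; check your version before assuming it.) The value below is illustrative:

-- MySQL 8.0.30+ only: resize redo capacity online, no restart (2 GiB shown)
SET PERSIST innodb_redo_log_capacity = 2147483648;

-- Confirm the running value
SELECT @@innodb_redo_log_capacity;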

2) Thread handling under concurrency: MariaDB’s thread pool can help

MariaDB offers a thread pool in some builds/editions that can stabilize performance when you have many short-lived connections. WooCommerce behind PHP-FPM can create a churny connection pattern if you aren’t using persistent connections (and you often shouldn’t).

MySQL has its own concurrency handling improvements over time, but if your pain is “too many connections/threads” rather than I/O stalls, MariaDB’s thread pool can be a pragmatic win.
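
A minimal sketch of the MariaDB thread pool (the pool-of-threads handler ships in MariaDB's Unix builds; values are illustrative and worth benchmarking against your own connection pattern):

[mysqld]
# Serve many short-lived connections from a fixed pool of worker threads
thread_handling  = pool-of-threads
thread_pool_size = 8    # typically around the number of CPU cores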

3) Query optimizer and performance schema differences: tooling affects outcomes

When you’re diagnosing checkout lag, you need credible visibility into waits, locks, and hot queries. MySQL 8’s performance schema and sys schema ergonomics are generally excellent for this kind of work. MariaDB’s instrumentation differs; you may need to rely more on information_schema, engine status, and targeted slow logs.
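
As a sketch of what that looks like on MySQL 8, two sys schema views answer the usual checkout questions: who is blocking whom right now, and which statement digests are scanning whole tables. Column names here follow the stock sys views:

-- Current InnoDB lock waits: blocking and waiting sessions side by side
SELECT * FROM sys.innodb_lock_waits\G

-- Statement digests doing full table scans (the view is pre-filtered to no-index access)
SELECT query, db, exec_count, total_latency
FROM sys.statements_with_full_table_scans
LIMIT 10;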

4) Replication behavior and defaults: don’t assume parity

If you run replicas for reads or failover, differences in GTID implementations, binlog formats, and default settings matter. The “masking” durability settings are especially dangerous here: you can get a checkout that “worked” on the primary but never safely landed in the binlog before a crash.

5) The real deciding factor: your storage and fsync reality

Both engines can be fast on good storage and miserable on bad storage. NVMe with sane write caching policy is a different universe from network-attached disks with unpredictable latency. Checkout lag is often the story of fsync on a sad disk.
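
You can measure that fsync reality directly, independent of the database, with a short fio run against the same filesystem that holds the datadir. The directory below is a placeholder; don't point it at the live datadir itself. Look at the fsync/fdatasync latency percentiles in the output:

cr0x@server:~$ mkdir -p /var/lib/mysql-iotest
cr0x@server:~$ fio --name=fsync-check --directory=/var/lib/mysql-iotest --ioengine=sync --rw=write --bs=4k --size=256m --fdatasync=1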

Interesting facts and history that explain today’s weirdness

  • Fact 1: MariaDB forked from MySQL after Oracle acquired Sun, and compatibility has been “mostly” rather than “always.” That “mostly” is where outages live.
  • Fact 2: InnoDB wasn’t originally “the MySQL engine.” It became the default because it actually survives real workloads: crash recovery, row-level locking, and transactions.
  • Fact 3: The old redo log sizing model (log file size × files in group) is why changing it historically required deleting/renaming ib_logfiles and restarting. MySQL 8 improved this with redo log capacity management.
  • Fact 4: innodb_flush_log_at_trx_commit defaults to 1 for a reason: it’s the durable setting. Performance pressure didn’t change physics; it just changed how many people are tempted to cheat.
  • Fact 5: WooCommerce’s core tables grew over time (order items, meta tables, analytics tables). The meta-table pattern is flexible but punishes poorly indexed reads and heavy writes.
  • Fact 6: WordPress tables defaulted to MyISAM in the ancient past. Many “WordPress is slow” myths come from that era; modern bottlenecks are different and more subtle.
  • Fact 7: MariaDB introduced Aria as a MyISAM replacement, but WooCommerce should not be mixing engines for transactional order writes unless you like mystery bugs.
  • Fact 8: The “query cache” used to be a MySQL feature people leaned on. It’s gone in MySQL 8 because it caused contention. If someone tells you to “enable query cache,” you’re hearing a ghost story.

Fast diagnosis playbook (first/second/third)

This is how you find the bottleneck without spending three days arguing with a plugin vendor.

First: confirm it’s DB wait, not PHP or network

  • Measure TTFB at the load balancer / reverse proxy and compare to upstream response time.
  • Check PHP-FPM saturation (active processes, slowlog).
  • If upstream is slow and PHP isn’t pegged, suspect DB waits.
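
To put a number on the first bullet, curl's timing variables give you TTFB from wherever you run it. The hostname is a placeholder, and a bare GET only approximates checkout; for the real thing, replay a checkout POST captured from browser dev tools:

cr0x@server:~$ curl -o /dev/null -s -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' https://shop.example.com/checkout/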

Second: identify whether you’re I/O-bound, lock-bound, or CPU-bound

  • I/O-bound: high fsync latency, dirty page flushing, InnoDB waits.
  • Lock-bound: lock wait timeouts, many threads waiting, hot rows/tables (often wp_options or order meta).
  • CPU-bound: high CPU, heavy queries, poor indexes, big sorts/temp tables.
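
A handful of global status counters, sampled during a checkout burst, usually tells you which bucket you're in: nonzero pending fsyncs point at I/O, buffer pool wait-free events at flushing pressure, current row lock waits at contention.

cr0x@server:~$ mysql -Nse "SHOW GLOBAL STATUS WHERE Variable_name IN ('Innodb_data_pending_fsyncs','Innodb_os_log_pending_fsyncs','Innodb_buffer_pool_wait_free','Innodb_row_lock_current_waits');"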

Third: fix the one thing that removes the queue

  • If I/O-bound and redo log is small: increase redo log capacity (the real fix).
  • If lock-bound: reduce lock contention (indexes, transaction scope, plugin behavior).
  • If CPU-bound: tune queries, indexes, and buffer pool; stop the query that’s doing a table scan at checkout time.

Then—and only then—consider durability tradeoffs or engine swaps.

Practical tasks: commands, outputs, decisions (12+)

Everything below is designed to be runnable on a typical Linux host with MySQL/MariaDB. Replace socket/credentials as needed. Each task includes: command, sample output, what it means, and what decision you make.

Task 1: Confirm server version and vendor (don’t guess)

cr0x@server:~$ mysql --version
mysql  Ver 8.0.36 for Linux on x86_64 (MySQL Community Server - GPL)

Meaning: You’re on MySQL 8.0. Redo log tuning uses innodb_redo_log_capacity.

Decision: Don’t follow MariaDB-specific advice (thread pool configs, different status tables) without checking equivalents.

Task 2: Verify the active durability settings (spot the “mask”)

cr0x@server:~$ mysql -Nse "SHOW VARIABLES WHERE Variable_name IN ('innodb_flush_log_at_trx_commit','sync_binlog');"
innodb_flush_log_at_trx_commit	1
sync_binlog	1

Meaning: Fully durable defaults. Good for money. Also means fsync costs are real and must be handled with proper I/O and redo sizing.

Decision: Keep this at 1 unless you have a written risk acceptance and tested crash behavior.

Task 3: Measure connection and concurrency pressure

cr0x@server:~$ mysql -Nse "SHOW GLOBAL STATUS LIKE 'Threads_running'; SHOW GLOBAL STATUS LIKE 'Threads_connected';"
Threads_running	18
Threads_connected	120

Meaning: 18 actively running threads is not insane, but 120 connected suggests connection churn or high web concurrency.

Decision: If Threads_running spikes during checkout bursts, correlate with lock waits and I/O waits next.

Task 4: Check InnoDB buffer pool sizing (are you reading from RAM?)

cr0x@server:~$ mysql -Nse "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
innodb_buffer_pool_size	4294967296

Meaning: 4 GiB buffer pool. On a store with substantial traffic and big meta tables, this can be too small.

Decision: If host has RAM available, target 60–75% of system memory for dedicated DB hosts (less on shared hosts).
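
If you do grow it, the buffer pool resizes online on MySQL 5.7+/8 and recent MariaDB; the resize happens in chunks, so the final value rounds to a multiple of innodb_buffer_pool_chunk_size. The number below is illustrative; persist it in my.cnf (or SET PERSIST on MySQL 8) once you're happy with it:

SET GLOBAL innodb_buffer_pool_size = 12884901888;  -- 12 GiB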

Task 5: Check redo log capacity / size (the “fix” lever)

cr0x@server:~$ mysql -Nse "SHOW VARIABLES LIKE 'innodb_redo_log_capacity';"
innodb_redo_log_capacity	104857600

Meaning: 100 MiB redo capacity. For bursty transactional workloads, that’s often comically small.

Decision: Increase to something sane (often 1–4 GiB depending on write rate and recovery tolerance). Validate disk space and recovery time expectations.

Task 6: Observe checkpoint and flushing pressure

cr0x@server:~$ mysql -e "SHOW ENGINE INNODB STATUS\G" | sed -n '1,120p'
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
2025-12-29 09:17:21 0x7f3b3c0  INNODB MONITOR OUTPUT
=====================================
...
Log sequence number          112233445566
Log flushed up to            112233440000
Pages flushed up to          112233000000
Last checkpoint at           112232900000
...
Modified db pages            84210
...
Pending flushes (fsync) log: 0; buffer pool: 37

Meaning: Many modified pages and pending buffer pool flushes indicate the background flusher is busy. If this spikes during checkout, you’re I/O/redo constrained.

Decision: Increase redo capacity and ensure your storage can handle sustained writes; consider tuning innodb_io_capacity only after measuring disk.

Task 7: Check for lock waits and deadlocks during checkout spikes

cr0x@server:~$ mysql -Nse "SHOW GLOBAL STATUS LIKE 'Innodb_row_lock_time'; SHOW GLOBAL STATUS LIKE 'Innodb_row_lock_waits';"
Innodb_row_lock_time	98342
Innodb_row_lock_waits	217

Meaning: Lock waits exist. Not necessarily catastrophic, but if these jump during checkout, you have contention.

Decision: Identify hot tables/queries (next tasks). If lock waits correlate with wp_options autoload updates or stock updates, fix indexing/transaction scope.

Task 8: Find the top slow queries (use the slow log, not vibes)

cr0x@server:~$ mysql -Nse "SHOW VARIABLES LIKE 'slow_query_log'; SHOW VARIABLES LIKE 'long_query_time';"
slow_query_log	ON
long_query_time	1.000000

Meaning: Slow query log is on, threshold 1s. Good—checkout issues often have a few 2–5s queries hiding.

Decision: If it’s off, turn it on temporarily and capture a checkout window. Don’t run blind.

Task 9: Inspect disk latency and confirm the fsync reality

cr0x@server:~$ iostat -x 1 3
Linux 6.5.0 (server) 	12/29/2025 	_x86_64_	(8 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          12.50    0.00    6.25   18.75    0.00   62.50

Device            r/s     w/s   rkB/s   wkB/s  await  svctm  %util
nvme0n1         45.0   220.0  2200.0  9800.0   9.80   0.35  85.00

Meaning: await ~10ms with high utilization and iowait ~19%. For a busy transactional DB, 10ms average can already be painful. If it spikes higher during checkout bursts, you’ll feel it.

Decision: If await is high: fix storage (NVMe, RAID controller cache policy, filesystem, noisy neighbor) before blaming the SQL engine.

Task 10: Check for table bloat and missing indexes on WooCommerce hotspots

cr0x@server:~$ mysql -e "SELECT table_name, engine, table_rows, round((data_length+index_length)/1024/1024,1) AS MB FROM information_schema.tables WHERE table_schema=DATABASE() AND table_name IN ('wp_postmeta','wp_options','wp_woocommerce_order_items','wp_woocommerce_order_itemmeta') ORDER BY MB DESC;"
+-------------------------------+--------+------------+--------+
| table_name                    | engine | table_rows | MB     |
+-------------------------------+--------+------------+--------+
| wp_postmeta                   | InnoDB |    8200000 | 2150.4 |
| wp_woocommerce_order_itemmeta | InnoDB |    2400000 |  640.9 |
| wp_options                    | InnoDB |      98000 |  210.7 |
| wp_woocommerce_order_items    | InnoDB |     310000 |   96.2 |
+-------------------------------+--------+------------+--------+

Meaning: Big meta tables. That’s normal. The question is whether the queries hitting them are indexed and whether wp_options has autoload bloat.

Decision: Plan index review and options cleanup. Big tables aren’t a crime; unindexed access is.

Task 11: Detect autoload bloat (a quiet checkout killer)

cr0x@server:~$ mysql -e "SELECT ROUND(SUM(LENGTH(option_value))/1024/1024,2) AS autoload_mb FROM wp_options WHERE autoload='yes';"
+-------------+
| autoload_mb |
+-------------+
|       18.74 |
+-------------+

Meaning: ~19 MiB autoloaded on every request. That’s heavy; it increases memory churn and query time, and can amplify lock contention if options are frequently updated.

Decision: Reduce autoload. Identify offenders; move transient caches out of autoload; audit plugins that dump giant arrays into options.

Task 12: Find which options are the offenders

cr0x@server:~$ mysql -e "SELECT option_name, ROUND(LENGTH(option_value)/1024,1) AS kb FROM wp_options WHERE autoload='yes' ORDER BY LENGTH(option_value) DESC LIMIT 10;"
+---------------------------+--------+
| option_name               | kb     |
+---------------------------+--------+
| plugin_x_cache_blob       | 2048.0 |
| woocommerce_sessions      | 1536.0 |
| theme_builder_settings    |  980.4 |
| plugin_y_rules            |  620.7 |
| rewrite_rules             |  410.2 |
| cron                      |  350.1 |
| widget_text               |  290.5 |
| wp_user_roles             |  210.3 |
| plugin_z_feed_state       |  185.9 |
| woocommerce_tax_rates     |  160.4 |
+---------------------------+--------+

Meaning: Some plugins are storing megabytes in autoload. That’s not “WordPress”; that’s someone using the options table as a dumpster.

Decision: Change autoload to ‘no’ where safe, or adjust plugin settings. Validate behavior in staging.
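
A sketch of the cleanup, using the illustrative option names from the output above (they are examples, not universal; verify in staging, since some plugins re-create their options with autoload='yes' on the next run):

UPDATE wp_options SET autoload = 'no'
WHERE option_name IN ('plugin_x_cache_blob', 'plugin_z_feed_state');

-- Re-measure the autoload footprint afterwards
SELECT ROUND(SUM(LENGTH(option_value))/1024/1024, 2) AS autoload_mb
FROM wp_options
WHERE autoload = 'yes';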

Task 13: Verify binary log format and durability for transactional safety

cr0x@server:~$ mysql -Nse "SHOW VARIABLES WHERE Variable_name IN ('log_bin','binlog_format','sync_binlog');"
log_bin	ON
binlog_format	ROW
sync_binlog	1

Meaning: Replication-friendly and durable. Row-based binlog reduces replication surprises for complex writes.

Decision: If someone suggests sync_binlog=0 to “fix checkout,” ask what their restore plan is when the binlog doesn’t contain “paid” orders.

Task 14: Identify top wait events (MySQL 8 performance schema)

cr0x@server:~$ mysql -e "SELECT event_name, COUNT_STAR, ROUND(SUM_TIMER_WAIT/1000000000000,2) AS seconds_waited FROM performance_schema.events_waits_summary_global_by_event_name WHERE event_name LIKE 'wait/io/file/innodb/%' ORDER BY SUM_TIMER_WAIT DESC LIMIT 8;"
+------------------------------------------+------------+----------------+
| event_name                               | COUNT_STAR | seconds_waited |
+------------------------------------------+------------+----------------+
| wait/io/file/innodb/innodb_redo          |   18422091 |          812.34|
| wait/io/file/innodb/innodb_data_file     |    3922011 |          244.12|
| wait/io/file/innodb/innodb_log_file      |    1200099 |          120.44|
| wait/io/file/innodb/innodb_temp_file     |      22011 |           11.02|
+------------------------------------------+------------+----------------+

Meaning: Redo waits dominate. That’s a flashing arrow toward redo sizing and disk latency.

Decision: Increase redo capacity and re-check. If redo waits persist, storage/fsync is the next suspect.

Task 15: Capture a real-time view during a checkout test

cr0x@server:~$ mysqladmin -i 1 -c 5 processlist status
Uptime: 123456  Threads: 125  Questions: 9933221  Slow queries: 18  Opens: 322  Flush tables: 1  Open tables: 256  Queries per second avg: 80.50
Id: 31142  User: wp   Host: 10.0.2.15:43218  db: shop  Command: Query  Time: 3  State: Waiting for handler commit  Info: UPDATE wp_postmeta SET meta_value='...'
Id: 31151  User: wp   Host: 10.0.2.15:43222  db: shop  Command: Query  Time: 3  State: Waiting for handler commit  Info: INSERT INTO wp_posts ...
Uptime: 123457  Threads: 128  Questions: 9933310  Slow queries: 18  Opens: 322  Flush tables: 1  Open tables: 256  Queries per second avg: 92.10

Meaning: “Waiting for handler commit” is a classic sign of commit/fsync pressure (not always, but often). If checkout requests hang here, you’re paying storage latency at commit time.

Decision: Don’t throw caches at this. Fix the write path: redo capacity, storage latency, and flush behavior.

Three corporate mini-stories from the checkout trenches

Mini-story 1: The incident caused by a wrong assumption

A mid-sized retail company migrated a WooCommerce stack from MySQL to MariaDB because “it’s a drop-in replacement and faster.” The migration plan was clean on paper: dump, import, switch endpoints, done. They did a quick browse test, ran a couple orders in staging, and shipped it on a quiet Tuesday.

Checkout looked fine for a day. Then a marketing email went out and the site hit a familiar traffic shape: lots of browsing, then a sharp burst of checkouts in a 10–15 minute window. Orders started timing out, but payment authorizations were still happening. Support tickets arrived: “I got charged but no confirmation.” You already know where this goes.

The wrong assumption wasn’t “MariaDB is bad.” It was “engine swap will fix latency.” The real issue was storage latency under fsync plus too-small redo logs. The new DB behaved differently enough that the stalls looked new, but the physics were the same. They were simply checkpointing constantly under pressure.

They stabilized by increasing redo log sizing appropriately and tuning I/O capacity to match the device. Then they fixed the operational gap: load tests that include checkout bursts, not just home page hits. Nobody enjoyed the week, but the store stopped charging people into the void.

Mini-story 2: The optimization that backfired

A SaaS agency running multiple WooCommerce stores per node had a standing complaint: “checkout is slow on peak.” Someone noticed innodb_flush_log_at_trx_commit was set to 1 and decided that was “too strict.” They changed it to 2 across the fleet during a maintenance window. Latency improved immediately. High fives all around.

Two months later, a host rebooted hard after a kernel panic. One store reported missing orders: customers had receipts from the payment provider, but the orders weren’t in WooCommerce. The team initially blamed the payment plugin. Then they found a gap: transactions that had been acknowledged at the application layer weren’t guaranteed to have their redo flushed at the moment of crash. Some writes simply never made it to stable storage.

They had traded performance for a small but real window of data loss. It’s not always catastrophic. It’s catastrophic exactly when it matters: when the server dies at the wrong millisecond. Which is what servers do, eventually, with impressive timing.

The fix wasn’t “never tune durability.” The fix was to stop using durability relaxation as a performance crutch. They reverted to 1, increased redo log capacity, and moved the worst stores to better storage. Checkout stayed fast enough, and “missing paid orders” stopped being a thing.

Mini-story 3: The boring but correct practice that saved the day

An enterprise team ran WooCommerce as a “side business” on a platform shared with other apps. Nothing exciting, which is exactly what you want. Their SRE lead insisted on two boring practices: keep the slow query log enabled (with sane rotation) and run a weekly “top 10 queries” review. No heroics, just hygiene.

One Friday, checkout latency crept up. Not a hard outage—just the kind of slow that kills conversion while nobody pages you. Because the slow log was already there, they didn’t have to “turn on logging during incident,” which is the operational version of learning to swim during a flood.

The slow log showed a new query pattern hammering wp_options with a full scan. A plugin update had introduced a query that ignored an index-friendly access pattern and ran during checkout. They rolled back the plugin, then added a targeted index where appropriate and opened a vendor ticket with evidence.

Checkout recovered within an hour. No database swap, no durability cheating, no adding servers. Just the dull competence of having data when you need it.

Common mistakes: symptom → root cause → fix

1) Symptom: Checkout p95 jumps during traffic bursts; CPU is fine

Root cause: InnoDB forced checkpointing due to tiny redo logs; disk flush pressure creates stalls.

Fix: Increase innodb_redo_log_capacity (MySQL 8) or innodb_log_file_size (MariaDB/older MySQL). Validate storage latency with iostat.

2) Symptom: “Waiting for handler commit” dominates processlist

Root cause: Commit/fsync latency (bad storage, saturated device, or too many small commits).

Fix: Improve storage, keep durable settings at 1, increase redo capacity; reduce transaction count where possible (plugin behavior).

3) Symptom: Lock wait timeouts around checkout

Root cause: Hot rows/tables (stock rows, options rows, session tables), long transactions, missing indexes.

Fix: Identify the blocking queries; add indexes; reduce transaction scope; fix plugins that hold transactions while doing remote calls.

4) Symptom: Swapping MySQL ↔ MariaDB “helps” but doesn’t solve it

Root cause: Engine swap changed default behavior but didn’t fix underlying I/O and schema issues.

Fix: Treat DB engine as a choice, not a cure. Fix redo sizing, buffer pool, storage, and hotspots first.

5) Symptom: Checkout slow only after installing “performance” plugin

Root cause: Plugin adds synchronous logging, excessive autoload options, or runs heavy queries on checkout hooks.

Fix: Disable plugin to confirm; move logging async; clean autoload bloat; profile queries during checkout.

6) Symptom: Replica lag spikes when checkouts spike

Root cause: Primary is I/O constrained; binlog flush settings; replica applying large transactions slowly.

Fix: Fix primary write path first; verify sync_binlog=1; ensure replica storage is comparable; monitor apply lag.

Checklists / step-by-step plan

Step-by-step: fix checkout lag without gambling your data

  1. Baseline latency: capture p50/p95 for checkout endpoint and DB commit time indicators.
  2. Confirm engine/version: MySQL 8 vs MariaDB changes the knobs and the dashboards.
  3. Measure storage latency: check iostat -x during a controlled checkout burst.
  4. Inspect redo sizing: if redo is <1 GiB on a real store, assume it’s undersized until proven otherwise.
  5. Increase redo capacity safely: MySQL 8 can do this more easily; plan maintenance where needed for MariaDB/older versions.
  6. Keep durability sane: leave innodb_flush_log_at_trx_commit=1 unless you have a written, tested risk model.
  7. Check buffer pool: ensure hot working set fits; avoid swapping.
  8. Find lock hotspots: use InnoDB status, lock waits, and processlist during checkout.
  9. Clean autoload bloat: reduce autoload MB; it’s low-hanging fruit with outsized impact.
  10. Index what checkout hits: especially postmeta and order meta access patterns used by plugins.
  11. Validate under burst load: don’t test with one order; test with 20–50 checkouts over a short window.
  12. Rollout carefully: one store/node at a time, watch metrics, then expand.

When you should choose MySQL vs MariaDB for WooCommerce

  • Choose MySQL 8 if you want strong built-in instrumentation, modern redo management, and broad vendor compatibility in managed platforms.
  • Choose MariaDB if your platform standardizes on it, you benefit from its thread pool/concurrency features, and your operational team knows its quirks well.
  • Avoid switching as a “performance fix” unless you can name the exact mechanism you expect to improve (and how you’ll measure it).

Joke #2: “Let’s just switch databases” is the adult version of turning the monitor off and on because the spreadsheet looks wrong.

One quote you should tattoo on your runbook (mentally)

Paraphrased idea from John Allspaw: reliability is a feature, and you don’t get it by accident.

FAQ

1) Is MariaDB faster than MySQL for WooCommerce?

Sometimes, in specific concurrency scenarios. But checkout lag is more often I/O and redo/checkpoint pressure than raw query speed. Pick based on operational fit and observability, not folklore.

2) What’s the single most common real fix for checkout lag?

Increase redo log capacity/size so InnoDB stops checkpointing itself to death during write bursts. Then confirm storage latency is reasonable.

3) Why not just set innodb_flush_log_at_trx_commit=2?

Because it’s a durability trade. It can lose recent committed transactions on crash. That’s not theoretical; it’s exactly what the setting means.

4) How big should redo logs be?

There isn’t one number. For many WooCommerce stores with real traffic, 1–4 GiB is a common starting range. You balance: fewer stalls vs longer crash recovery. Measure before and after.
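
One way to replace folklore with a number: sample the InnoDB log sequence number (a byte offset) twice across a minute of peak or load-tested checkout traffic. The difference is redo bytes written per minute; a common heuristic is to size capacity for roughly 30–60 minutes of that peak rate, bounded by how long you can tolerate crash recovery taking.

cr0x@server:~$ mysql -e "SHOW ENGINE INNODB STATUS\G" | grep 'Log sequence number'; sleep 60; mysql -e "SHOW ENGINE INNODB STATUS\G" | grep 'Log sequence number'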

5) If my CPU is low, why is checkout slow?

Because the DB can be waiting on I/O (fsync, data file writes) or locks. CPU graphs don’t show waiting well; performance schema and I/O metrics do.

6) Can autoloaded options really affect checkout?

Yes. Autoload bloat increases per-request work, memory churn, and can create contention if options are updated frequently. It’s a classic “death by convenience” pattern.

7) Should I put WooCommerce tables on MyISAM/Aria for speed?

No. Checkout is transactional. You want crash recovery and row-level locking. If you trade that away for a benchmark, production will collect the debt with interest.

8) Do I need a query cache to speed up WooCommerce?

No. MySQL 8 removed it. Use proper indexes, buffer pool sizing, and application-level caching where it makes sense (not on checkout).

9) What if increasing redo capacity doesn’t help?

Then your bottleneck is likely storage latency, lock contention, or a specific slow query path during checkout (often plugin-induced). Use wait event summaries and slow logs to pinpoint it.

10) Should I move to a managed database service?

If your current constraint is “we can’t operate storage and backups reliably,” yes—managed services can remove a lot of failure modes. But they won’t fix bad schema patterns or plugin behavior by magic.

Next steps you can do today

  1. Run the version and durability checks. Confirm you’re not already “masking” the issue.
  2. Measure disk latency during a small burst of test checkouts. If await is ugly, fix storage first.
  3. Check redo capacity/size. If it’s tiny, increase it thoughtfully and monitor checkpoint pressure.
  4. Measure lock waits and inspect autoload bloat in wp_options. Clean up the top offenders.
  5. Enable/verify slow query logging and capture a checkout window. Fix what’s actually slow, not what’s fashionable.

If you do only one thing: stop treating durability settings as performance tuning. Size the redo logs, verify the disk, and make checkout boring again.
