MySQL vs MariaDB Backup Wars on a VPS: mysqldump vs Physical Backups


You’re on a VPS. Your disk is “fast” in the way airport Wi‑Fi is “fast.” Backups run at 02:00, the site crawls at 02:05, and by 02:10 your monitoring is politely asking if you still work here.

The decision isn’t “logical vs physical.” It’s: how do you back up MySQL/MariaDB in a way that restores quickly, stays consistent, and doesn’t DDoS your own storage?

The real fight: restore time, consistency, and I/O

Most teams argue about backups like it’s a theological dispute: logical dumps are portable versus physical backups are fast. On a VPS, the fight is uglier and more practical:

  • Recovery time objective (RTO): how long you can afford to be down while you rebuild data.
  • Recovery point objective (RPO): how much data you can afford to lose (minutes? hours?).
  • Consistency: whether the backup can be restored without corruption or missing relations.
  • Impact: whether the backup window turns into a de-facto planned outage because I/O saturates.
  • Operational friction: whether anyone actually tests restores, because the process is too painful.

mysqldump wins on simplicity and portability. Physical backups win on speed and, when done correctly, consistency under load. But on VPS storage—often network-backed, bursty, and shared—physical backups can also be the quickest way to discover the ceiling of your IOPS allocation.

Here’s the rule I use in production: Pick the backup method you can restore quickly and repeatedly. Everything else is trivia.

Facts and history that matter more than you think

These aren’t fun facts for a pub quiz. Each one explains why a “simple” backup plan breaks at 03:00.

  1. MariaDB started as a fork of MySQL after Oracle acquired Sun (and therefore MySQL). That fork history still shows up in differing tooling and defaults.
  2. InnoDB became the default MySQL storage engine in MySQL 5.5. Many backup approaches assume InnoDB’s crash recovery and redo logs; MyISAM does not play the same game.
  3. Percona’s XtraBackup popularized hot physical backups for InnoDB. It shaped expectations: “a physical backup should be online.”
  4. MariaDB ships MariaBackup (mariabackup), derived from XtraBackup lineage, but version compatibility matters more than marketing names.
  5. MySQL 8 moved hard toward GTIDs and more structured replication. That changes how you think about point-in-time recovery and binlog retention.
  6. File-per-table (innodb_file_per_table) became common, reducing the “one huge ibdata1” problem—but also increasing filesystem metadata churn during backups.
  7. Redo log format and data dictionary changes made some cross-version physical restores painful. Logical dumps are slower, but often more forgiving across major versions.
  8. VPS “local SSD” is frequently networked under the hood. Your backup strategy must assume noisy neighbors and variable latency.

One paraphrased idea from Werner Vogels (reliability/operations): Everything fails, all the time; design systems that assume failure and recover fast. That’s the vibe for backups too.

Backup types on a VPS: logical, physical, and snapshots

Logical backups (mysqldump, mysqlpump)

Logical backups export schema and data as SQL (or sometimes delimited formats). Pros:

  • Portable across OS/filesystems and often across versions.
  • Selective: one database, one table, even filtered rows (with pain).
  • Easy to inspect with standard tools.

Cons:

  • Slow to restore at scale.
  • Generates heavy I/O and CPU load while the dump runs; can crush a small VPS.
  • Consistency depends on engine and flags. Defaults are not your friend.

Physical backups (mariabackup, xtrabackup, raw copy)

Physical backups copy the actual database files plus enough metadata/logs to make them consistent. Pros:

  • Fast restore (file copy + prepare phase).
  • Better fit for large InnoDB datasets.
  • Can be hot/online with proper tooling.

Cons:

  • Version compatibility constraints can be strict.
  • Tooling can be finicky on “minimal” VPS distros.
  • Backups are less human-readable; verification is different.

Filesystem/block snapshots (LVM, ZFS)

Snapshots are not backups by themselves. They’re a capture mechanism. You still need to export snapshot data to somewhere else and have a restore plan.

On a VPS, you may or may not have LVM or ZFS. If you do, snapshots can reduce downtime (or avoid it) by taking a near-instant copy while MySQL is briefly locked or at least flushed.
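
The simplest variant stops the server for a few seconds and copies from the snapshot afterward; hot variants need lock/flush coordination held in a single client session. A minimal cold-snapshot sketch, assuming the datadir lives on LVM volume group vg0, logical volume mysql, with free extents available (the names, sizes, and backup host are illustrative):

# Minimal sketch: cold LVM snapshot with a short, planned stop (run as root)
systemctl stop mariadb
lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql
systemctl start mariadb                        # downtime ends here; copying happens from the snapshot
mkdir -p /mnt/mysql_snap
mount -o ro /dev/vg0/mysql_snap /mnt/mysql_snap    # XFS also needs -o nouuid
rsync -a /mnt/mysql_snap/ backup-host:/backups/snap-$(date +%F)/
umount /mnt/mysql_snap
lvremove -f /dev/vg0/mysql_snap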

MySQL vs MariaDB: backup tooling and sharp edges

At the “backup API” layer, MySQL and MariaDB look similar: both speak SQL, both have binary logs, both can dump schemas. The practical differences show up in tooling maturity, packaging, and how strict versions are with physical restores.

MariaDB: mariabackup is usually the straight answer

If you run MariaDB and your data is mostly InnoDB, mariabackup is typically the default physical backup choice. It’s designed to be the “official” hot backup tool and generally tracks MariaDB features.

MySQL: physical backups depend on your ecosystem

Oracle MySQL has an enterprise backup product, but in the real world on VPS fleets you’ll see:

  • mysqldump (everywhere, for better or worse)
  • Percona XtraBackup (common, especially for InnoDB-heavy workloads)
  • Snapshots where storage and ops maturity allow it

Cross-compatibility: stop assuming it’s “basically the same”

Logical dumps are generally more portable between MySQL and MariaDB than physical backups. Physical backups are married to on-disk formats, redo/undo implementations, and version-specific behavior. If you’re planning to migrate engines or major versions, dumps plus binlogs (or replication-based cutovers) are usually safer than attempting to “lift and shift” raw files.

mysqldump: what it’s good at, where it lies to you

mysqldump is the backup tool equivalent of duct tape: not pretty, not always appropriate, but it’s in every drawer. On a small VPS, it’s often the least-worst place to start because it’s predictable and doesn’t require exotic packages.

Use mysqldump when

  • Your dataset is small enough to restore within your RTO.
  • You need portability across versions or between MySQL and MariaDB.
  • You need selective backups (specific databases/tables).
  • You can accept higher CPU and I/O overhead during backup windows.

Beware these failure modes

  • Inconsistent backups if you don’t use the right flags for InnoDB.
  • Table locks (or metadata locks) that stall app traffic.
  • Gigantic single-thread restore times that turn incidents into long weekends.
  • Missing objects if you forget routines, triggers, events, or users.

Joke #1: A backup you never tested is called “optimism,” which is not a storage tier.

Dump consistency: the flags that matter

For InnoDB-heavy databases, the bare minimum is --single-transaction. But that alone isn’t a force field. Long-running transactions can bloat undo/redo and make your dump a slow-motion accident.
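
Before kicking off a long dump, it is worth checking for transactions that have been open for a while; information_schema.innodb_trx shows them on both MySQL and MariaDB (the LIMIT and column choice here are just illustrative):

mysql -e "SELECT trx_id, trx_started, TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds
          FROM information_schema.innodb_trx
          ORDER BY trx_started
          LIMIT 5;"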

What I typically reach for on a VPS:

  • --single-transaction for InnoDB consistency without global locks.
  • --routines --triggers --events to capture the non-table bits.
  • --set-gtid-purged=OFF (MySQL’s mysqldump only) in many mixed environments to avoid GTID surprises (case-by-case).
  • --hex-blob to avoid encoding damage.
  • --master-data=2 (newer MySQL renames it --source-data), or at least recorded binlog coordinates, if you want PITR.

Physical backups: mariabackup, xtrabackup, and raw file copies

Physical backups are about copying InnoDB pages plus the logs needed to make them consistent. The promise: restore is fast because you’re not replaying SQL; you’re restoring files and letting InnoDB do crash-recovery-style reconciliation.

mariabackup (MariaDB)

mariabackup can do hot backups of InnoDB and includes a “prepare” step to apply redo logs. It can also stream and compress, which matters on VPS networks and tiny disks.

xtrabackup (common with MySQL ecosystems)

XtraBackup is the workhorse for hot physical backups in many shops. The operational model is similar: backup → prepare → restore (copy-back) → fix permissions → start server.

Raw file copies: only with the right guardrails

Copying /var/lib/mysql with rsync while MySQL is running is how you manufacture corrupted backups. The only time raw copies make sense is when you can guarantee filesystem-level consistency (snapshot) or you stop MySQL cleanly.

Joke #2: “I’ll just rsync the datadir live” is the database equivalent of “I’ll just juggle knives to save time.”

VPS-specific pain: I/O bursts and noisy neighbors

Physical backups can be extremely I/O heavy. That’s not inherently bad—until your VPS storage throttles you. The result looks like a database problem but is actually a platform constraint: high iowait, stalled queries, replication lag, and backups that take longer the more you “optimize” them.

On VPSes, plan for throttling: limit backup parallelism, compress carefully (CPU vs I/O trade), and run with ionice/nice when appropriate.
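
As a sketch of what “throttled” can look like in practice, something along these lines keeps CPU and I/O priority low, uses a single backup thread, and compresses cheaply while streaming off-host (the backup user, SSH target, and paths are assumptions):

nice -n 19 ionice -c2 -n7 \
  mariabackup --backup --stream=xbstream --parallel=1 \
              --user=backup --password='***' \
  | gzip -1 \
  | ssh backup-host "cat > /backups/full-$(date +%F).xb.gz"

On restore you would decompress, unpack with mbstream, and only then run the prepare step.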

Binary logs: the difference between “backup” and “recovery”

A nightly full backup is not a recovery strategy if your RPO is less than 24 hours. Binary logs (binlogs) are the simplest way to bridge that gap: you restore the last full backup, then replay binlogs up to a timestamp/position/GTID.

On a VPS, binlogs are also how you avoid doing full dumps too often. Full backups are heavy; incremental recovery via binlogs is cheap in comparison.
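
A minimal PITR sketch, assuming last night’s dump recorded binlog.000214 / position 90123318 and you want to stop just before a bad change at 02:40 (file names, position, timestamp, and paths are illustrative):

# 1) Restore the last full backup onto a scratch host
zcat /backups/prod.sql.gz | mysql

# 2) Replay binlogs from the recorded coordinates up to just before the bad change
mysqlbinlog --start-position=90123318 \
            --stop-datetime="2025-12-31 02:40:00" \
            /var/lib/mysql/binlog.000214 /var/lib/mysql/binlog.000215 \
  | mysql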

Practical guidance

  • Enable binlogs if you care about PITR.
  • Size retention based on how long it takes you to notice bad data and react.
  • Back up binlogs off-host (see the sketch after this list). Local retention is not a backup; it’s a convenience.
  • Practice one PITR restore. It is always weirder than you expect.
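
One way to get binlogs off-host without touching the file the server is still writing is a small cron job along these lines (the binlog basename, paths, and backup host are assumptions; adjust to your log_bin setting):

# Ship closed binlogs off-host; skip the one currently being written
CURRENT=$(mysql -N -e "SHOW MASTER STATUS" | awk '{print $1}')
rsync -a --exclude "$CURRENT" /var/lib/mysql/binlog.[0-9]* backup-host:/backups/binlogs/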

Practical tasks: 14 commands, outputs, and decisions

These are the checks I actually run on VPS systems. Each task includes: command, example output, what the output means, and what decision you make next.

Task 1: Confirm which server you’re actually running

cr0x@server:~$ mysql -e "SELECT VERSION() AS version, @@version_comment AS comment\G"
*************************** 1. row ***************************
version: 10.11.6-MariaDB-0+deb12u1
comment: Debian 12

Meaning: Version and distro packaging. This matters for physical backup compatibility and default settings.

Decision: If MariaDB, prefer mariabackup. If MySQL 8, check whether you’ll use mysqldump, snapshots, or XtraBackup.

Task 2: Check data size and file layout

cr0x@server:~$ sudo du -sh /var/lib/mysql
18G	/var/lib/mysql

Meaning: Rough size of your restore footprint.

Decision: If this is >10–20 GB on a small VPS, dumps will restore slowly. Start planning physical backups or at least faster restore workflows (parallel import, larger instance for restore, etc.).
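
If you stay logical but need faster restores, parallel dump/load tools are one option. A sketch with mydumper/myloader (a separate package, not part of MySQL/MariaDB; database name and thread counts are illustrative, and client credentials are assumed to be configured, e.g. via ~/.my.cnf):

mydumper --database prod --threads 4 --outputdir /backups/mydumper/prod
myloader --directory /backups/mydumper/prod --threads 4 --overwrite-tables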

Task 3: Identify storage engine mix (InnoDB vs MyISAM)

cr0x@server:~$ mysql -e "SELECT ENGINE, COUNT(*) tables FROM information_schema.tables WHERE table_schema NOT IN ('mysql','information_schema','performance_schema','sys') GROUP BY ENGINE;"
ENGINE		tables
InnoDB		312
MyISAM		4

Meaning: --single-transaction protects InnoDB consistency; MyISAM may still require locks for consistent dumps.

Decision: If you have MyISAM tables you care about, either accept table locks during dump or migrate them to InnoDB.

Task 4: Verify binlog and GTID settings

cr0x@server:~$ mysql -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'gtid_mode'; SHOW VARIABLES LIKE 'server_id';"
Variable_name	Value
log_bin		ON
Variable_name	Value
gtid_mode	OFF
Variable_name	Value
server_id	1

Meaning: Binlogs are enabled and server_id is set. gtid_mode is a MySQL variable (OFF here); MariaDB doesn’t have it and tracks GTID state in gtid_current_pos / gtid_binlog_pos instead.

Decision: If log_bin=OFF and you need PITR, turn it on (with disk planning). If GTIDs are on, tune dump flags and recovery steps accordingly.
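
Enabling binlogs is a config change plus a restart. A minimal sketch for MariaDB on Debian (the file path, retention, and sizes are illustrative; MySQL 8 uses binlog_expire_logs_seconds instead of expire_logs_days):

cat > /etc/mysql/mariadb.conf.d/60-binlog.cnf <<'EOF'
[mysqld]
log_bin          = /var/lib/mysql/binlog
server_id        = 1
binlog_format    = ROW
max_binlog_size  = 256M
expire_logs_days = 7
EOF
systemctl restart mariadb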

Task 5: Get current binlog position for PITR anchoring

cr0x@server:~$ mysql -e "SHOW MASTER STATUS\G"
*************************** 1. row ***************************
File: binlog.000214
Position: 90123318
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:

Meaning: This is your “starting line” for replay if you take a snapshot or physical backup right now.

Decision: Record this with the backup metadata. If you can’t tie backups to binlog coordinates (or GTID set), PITR becomes guesswork.

Task 6: Measure I/O pressure during backup windows

cr0x@server:~$ iostat -xz 1 3
Linux 6.1.0 (server) 	12/31/2025 	_x86_64_	(2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           6.21    0.00    2.33   41.18    0.72   49.56

Device            r/s     w/s   rkB/s   wkB/s  aqu-sz  await  %util
vda              9.00  210.00  512.0  6144.0   12.40  58.30  99.80

Meaning: 40%+ iowait, 99% disk utilization, and high await. This is storage saturation, not “slow SQL.”

Decision: Throttle backups, move backup target off-host, schedule differently, or upgrade disk/I/O class. Also consider physical backups with controlled parallelism vs huge logical dump writes.

Task 7: See if backups are blocked on metadata locks

cr0x@server:~$ mysql -e "SHOW PROCESSLIST;"
Id	User	Host	db	Command	Time	State	Info
42	root	localhost	NULL	Query	12	Waiting for table metadata lock	FLUSH TABLES WITH READ LOCK
87	app	10.0.2.10:51234	prod	Query	12	Sending data	SELECT ...

Meaning: A locking backup step is waiting, likely blocked by DDL or long query.

Decision: Avoid FLUSH TABLES WITH READ LOCK (FTWRL) on hot paths. Prefer --single-transaction for dumps and hot physical backups for InnoDB. If you must lock, schedule a DDL freeze window.

Task 8: Run a safe-ish mysqldump with minimal foot-guns

cr0x@server:~$ mysqldump --single-transaction --routines --triggers --events --hex-blob --master-data=2 --databases prod | gzip -1 > /backups/prod.sql.gz
cr0x@server:~$ ls -lh /backups/prod.sql.gz
-rw-r--r-- 1 root root 1.9G Dec 31 02:14 /backups/prod.sql.gz

Meaning: Backup created. Size gives you a hint about restore time and compression ratio. --master-data=2 embeds binlog coordinates as a comment.

Decision: If this takes too long or clobbers performance, move to physical backups and/or run dumps from a replica.

Task 9: Validate that the dump includes what you think it includes

cr0x@server:~$ zgrep -m1 -E "MASTER_LOG_FILE|CHANGE MASTER" /backups/prod.sql.gz
-- CHANGE MASTER TO MASTER_LOG_FILE='binlog.000214', MASTER_LOG_POS=90123318;

Meaning: Binlog coordinates are recorded.

Decision: If missing, you can’t do clean PITR from this dump. Fix flags and rerun.

Task 10: Create a physical backup with mariabackup (MariaDB)

cr0x@server:~$ sudo mariabackup --backup --target-dir=/backups/mariabackup/full --user=backup --password='***'
[00] 2025-12-31 02:10:11 Backup created in directory '/backups/mariabackup/full'
[00] 2025-12-31 02:10:11 Writing backup-my.cnf
[00] 2025-12-31 02:10:11 Completed OK!

Meaning: Raw files plus metadata captured.

Decision: Immediately run prepare. An unprepared physical backup is not ready to restore.

Task 11: Prepare the physical backup (apply redo logs)

cr0x@server:~$ sudo mariabackup --prepare --target-dir=/backups/mariabackup/full
[00] 2025-12-31 02:13:52 Starting to apply redo log
[00] 2025-12-31 02:14:20 Redo log applied
[00] 2025-12-31 02:14:20 Completed OK!

Meaning: Backup is now consistent and ready for copy-back.

Decision: If prepare is slow, you’re I/O bound; consider faster disks for backup staging or offloading prepare to a restore host.
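
For completeness, the restore side of Tasks 10–11 is roughly this. A sketch, assuming the prepared backup from above, a stopped server, and root privileges; datadir paths are the Debian defaults:

systemctl stop mariadb
mv /var/lib/mysql /var/lib/mysql.old.$(date +%s)     # keep the old datadir until you’re sure
mkdir /var/lib/mysql                                  # copy-back wants an empty datadir
mariabackup --copy-back --target-dir=/backups/mariabackup/full
chown -R mysql:mysql /var/lib/mysql
systemctl start mariadb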

Task 12: Inspect InnoDB health and checkpointing pressure

cr0x@server:~$ mysql -e "SHOW ENGINE INNODB STATUS\G" | sed -n '1,60p'
=====================================
2025-12-31 02:15:02 INNODB MONITOR OUTPUT
=====================================
...
Log sequence number          128392001928
Log flushed up to            128392001928
Last checkpoint at           128391771200
...

Meaning: If checkpoint lags badly, backups and heavy writes can create recovery/flush pressure.

Decision: If you see chronic checkpoint lag, tune redo log size (engine/version-specific), buffer pool, and flush behavior; also avoid running backups at peak write load.

Task 13: Check for replication lag (if backing up a replica)

cr0x@server:~$ mysql -e "SHOW SLAVE STATUS\G" | egrep "Seconds_Behind_Master|Slave_IO_Running|Slave_SQL_Running"
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 57

Meaning: Replica is lagging by ~1 minute.

Decision: If lag is acceptable within your RPO, taking backups from the replica reduces production load. If lag spikes during backup, you’re saturating I/O or CPU; throttle.

Task 14: Dry-run a restore on the same box (sanity check)

cr0x@server:~$ zcat /backups/prod.sql.gz | head -n 20
-- MySQL dump 10.13  Distrib 8.0.36, for Linux (x86_64)
--
-- Host: localhost    Database: prod
-- ------------------------------------------------------
-- Server version	10.11.6-MariaDB-0+deb12u1
...

Meaning: You’re verifying you can read the backup and it contains expected headers. This is not a full test, but it catches “empty file” and “wrong target” mistakes.

Decision: Schedule a real restore test to a disposable instance. If you never do, you’ll learn during an incident when adrenaline makes you bad at typing.

Fast diagnosis playbook: find the bottleneck in minutes

When backups are slow or the app falls over during backups, don’t start by changing backup tools. Start by figuring out what resource you’re exhausting.

First: prove whether it’s storage saturation

  • Check iowait and disk await (iostat -xz). High await and %util near 100% means you’re I/O bound.
  • Check filesystem space and inode pressure (df -h, df -i).
  • Check whether the VPS is throttling IOPS (symptom: periodic stalls, sawtooth throughput).

What you do next: throttle compression/parallelism, move backup destination off the primary disk, or upgrade the VPS storage class. If you can’t, use a replica for backups.

Second: check locks and long transactions

  • SHOW PROCESSLIST for metadata locks.
  • Engine status for long history list / purge lag (InnoDB).
  • Look for DDL running during backups.

What you do next: schedule backups away from migrations, enforce DDL windows, and avoid FTWRL-based approaches unless you can tolerate stalls.

Third: check CPU contention (compression and single-thread dumps)

  • Check top or pidstat for mysqldump and gzip pegging CPU.
  • On tiny VPSes, compression level is a production setting.

What you do next: drop compression level, compress off-host, or switch to streaming physical backups with controlled CPU usage.

Fourth: check network (off-host backups and streaming)

  • If you stream backups to object storage or another node, confirm throughput is stable.
  • Packet loss and retransmits turn “streaming backup” into “accidental rate limiter.”

Common mistakes: symptom → root cause → fix

1) Backup succeeds, restore fails with missing routines/triggers

Symptom: App errors after restore: missing stored procedures, triggers not firing, scheduled events gone.

Root cause: Dump executed without --routines --triggers --events.

Fix: Add the flags, rerun backups, and test restore. Also include mysql system objects if you rely on users/privileges.

2) mysqldump causes production stalls

Symptom: Queries hang; you see “Waiting for table metadata lock.”

Root cause: DDL during dump, or dump options that force locks (or MyISAM tables).

Fix: Enforce migration windows, convert MyISAM, use --single-transaction, and consider taking backups from a replica.

3) Physical backup restores but InnoDB won’t start

Symptom: MySQL/MariaDB fails to start after copy-back; InnoDB complains about log files or data dictionary.

Root cause: Version mismatch, missing prepare step, wrong permissions, or mixing files from different servers.

Fix: Ensure backup tool version matches server major version expectations, run prepare, use correct ownership (mysql:mysql), and restore entire datadir consistently.

4) Backups are “fast” but replication lag explodes

Symptom: Seconds_Behind_Master grows during backup window.

Root cause: Backup I/O competes with replication apply; VPS disk hits IOPS cap.

Fix: Throttle backup, change schedule, move backups to another disk/volume, or upgrade I/O class. If possible, put relay logs on separate storage.

5) You can restore, but it takes forever and times out applications

Symptom: Dump restore runs for hours; app timeouts follow even after DB is “up.”

Root cause: Logical restore is single-threaded-ish, indexes rebuild, buffer pool cold, and you’re on a small VPS.

Fix: Prefer physical restores for large datasets; otherwise restore onto a bigger temporary instance, warm caches, and only then cut over.

6) “We have backups,” but they’re on the same VPS

Symptom: VPS disk dies; backups die with it.

Root cause: Local-only backup storage.

Fix: Off-host copies. Different failure domain. Always.

7) Snapshot backups restore inconsistent data

Symptom: Restored DB has corruption or missing recent commits.

Root cause: Snapshot taken without flushing/locking appropriately; you captured a crash-in-progress view without redo coverage.

Fix: Pair snapshots with MySQL coordination (flush, lock strategy appropriate to engine), or use hot physical backup tools that include redo/prepare steps.

Three corporate mini-stories from the backup trenches

Mini-story 1: The incident caused by a wrong assumption

They ran a mid-sized SaaS on a single “beefy” VPS. Backups were a nightly mysqldump to a second disk mounted at /backups. The team felt responsible. They even had retention: seven days. That’s the part that always sounds reassuring in meetings.

Then the VPS host had a storage event. The instance rebooted, came back, and MySQL wouldn’t start. Their response was calm: “No problem, restore from last night.” Except the backup disk was the same virtual block device group on the same host storage. It was attached, but the filesystem was toast. The backup files existed in the same way a demolished building “exists.”

The wrong assumption was subtle: they believed a second mounted volume was a second failure domain. On that provider, it wasn’t. It was just another slice of the same underlying pool. The postmortem wasn’t dramatic. It was worse: quietly humiliating and completely avoidable.

The fix was boring: backups were pushed off-host after creation, and restores were rehearsed monthly to a disposable instance. They also started recording where data lived in provider terms—region, pool, and attachment type—because “volume” can mean anything in VPS-land.

Mini-story 2: The optimization that backfired

A different company wanted faster backups. They moved from dumps to physical backups with streaming compression. The engineer doing it was competent and aggressive, the best kind until it’s not. They enabled parallel backup threads and cranked compression because storage was “expensive.” On paper, it looked great: smaller backups, faster transfer, less money.

In production, the VPS had two vCPUs. Compression started eating CPU like it was owed money. MySQL started contending on CPU, then on I/O because the backup also read aggressively from the datadir. Latency went up. Replication lag grew. Their queue workers timed out. Incident declared.

The painful lesson: “faster backup” can mean “faster at consuming your only resources.” The optimization was locally rational and globally disastrous. On a VPS, there is no “extra CPU” hiding behind a curtain.

The fix was measured: they reduced parallelism, lowered compression level, and ran backups from a replica with slightly larger CPU. They also introduced I/O throttling and stopped trying to make backups free. Backups cost resources. You decide where to pay: during backup windows, during restores, or during incidents.

Mini-story 3: The boring but correct practice that saved the day

One team did something unfashionable: quarterly full restore drills. Not a tabletop exercise. An actual restore to a clean VM, application pointed at it, basic integrity queries run, and a human signing off. Nobody bragged about it because it’s not a shiny architecture diagram.

Then an application deploy introduced a destructive migration. It dropped a column, and the ORM happily wrote nulls into the universe. They noticed within an hour. The question wasn’t whether they had backups—it was how quickly they could rewind without losing the legitimate writes after the deploy.

They restored last night’s full backup to a scratch host, replayed binlogs up to five minutes before the deploy timestamp, and exported only the affected tables back into production. It was messy, but controlled. The team knew the steps because they’d practiced the muscle memory when it wasn’t on fire.

They still wrote a postmortem and improved migration reviews. But the real reason it didn’t become a multi-day disaster was that their backup process was rehearsed, documented, and boring. In operations, boring is a feature.

Checklists / step-by-step plan

Step-by-step: choose your backup approach on a VPS

  1. Measure dataset size (datadir and logical size). If restore from SQL would exceed RTO, prioritize physical.
  2. Confirm engine mix. If mostly InnoDB, hot physical tools are viable; if significant MyISAM, plan for locks or migration.
  3. Decide your RPO. If you need <24h data loss, enable and retain binlogs off-host.
  4. Pick a primary full backup:
    • Small DB: mysqldump + binlogs is fine.
    • Medium/large DB: mariabackup/xtrabackup + binlogs.
  5. Pick a capture mechanism:
    • Tool-based hot backup (recommended for InnoDB).
    • Snapshot + coordination (only if you can do it correctly).
  6. Pick off-host storage. Different failure domain. Encrypt if needed.
  7. Write a restore runbook with commands, expected timings, and validation queries.
  8. Schedule restore drills at least quarterly. Monthly if your DB is business-critical.

Operational checklist: every backup job must do these

  • Write backup metadata (server version, binlog file/pos or GTID set, timestamp, tool version); see the sketch after this list.
  • Verify file integrity (checksum) after transfer.
  • Alert on failures and on unusual duration/size changes.
  • Keep retention aligned to detection time (how long until you notice corruption or bad deletes).
  • Prove restore viability via periodic drills.
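
A minimal sketch of the metadata and checksum items; the file name and key=value format are just one way to do it, not a standard:

BACKUP=/backups/prod-$(date +%F).sql.gz
{
  echo "taken_at=$(date -Is)"
  echo "tool=$(mysqldump --version)"
  echo "server=$(mysql -N -e 'SELECT VERSION()')"
  mysql -e 'SHOW MASTER STATUS\G'
  sha256sum "$BACKUP"
} > "$BACKUP.meta"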

Restore validation checklist (minimum)

  • Server starts cleanly; InnoDB recovery completes.
  • Row counts match for key tables (approximate is fine as a first pass; see the sketch after this list).
  • Critical indexes exist; migrations state is correct.
  • Application can authenticate and run its top 3 workflows.
  • Replication (if used) can be re-established from restored state.
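
For the row-count item, a quick first-pass check against the restored copy can be as simple as this loop (hosts, database, and table names are assumptions, and client credentials are assumed to be configured):

for t in orders users invoices; do
  prod=$(mysql -h prod-db -N -e "SELECT COUNT(*) FROM prod.$t")
  restored=$(mysql -h restore-test -N -e "SELECT COUNT(*) FROM prod.$t")
  echo "$t: prod=$prod restored=$restored"
done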

FAQ

1) Is mysqldump “good enough” for production?

Yes, if your database is small, you can tolerate restore time, and you run it with the right flags. No, if restore time is measured in hours and your business measures downtime in minutes.

2) What’s the single most important mysqldump flag for InnoDB?

--single-transaction. It gives you a consistent view without locking tables (for InnoDB). It does not solve MyISAM consistency and it can still stress undo/redo if transactions run long.

3) Can I back up by copying /var/lib/mysql while MySQL runs?

Not safely, unless you’re using a coordinated snapshot method that guarantees consistency, or you stop MySQL. Live rsync of the datadir is a classic way to get backups that restore into sadness.

4) Should I compress backups on the VPS?

Sometimes. Compression trades CPU for I/O and network. On small VPSes, use low compression levels and measure. If CPU contention causes latency spikes, compress off-host or compress less.

5) Do physical backups replace binlogs?

No. Physical backups give you a fast full restore point. Binlogs give you point-in-time recovery between full backups. If you need low RPO, you want both.

6) Are physical backups portable between MySQL and MariaDB?

Generally no. Physical backups are tied to on-disk formats and versions. Logical dumps are the portable option for cross-engine moves, even though they’re slower.

7) Should I take backups from a replica?

Yes if you can. It reduces load on primary and gives you a safer place to run heavy reads. But watch replication lag: a laggy replica can quietly violate your RPO.

8) How often should I test restores?

At least quarterly for “important” databases, monthly for business-critical ones, and after major version upgrades or backup tooling changes. Testing restores is how you turn backup files into a recovery capability.

9) What’s the fastest restore method on a VPS?

Physical restore (prepared backup + copy-back) is typically fastest for InnoDB-heavy datasets. The real limiting factor is usually disk throughput on restore, not the tool’s elegance.

10) How do I know if my bottleneck is CPU, disk, or locks?

Disk: high iowait/await/%util in iostat. CPU: backup/compression processes pegging cores in top. Locks: metadata lock waits in SHOW PROCESSLIST.

Next steps: what to do this week

  • Decide RTO/RPO in writing. If you can’t define them, you can’t choose between dump and physical backups rationally.
  • Turn on binlogs (if you need PITR) and plan retention off-host.
  • Run Task 2 and Task 6 on your VPS during a backup window. If you’re saturating I/O, stop guessing and start throttling or moving backups off the primary disk.
  • If MariaDB and dataset is medium/large: implement mariabackup + prepare + off-host copy, with metadata capturing binlog coordinates.
  • If you stay on mysqldump: standardize flags, capture binlog coordinates, and do one full restore drill to a clean VM.
  • Schedule a restore test and make it someone’s recurring responsibility, not a heroic act.

Backups are not about the backup. They’re about the restore. Build the restore you want, then back up in whatever way makes that restore boring.
