You rented a tiny VPS because you’re being responsible. Then you installed MariaDB because you’re being ambitious. Now your “simple app” has a database daemon, a buffer pool, an auth system, connection storms, and a backup story you haven’t tested. Meanwhile your workload is: “a few users, a cron job, and one web process that mostly reads.”
This is the moment to admit a boring truth: for a large class of tiny VPS projects, MariaDB isn’t a database choice. It’s an operations choice. SQLite is often the better default—until it isn’t.
The decision you’re actually making
People frame this as “SQLite vs MariaDB.” That’s a lie of omission. The real decision is:
- Embedded database file (SQLite): fewer moving parts, fewer background threads, fewer knobs, fewer failure modes. You trade away some concurrency patterns and some operational tools.
- Client-server database service (MariaDB): network protocol, authentication, connection pooling, background maintenance, replication options, and a lot of things you can tune incorrectly at 2 a.m.
On a tiny VPS (1–2 vCPU, 1–4 GB RAM, modest SSD), the limiting factor is rarely “SQL feature set.” It’s typically one of:
- RAM pressure and swapping (the silent performance assassin).
- I/O latency spikes (noisy neighbors, burst credits, cheap storage).
- Too many connections or threads for the CPU.
- Backups that exist only in your imagination.
- A concurrency pattern that looks fine in dev and falls apart under a little real-world contention.
If your app is single-node, low to moderate write volume, and you’d rather ship product than babysit a daemon, SQLite deserves first consideration. If you need many concurrent writers, remote access from multiple machines, or online schema changes without downtime, MariaDB starts to look less like overkill and more like adult supervision.
Paraphrased idea from Werner Vogels (engineering/reliability mindset): “Everything fails, so design expecting failure.” Build your data layer around that, not around vibes.
Joke #1: Running MariaDB for a tiny hobby app is like hiring a full-time accountant to manage your pocket change—impressive, but the paperwork will win.
Facts and history that matter in production
Some background isn’t trivia; it explains why these systems behave the way they do.
- SQLite is embedded by design. It’s a library linked into your process, not a daemon. That’s why “connection” cost is microscopic and why file permissions suddenly matter a lot.
- SQLite has been in wide production use since 2000. It’s not “toy tech.” It’s used in browsers, phones, and countless embedded systems because it’s stable and boring.
- SQLite’s WAL mode was a turning point. Write-Ahead Logging improved concurrency for many read-heavy workloads by allowing readers during writes (with constraints).
- MySQL came first; MariaDB is a fork. MariaDB split from MySQL after Oracle’s acquisition of Sun (and with it, MySQL). That fork history matters when you’re reading old “MySQL” advice and assuming it applies.
- InnoDB became the default storage engine. Transactional semantics, crash recovery, and row-level locking are why it dominates. It’s also why memory sizing and fsync behavior matter.
- SQLite’s locking model is file-centric. That single file is both the magic and the constraint. It’s simple, until your workload has many writers or long transactions.
- MariaDB has thread pools and multiple execution paths. Tuning thread handling can make or break performance on small CPUs; default settings can be fine or terrible depending on connection behavior.
- SQLite’s “typed” columns are more flexible than they look. Type affinity is not the same as strict typing, which can be a feature or a footgun depending on your data hygiene.
- MariaDB brings mature tooling. Slow query logs, performance schema alternatives, and replication options are real advantages when you need observability and scale-out patterns.
Workload shapes: who wins where
Case A: single VPS, one app instance, mostly reads
Pick SQLite unless you have a specific reason not to. You get:
- No network hop.
- No connection pool required (though you still need sane application-level usage).
- A single file you can back up, checksum, and ship.
- Lower RAM footprint; no buffer pool you must size.
With WAL mode and short transactions, SQLite can handle a surprising amount of traffic. The trap is long-lived transactions and “write for every request” habits.
Case B: many concurrent writers (queue, chatty API, metrics-ish writes)
MariaDB starts winning because it’s designed to schedule concurrent write workloads with row-level locks and internal concurrency controls. SQLite can do it, but you’ll spend more time wrestling lock contention than enjoying your life.
Case C: multiple app servers or a separate job runner
If multiple machines must write to the same database, SQLite is usually the wrong tool. Yes, you can put SQLite on network storage. No, you generally should not. The moment you introduce network filesystems into a “tiny VPS” architecture, you’ve built a failure simulator.
Case D: you need online migrations and operational knobs
MariaDB gives you more levers: online DDL in many cases, better introspection at runtime, and established migration/replication patterns. SQLite can do migrations, but you don’t get the same operational flexibility when you’re in the weeds.
Case E: data is small and precious
SQLite’s single file is appealing for “precious small” data: configuration state, billing tokens, feature flags, audit logs for a small app, caches you care about. Your backup and restore story can be extremely straightforward—if you do it correctly.
When MariaDB is overkill (and why it hurts)
MariaDB is not “heavy” in the way enterprise people mean heavy. It’s heavy in the way small servers feel: a background service with a memory appetite and a lot of concurrent threads that don’t scale down politely.
The hidden tax: memory and swap
On tiny VPSes, the most common MariaDB failure isn’t “wrong index.” It’s swap thrash. When MariaDB + app + OS page cache exceed RAM, the system will degrade into a slow-motion collapse. Latency goes nonlinear. Requests queue. Timeouts trigger retries. The load increases. You get to watch it burn.
SQLite doesn’t magically avoid memory issues, but its baseline footprint is smaller and it doesn’t run a daemon that wants to be helpful by caching the universe.
Connections, threads, and the “it’s just a VPS” fallacy
Small apps often use a default ORM pool size of 10–50 per process because that’s what the blog post said. On a tiny VPS, that can mean:
- Too many DB threads
- Context switching overhead
- Lock contention amplified by concurrency
- Memory overhead per connection
SQLite mostly sidesteps “connection storms” because connections are in-process and cheap. You can still shoot yourself with concurrency, but it’s harder to accidentally spawn 400 TCP connections to yourself.
Operational surface area
MariaDB brings privileges, network exposure, TLS, user management, data directory ownership, log rotation, upgrades with compatibility, and background tasks. None of those are evil. But for a tiny project, each one is a potential time sink.
The overkill smell test
MariaDB is likely overkill if most of these are true:
- Single VPS, single app instance.
- Writes are occasional (seconds to minutes between writes, or small batches).
- Dataset fits comfortably in OS page cache (or is just small).
- You don’t need remote access from multiple hosts.
- You don’t need high write concurrency.
- You prefer dead-simple backups and restores.
SQLite: the sharp edges you must respect
Concurrency is about transactions, not wishful thinking
SQLite can handle many readers and one writer at a time. WAL mode improves the read/write coexistence, but it doesn’t make “many writers” free. The worst pattern is long write transactions or any transaction that holds locks while you do network I/O or complex application work.
Design rule: keep transactions short, do your work outside the transaction, then write.
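A minimal sketch of that shape, using the sqlite3 CLI and an illustrative events table (the table and column names here are made up for the example); everything slow happens before the transaction begins:
# Anti-pattern: BEGIN; ...call an external API...; COMMIT -- the write lock is held the whole time
# Pattern: prepare the data first, then commit in one short transaction
sqlite3 /var/lib/myapp/app.db <<'SQL'
BEGIN IMMEDIATE;  -- take the write lock up front; fails fast (or waits out busy_timeout) if another writer holds it
INSERT INTO events (user_id, payload) VALUES (42, '{"precomputed":"json"}');
COMMIT;
SQL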
WAL mode is not optional for most web workloads
If you’re building a web app and you stick with the default rollback journal mode, you’re voluntarily choosing more locking pain. WAL is usually the right call. It also changes backup semantics: you must account for WAL files.
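Enabling it is a one-liner, and the setting is stored in the database file, so it persists across connections (the path reuses the example from the tasks below):
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db "PRAGMA journal_mode=WAL;"
wal
From then on you’ll see app.db-wal and app.db-shm next to the database; your backup procedure has to account for them (or use .backup, which does).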
Durability settings are real tradeoffs
SQLite makes it easy to set pragmas like synchronous=NORMAL and feel like a performance wizard. You are also trading away some durability guarantees. If your VPS can lose power or your hypervisor can reboot ungracefully (it can), you should know what you’re signing.
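To see what you’re currently signed up for: 2 is FULL, 1 is NORMAL, 0 is OFF. Note that synchronous is per-connection, so setting it in the CLI only affects that session; the app must set it on every connection it opens.
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db "PRAGMA synchronous;"
2
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db "PRAGMA synchronous=NORMAL; PRAGMA synchronous;"
1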
Network filesystems are a common trap
SQLite expects sane POSIX filesystem semantics. On many network filesystems, locking and fsync behavior are… interpretive dance. Local SSD on the VPS: good. NFS/SMB/“some shared thing”: usually no.
One file means one set of permissions
The database is a file. That’s wonderfully simple and also brutally literal. If your deployment switches users, containers, or working directories, you can break your app with a chmod.
MariaDB: the sharp edges you will meet
Default configs are not “safe for tiny”
MariaDB defaults are often sane for general-purpose servers, but “general-purpose” assumes more resources than your $5 VPS. The buffer pool, per-connection buffers, and background threads can exceed what you can afford.
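For scale, a minimal sketch of what “budgeting for tiny” looks like. The file path follows the Debian/Ubuntu layout and every number is an illustrative assumption to check against your own free -h, not a recommendation:
# /etc/mysql/mariadb.conf.d/99-tiny-vps.cnf -- illustrative values, adjust to your RAM budget
[mysqld]
bind-address            = 127.0.0.1   # no network exposure unless you truly need it
max_connections         = 30          # small box, small pool
innodb_buffer_pool_size = 256M        # leave room for the OS, page cache, and the app
tmp_table_size          = 16M         # keep in-memory temp tables modest
max_heap_table_size     = 16M         # effective limit is the smaller of these two, so move them together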
Per-connection memory is the silent RAM leak
It’s not a leak, it’s worse: it’s “working as designed.” Each connection may allocate buffers for sort, join, temp tables, etc. The result: connection storms turn into memory storms. You see swap, then timeouts, then panic.
fsync and durability: performance depends on storage reality
InnoDB durability depends on flushing logs. On cheap VPS storage, fsync latency can spike. You’ll see periodic stalls and your app will blame “database slowness” as if it’s a personality flaw.
Backups are a process, not a file
With SQLite, backup can be a file-level operation (done correctly). With MariaDB, consistent backups require tooling and thought: logical dumps, physical copies, lock behavior, replication positioning if you go that route. None of this is hard, but it’s easy to not do.
Security footprint
SQLite has no listening port. MariaDB does. If you expose it to the Internet without meaning to, you’ll learn how popular password spraying is. “But I used a strong password” is not a strategy; it’s a hope.
Joke #2: Opening port 3306 to the Internet is a great way to meet bots that never sleep—unlike your on-call rotation.
Practical tasks: commands, outputs, and decisions
You don’t choose databases by reading feature matrices. You choose them by asking the machine what it’s doing. Below are concrete tasks I run on tiny VPSes. Each includes the command, example output, what it means, and the decision you make.
Task 1: Is the VPS swapping?
cr0x@server:~$ free -h
total used free shared buff/cache available
Mem: 1.9Gi 1.6Gi 120Mi 32Mi 230Mi 170Mi
Swap: 1.0Gi 780Mi 244Mi
What it means: Swap is heavily used. On a tiny VPS, this correlates with random multi-second stalls.
Decision: If you’re on MariaDB, reduce memory footprint (buffer pool, connection count) or move to SQLite if workload fits. If you’re on SQLite, reduce app memory or add RAM; SQLite isn’t the cause, but it can still suffer.
Task 2: Are we in I/O wait hell?
cr0x@server:~$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 1 798720 122880 11264 215040 0 5 120 980 520 840 18 7 55 20 0
1 2 798720 118432 11264 213120 0 12 210 1460 600 920 12 6 48 34 0
What it means: High wa (I/O wait) and swapping out (so) indicate storage latency and memory pressure.
Decision: Fix storage and RAM first. Don’t tune SQL while the host is drowning.
Task 3: Is MariaDB the top memory hog?
cr0x@server:~$ ps -eo pid,comm,rss,pcpu --sort=-rss | head
  PID COMMAND           RSS %CPU
 2481 mariadbd       612340 18.2
 3022 python3        248120  9.1
 1102 nginx           45200  0.3
What it means: MariaDB RSS is ~600MB; on 2GB this might be fine, on 1GB it’s not.
Decision: If your dataset is small and write concurrency low, consider SQLite to reclaim RAM. Otherwise tune InnoDB buffer pool and connection usage.
Task 4: How many connections are we actually using (MariaDB)?
cr0x@server:~$ sudo mariadb -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'; SHOW VARIABLES LIKE 'max_connections';"
Variable_name Value
Threads_connected 87
Variable_name Value
max_connections 151
What it means: 87 live connections is a lot for a tiny VPS, and you’re already well past half of max_connections. Each connection has overhead.
Decision: Fix the app pool sizing, add a pooler, or reduce concurrency. Don’t just raise max_connections on a tiny VPS unless you also budget RAM.
Task 5: Is the slow query log telling the truth (MariaDB)?
cr0x@server:~$ sudo mariadb -e "SHOW VARIABLES LIKE 'slow_query_log'; SHOW VARIABLES LIKE 'long_query_time';"
Variable_name Value
slow_query_log OFF
Variable_name Value
long_query_time 10.000000
What it means: Logging is off and threshold is too high. You’re blind.
Decision: Enable slow query log temporarily with a low threshold (like 0.2–0.5s) to catch the real offenders. Then turn it back off (or raise the threshold) to avoid log spam.
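Both knobs are dynamic, so no restart is needed; note that a global long_query_time change generally applies only to connections opened after the change, so recycle the app’s pool to pick it up. An illustrative toggle:
cr0x@server:~$ sudo mariadb -e "SET GLOBAL slow_query_log=ON; SET GLOBAL long_query_time=0.3;"
cr0x@server:~$ sudo mariadb -e "SET GLOBAL slow_query_log=OFF;"   # when you're done looking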
Task 6: Are we CPU-bound or I/O-bound (quick look)?
cr0x@server:~$ top -b -n 1 | head -15
top - 12:00:11 up 21 days, 2:14, 1 user, load average: 3.12, 2.44, 1.98
Tasks: 132 total, 2 running, 130 sleeping, 0 stopped, 0 zombie
%Cpu(s): 72.0 us, 8.0 sy, 0.0 ni, 12.0 id, 8.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 2048.0 total, 140.0 free, 1550.0 used, 358.0 buff/cache
What it means: High user CPU suggests query execution or app logic; non-trivial I/O wait suggests storage latency too.
Decision: If CPU is high with few queries, check query plans and indexes. If I/O wait spikes correlate with commits, check fsync behavior and disk health.
Task 7: SQLite quick health check (integrity)
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db "PRAGMA integrity_check;"
ok
What it means: Database file is structurally consistent.
Decision: If not “ok,” stop writes, take a copy, and plan recovery. Corruption is rare but not imaginary—especially with unsafe storage or hard kills.
Task 8: SQLite journal mode and synchronous settings
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db "PRAGMA journal_mode; PRAGMA synchronous;"
wal
2
What it means: WAL mode is enabled; synchronous=2 means FULL (durable, slower).
Decision: For many VPS apps, WAL is a must. Keep synchronous FULL if data correctness matters. If you choose NORMAL, do it knowingly and document the risk.
Task 9: SQLite lock contention symptoms (busy timeouts)
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db "PRAGMA busy_timeout;"
0
What it means: No busy timeout; writers may fail immediately under contention.
Decision: Set a reasonable busy timeout in the app (or via PRAGMA per connection) and fix transaction length. Busy timeouts are band-aids; long transactions are the infection.
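busy_timeout is per-connection, so the app must set it each time it opens the database; a CLI check like the task above only tells you about the CLI’s own session. Setting it looks like this (5000 ms is an illustrative value):
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db "PRAGMA busy_timeout=5000;"
5000
Most drivers expose this as a connect option or an init statement; set it there, not in a one-off shell session.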
Task 10: MariaDB InnoDB buffer pool size check
cr0x@server:~$ sudo mariadb -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
Variable_name Value
innodb_buffer_pool_size 1073741824
What it means: 1GB buffer pool. On a 1–2GB system this may be too large once you include OS, page cache, and app memory.
Decision: On small VPSes, start conservative. If swapping occurs, shrink it. RAM you don’t have is not a cache, it’s a timeout generator.
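On reasonably recent MariaDB versions innodb_buffer_pool_size is dynamic, so you can shrink it online; 256 MB here is an illustrative figure, the resize rounds to the buffer pool chunk size, and you still need to persist the new value in your config so it survives a restart:
cr0x@server:~$ sudo mariadb -e "SET GLOBAL innodb_buffer_pool_size = 256*1024*1024;"
If your version predates online resize, change the config and schedule a restart instead.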
Task 11: MariaDB: are we hitting temporary tables on disk?
cr0x@server:~$ sudo mariadb -e "SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables'; SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';"
Variable_name Value
Created_tmp_disk_tables 1842
Variable_name Value
Created_tmp_tables 9621
What it means: A meaningful fraction of temp tables are hitting disk, which can be slow on budget storage.
Decision: Check queries that cause sorts/joins; add indexes; consider increasing tmp table limits only if you have RAM headroom (rare on tiny VPSes).
Task 12: MariaDB: quick slowest queries snapshot (processlist)
cr0x@server:~$ sudo mariadb -e "SHOW FULL PROCESSLIST;"
Id User Host db Command Time State Info
412 app localhost mydb Query 12 Sending data SELECT * FROM events WHERE user_id=...
431 app localhost mydb Sleep 55 NULL
What it means: One query has been running 12 seconds; many sleeping connections might indicate pool issues.
Decision: Find that query, explain it, add an index or rewrite. For sleepers, reduce pool size or lifetime; stop hoarding idle connections on a tiny box.
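Using the query from the processlist above (table and column names come from that example; the index name is made up), the usual order is explain first, index second:
cr0x@server:~$ sudo mariadb mydb -e "EXPLAIN SELECT * FROM events WHERE user_id = 42;"
cr0x@server:~$ sudo mariadb mydb -e "CREATE INDEX idx_events_user_id ON events (user_id);"
If the EXPLAIN shows type ALL over a large row count, it’s a full scan and the index will usually pay for itself; creating an index still costs I/O and brief locking, so do it off-peak on a busy table.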
Task 13: MariaDB: verify you’re not exposed on the network
cr0x@server:~$ sudo ss -lntp | grep -E '3306|mysqld|mariadbd'
LISTEN 0 80 127.0.0.1:3306 0.0.0.0:* users:(("mariadbd",pid=2481,fd=21))
What it means: Bound to localhost only, which is good for single-host apps.
Decision: If it’s listening on 0.0.0.0, fix bind-address and firewall rules immediately. Don’t “plan to later.” Later is when the credential stuffing starts.
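If it is on 0.0.0.0, the fix is one config line plus a restart; the config path matches the sketch earlier in this article, and the ufw rule is an assumption about which firewall you run:
cr0x@server:~$ grep bind-address /etc/mysql/mariadb.conf.d/99-tiny-vps.cnf
bind-address            = 127.0.0.1
cr0x@server:~$ sudo systemctl restart mariadb
cr0x@server:~$ sudo ufw deny 3306/tcp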
Task 14: SQLite backup the right way (online-safe)
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db ".backup '/var/backups/app.db.bak'"
What it means: SQLite creates a consistent backup even while the DB is in use (within SQLite’s guarantees).
Decision: Use .backup or the backup API, not naive cp, unless you’ve stopped writers and accounted for WAL files.
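A cron-able sketch with hypothetical paths; the integrity check runs against the copy, because an unverified backup is a rumor:
#!/bin/bash
# Nightly SQLite backup sketch -- paths and retention are illustrative
set -euo pipefail
DB=/var/lib/myapp/app.db
OUT=/var/backups/app-$(date +%F).db
sqlite3 "$DB" ".backup '$OUT'"                          # online-consistent copy, WAL accounted for
sqlite3 "$OUT" "PRAGMA integrity_check;" | grep -qx ok  # fail the job if the copy is not clean
gzip -f "$OUT"
# ship "$OUT.gz" off-host here (rsync, rclone, object storage -- whatever you already operate)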
Task 15: MariaDB logical backup sanity check
cr0x@server:~$ sudo mysqldump --single-transaction --quick --routines mydb | head
-- MySQL dump 10.19 Distrib 10.11.6-MariaDB, for debian-linux-gnu (x86_64)
--
-- Host: localhost Database: mydb
-- ------------------------------------------------------
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
What it means: You can produce a consistent logical dump without locking tables aggressively (for InnoDB).
Decision: If this dump takes too long or impacts performance, you may need better scheduling, or you may be outgrowing the tiny VPS plan.
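A minimal scheduled version, reusing the mydb name from the task above with illustrative paths; local retention is a convenience, the off-host copy is the actual backup:
#!/bin/bash
# Nightly MariaDB logical dump sketch -- database name and paths are illustrative
set -euo pipefail
OUT=/var/backups/mydb-$(date +%F).sql.gz
mysqldump --single-transaction --quick --routines mydb | gzip > "$OUT"
find /var/backups -name 'mydb-*.sql.gz' -mtime +7 -delete   # keep a week locally
# copy "$OUT" off-host, then restore it somewhere disposable to prove it works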
Task 16: Disk space: are you about to fail in a dumb way?
cr0x@server:~$ df -h /var/lib
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 24G 0.9G 97% /
What it means: You are one log spike away from downtime. Databases hate “disk full.” They will punish you creatively.
Decision: Clean up logs, move backups off-host, expand disk. This is not optional maintenance; it’s uptime.
Fast diagnosis playbook
This is the “stop guessing” sequence. Run it when your tiny VPS app feels slow and people start blaming “the database” like it’s a coworker who never replies.
First: host-level constraints (60 seconds)
- Swap and RAM: free -h. If swap is active and RAM is low, that’s your top suspect.
- I/O wait: vmstat 1 5 or iostat -xz 1 5 (if installed). High wa means storage is part of the story.
- Disk space: df -h. Near-full disks cause stalls, weirdness, and sometimes corruption after crashes.
Second: process-level reality (2–3 minutes)
- Who is using CPU and memory: ps sorted by RSS, and top. If MariaDB dominates RAM on a 1GB host, tune or reconsider the architecture.
- Connection count (MariaDB): check Threads_connected versus your pool config.
- Open file descriptors (both): if your app hits limits, SQLite can fail to open the DB; MariaDB can fail to accept connections.
Third: database-specific bottlenecks (5–15 minutes)
- MariaDB: check processlist for long queries; enable slow query log briefly; look for tmp disk tables and InnoDB log stalls.
- SQLite: verify WAL mode; check busy_timeout and app logs for “database is locked”; identify long transactions; run integrity_check if you suspect corruption.
- App layer: confirm your ORM isn’t doing N+1 queries, and that you’re not starting a transaction per request for no reason.
If you do only one thing: fix swap and I/O wait before you touch SQL tuning. Most “database performance” problems on tiny VPSes are actually “host starvation” problems wearing a SQL costume.
Three corporate mini-stories (from the scar tissue archive)
1) Incident caused by a wrong assumption: “SQLite can’t do concurrency”
A small internal tool lived on a single VM. It handled a modest stream of events, wrote a few rows per second, and served dashboards to maybe a dozen people. It started on SQLite, ran fine, and nobody thought about it for months—until an engineer read a thread claiming “SQLite is not for production.”
They swapped it to MariaDB in a hurry, because production equals client-server, right? They set up a default MariaDB instance, moved the schema, and pointed the app at it. It worked in staging. It worked on their laptop. It worked for a day.
Then the VM began to stall. The dashboards timed out. The job queue piled up. The team chased “slow queries,” added a couple indexes, and got a brief improvement. But the real culprit was simple: the MariaDB instance and the app competed for 2GB RAM, and under load the host swapped. Latency spikes caused retries. Retries caused more connections. More connections caused more memory. It spiraled.
The wrong assumption wasn’t about SQL features. It was about systems behavior. SQLite had been fine because it had a smaller footprint and didn’t trigger swap. MariaDB could have been fine too, but not with the default connection pool and without memory budgeting.
The fix wasn’t heroic: they reverted to SQLite, enabled WAL mode, added a busy timeout, and fixed the app to batch writes. The incident ended not with a scaling breakthrough, but with a reminder that “production-ready” means “operationally appropriate,” not “most popular in job postings.”
2) Optimization that backfired: “Make SQLite faster”
An API on a budget VPS used SQLite. Writes were small but frequent. Someone noticed occasional write latency and decided to “optimize” by changing pragmas: synchronous=OFF, larger cache sizes, aggressive temp storage in memory. Benchmarks on the developer machine looked fantastic. They deployed on Friday, because optimism is an energy source.
For a few days, it did feel faster. Then the VPS had an unplanned reboot—hypervisor maintenance, the kind you don’t get a calendar invite for. The app came back up and started failing requests. The SQLite file wasn’t catastrophically corrupted, but it had inconsistent application-level state. A few critical rows were missing from a table that was supposed to be append-only.
The debugging was ugly because the database was “fine” structurally. No obvious corruption. Just missing writes that never made it to durable storage. That’s the cost of synchronous=OFF: you asked SQLite to lie to you about durability, and it did.
They restored from backups and reprocessed some events. Then they rolled the pragmas back to sane defaults, kept WAL mode, and instead optimized the actual application behavior: fewer commits, more batching, shorter transactions, and proper indexes. The performance stayed good. The data stopped playing vanishing acts.
3) Boring but correct practice that saved the day: “Test the restore”
A small SaaS ran on a single VPS with MariaDB. Nothing fancy: one primary, no replicas, no clusters, no elaborate tooling. But the operator had a ritual: weekly restore tests onto a throwaway VM. Not a theoretical plan, an actual restore.
One morning, the VPS disk began returning I/O errors. The database service started crashing. The filesystem remounted read-only. Support tickets arrived, then stopped, because the app was down. The operator didn’t attempt heroic repairs. They stopped the service, captured what logs they could, and spun up a new VPS.
Because restore tests were routine, the recovery steps were written and correct. They restored last night’s dump, replayed a small set of application events from a queue, and brought the service up. The downtime was annoying, but bounded. No improvisation under stress.
The boring practice wasn’t “use MariaDB.” It was “practice restores.” Databases don’t fail according to your schedule, so your recovery can’t be an untested hypothesis.
Common mistakes: symptoms → root cause → fix
1) “Database is locked” errors in SQLite under light traffic
Symptoms: sporadic 500s, logs show database is locked, spikes during background jobs.
Root cause: long transactions (often “BEGIN; do work; call external service; COMMIT”), no busy timeout, rollback journal mode, or too many writers.
Fix: enable WAL mode; shorten transactions; set busy timeout; batch writes; ensure only one writer path at a time if needed.
2) MariaDB “Too many connections” on a tiny VPS
Symptoms: app errors connecting; DB logs show connection limits; CPU rises during incidents.
Root cause: oversized ORM pools, missing connection reuse, or retries causing storms.
Fix: reduce pool size; implement connection pooling properly; cap retries with backoff; monitor Threads_connected.
3) Periodic 2–10 second stalls on MariaDB writes
Symptoms: latency spikes around commits; CPU is not pegged; users complain of “random slowness.”
Root cause: storage fsync latency spikes, cheap VPS disk, InnoDB log flush behavior.
Fix: move to better storage tier; reduce write amplification (batching); ensure InnoDB log configuration is sane; avoid saturating disk with unrelated jobs at the same time.
4) SQLite “works in dev” but fails in container deploy
Symptoms: cannot open database file, permission denied, or DB file resets unexpectedly.
Root cause: wrong volume mount, wrong UID/GID, or ephemeral filesystem in container.
Fix: explicit persistent volume; correct ownership; set file path via config; ensure atomic deployments don’t replace the DB file.
5) MariaDB uses all RAM slowly over time
Symptoms: memory usage grows; swap begins; performance degrades after days.
Root cause: buffer pool sized too large, per-connection buffers triggered by complex queries, too many concurrent connections.
Fix: shrink buffer pool; reduce concurrency; rewrite expensive queries; monitor temp tables and sort activity.
6) “Backup exists” but restore fails
Symptoms: you run a restore drill and it errors, or the restored data is inconsistent.
Root cause: SQLite backup taken incorrectly while WAL active; MariaDB dump taken without transactional consistency; missing routines; wrong charset/collation assumptions.
Fix: SQLite: use .backup or stop writers and copy DB+WAL; MariaDB: use --single-transaction for InnoDB and test restore regularly.
7) “SQLite is slow” during large reports
Symptoms: long-running SELECT blocks writes; web requests time out.
Root cause: long read transactions holding snapshots; insufficient indexes; large scans on cheap disks.
Fix: add indexes; paginate; move reporting to a replica/ETL file; consider MariaDB if reporting and writes must coexist with higher concurrency.
8) MariaDB upgrade breaks the app unexpectedly
Symptoms: queries behave differently; strictness changes; authentication plugin mismatch.
Root cause: version-specific defaults and compatibility differences between MySQL/MariaDB variants; insufficient staging tests.
Fix: pin versions; test upgrades in staging with real data; record config diffs; keep rollback plan.
Checklists / step-by-step plan
Step-by-step: choosing SQLite on a tiny VPS (the “ship it” path)
- Confirm workload shape: one node, low-to-moderate writes, no need for multi-host writers.
- Enable WAL mode: set PRAGMA journal_mode=WAL; at DB init.
- Set sane durability: prefer synchronous=FULL for correctness; consider NORMAL only if you accept data loss on sudden power loss.
- Set busy timeout: prevent immediate failures under brief contention; still fix transaction length.
- Design for one writer: serialize write-heavy code paths or batch writes via a queue in-process.
- Backups: use .backup; ship backups off-host; test restore monthly.
- Observability: log query latency in-app; catch “database is locked” errors with context.
Step-by-step: if you insist on MariaDB on a tiny VPS (do it safely)
- Bind to localhost: only expose the DB if you absolutely need remote access.
- Budget RAM: size buffer pool conservatively; leave RAM for OS and app.
- Fix connection pooling: small pool sizes; cap concurrency; avoid per-request connect/disconnect.
- Enable minimal observability: slow query log with a sane threshold during troubleshooting; keep an eye on temp disk tables.
- Backup and restore drills: automate dumps; rotate; test restores to a fresh instance.
- Plan upgrades: pin version; test; schedule; have rollback.
Step-by-step: migration path (SQLite now, MariaDB later) without drama
- Keep SQL portable: avoid SQLite-only quirks if you expect to migrate (e.g., relying on flexible typing as a “feature”).
- Use explicit migrations: version your schema, don’t rely on “just create tables on startup.”
- Abstract DB access: keep a small data access layer; don’t spray raw SQL everywhere unless you enjoy archaeology.
- Export/Import plan: use a repeatable dump format (CSV for tables, or application-level export) and verify row counts and checksums; see the sketch after this list.
- Dual-run briefly: write to one, read from the other only during a planned cutover window, if the app supports it.
- Cut over with a freeze: stop writes, do final sync, switch, validate, then resume.
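For the export/import verification step above, a minimal sketch; the events table is the illustrative one from earlier examples, and how you load the CSV into MariaDB (LOAD DATA, an app-level importer) is up to you:
cr0x@server:~$ sqlite3 -header -csv /var/lib/myapp/app.db "SELECT * FROM events;" > events.csv
cr0x@server:~$ sqlite3 /var/lib/myapp/app.db "SELECT COUNT(*) FROM events;"
cr0x@server:~$ sudo mariadb mydb -e "SELECT COUNT(*) FROM events;"
The two counts must match at cutover; if you want more than counts, compare per-table checksums over a stable ordering before you declare victory.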
FAQ
1) Is SQLite “production-ready” for a VPS web app?
Yes, if your workload matches: single node, modest write concurrency, short transactions, WAL mode, and real backups. “Production-ready” is about behavior under failure, not whether it has a daemon.
2) How many concurrent users can SQLite handle?
Wrong unit. Think in concurrent writers and transaction duration. Many readers are fine; many writers with long transactions will hurt. If your app writes on every request, you’re building a write workload whether you meant to or not.
3) Does WAL mode solve SQLite locking?
It improves the common case by allowing readers during writes, but it doesn’t make writes concurrent. You still get one writer at a time. You must still keep transactions short and avoid long-lived locks.
4) Can I put SQLite on NFS to share between servers?
You can, but you’re betting your data on filesystem semantics and network stability. For most small teams, that’s a bad bet. If you need multi-host writers, you’re in MariaDB (or another client-server DB) territory.
5) Why does MariaDB feel slower on a tiny VPS than SQLite?
It’s not always slower, but it can be: more memory overhead, more threads, more fsync pressure, and a network protocol even on localhost. On constrained hosts, overhead becomes user-visible latency.
6) What’s the simplest safe backup for SQLite?
sqlite3 app.db ".backup 'app.db.bak'", then copy the backup off-host. Test restores. If you just copy the live DB file without accounting for WAL, you can get inconsistent backups.
7) What’s the simplest safe backup for MariaDB on a single VPS?
mysqldump --single-transaction for InnoDB tables, scheduled off-peak, rotated, copied off-host, and restore-tested. Physical backups are faster but add operational complexity.
8) When should I move from SQLite to MariaDB?
When you need multi-host access, high write concurrency, richer operational tooling, or you’re spending time engineering around SQLite’s “one writer” constraint. Also when you need to scale beyond a single node reliably.
9) Is MariaDB safer than SQLite?
Not automatically. Both can be safe or unsafe depending on configuration and operational discipline. SQLite’s risks are usually around locking and backup correctness; MariaDB’s risks are often around resource tuning, exposure, and operational complexity.
10) Can I use both?
Yes, and sometimes it’s the cleanest approach: SQLite for local state and queues; MariaDB for shared transactional data. Just be honest about the complexity you’re adding.
Next steps you can do today
- If you’re undecided: measure your write concurrency and transaction length. Those are the hinge variables. Not “features.”
- If you’re on MariaDB on a tiny VPS: check swap, buffer pool size, and connection counts. Cap pools. Bind to localhost. Enable slow query logging briefly when troubleshooting.
- If you’re on SQLite: enable WAL mode, set a busy timeout, and audit your code for long transactions. Implement the .backup routine and run a restore test.
- If you’re planning to grow: keep your schema migrations disciplined and your SQL portable so moving to MariaDB later is a planned change, not a midnight rewrite.
The “right” database for a tiny VPS project is the one that stays out of your way while still being honest about failure. SQLite is often that database. MariaDB is excellent when you actually need it. Don’t pay the operational tax early unless you’re sure you’ll use the services you bought.