Proxmox “VM is locked (backup/snapshot)”: how to remove a lock safely

You go to reboot a VM. Or move it. Or take a quick snapshot before a risky change. Proxmox replies with the kind of message that feels polite but absolute: “VM is locked (backup/snapshot)”. Production is waiting. Your change window is shrinking. Someone is asking if you can “just unlock it.”

You can. But you shouldn’t—at least not until you’ve proven the lock is stale. In Proxmox, locks are there to prevent you from doing the one thing that keeps SREs employed: corrupting state by racing a snapshot/backup/replication with a destructive operation.

What the lock actually means (and what it doesn’t)

A Proxmox VM lock is a coordination mechanism. It’s a small piece of metadata that says: “Something is doing an operation that must not overlap with other operations.” Typically that “something” is one of these:

  • backup (vzdump, PBS, or third-party integration)
  • snapshot (qemu snapshot and/or storage snapshot)
  • clone (often snapshot-based)
  • migration (live or offline)
  • replication (Proxmox replication jobs)

The message usually appears when you try to:

  • start/stop/reboot a VM (sometimes allowed, often blocked, depending on the lock type and the action)
  • delete a VM
  • remove a disk
  • take another snapshot
  • move storage

The core rule: unlock only when you can prove nothing is actually running

A stale lock is annoying. A premature unlock can be expensive. If a backup is actively streaming disk state and you unlock + delete the snapshot under it, you’re creating a “backup” that is neither consistent nor honest. It will restore into a VM-shaped mystery.

Locks can also be “true” even if nothing seems active in the GUI. Maybe the Proxmox worker process died, the node rebooted mid-job, storage hiccupped, or a task got stuck in an uninterruptible kernel wait. The UI isn’t lying; it’s just not always telling you why.

One dry comfort: locks are rarely the root problem. They’re the symptom. The root problem is usually storage latency, a wedged backup process, a snapshot that can’t be committed, or an agent freeze that never thawed.

Joke #1: A Proxmox lock is like a “Do Not Disturb” sign on a hotel door—except the housekeeping cart is your storage array and it weighs several tons.

Fast diagnosis playbook (first/second/third checks)

This is the “stop guessing” loop. The goal: decide quickly whether you should wait, cancel, or unlock.

First: confirm the lock type and find the task that set it

  1. Identify VMID and lock type from the error (backup/snapshot/migrate/clone/replicate).
  2. Check recent tasks on the node and cluster.
  3. Look for a running vzdump process or Proxmox task worker (spawned by pvedaemon or pvescheduler) tied to that VMID.

Second: prove whether work is still happening

  1. Check I/O on the storage backing the VM (ZFS zpool iostat, Ceph rbd status, NFS mount stats).
  2. Check QEMU process and whether it’s blocked (ps, top, and kernel “D state” hints).
  3. Check snapshot/backup artifact progress (PBS datastore chunk ingest, vzdump temp files, ZFS snapshot presence).

Third: decide the safest resolution path

  • If a task is still making progress: wait. If it’s mission-critical, move the human problem (change window) instead of the disk problem.
  • If a task is stuck but cancelable: stop the job cleanly (kill the right worker, cancel the task, remove a stale PID reference).
  • If everything is dead and only the lock remains: unlock, then immediately validate VM disks and snapshot state.

If you’re stuck choosing between “unlock” and “do nothing,” choose “do nothing” until you have evidence. Locks are cheap; data recovery isn’t.

Interesting facts and history that explain today’s behavior

  • Fact 1: Proxmox VM config is plain text, stored under /etc/pve/ on a distributed filesystem (pmxcfs). That makes locks easy to represent as a line in a config file.
  • Fact 2: pmxcfs is a cluster filesystem backed by Corosync; it’s designed for small config/state, not bulk data. When it’s unhappy, config writes (including lock updates) can lag or fail.
  • Fact 3: The “backup” lock is usually set by vzdump (classic) or backup-related workers; it’s intended to prevent overlapping snapshot/backup operations that would break consistency.
  • Fact 4: Snapshots in Proxmox can be “QEMU internal,” “storage snapshots,” or a mix—depending on storage type and config. That’s why clearing a lock without clearing storage state can leave you with a snapshot you can’t delete.
  • Fact 5: Guest-agent “freeze/thaw” is used to improve filesystem consistency, but if the agent is missing or hung, the snapshot orchestration can stall in surprising places.
  • Fact 6: QEMU snapshot operations depend on QMP (QEMU Machine Protocol). If QEMU is wedged or QMP stops responding, Proxmox can’t complete the snapshot workflow, and the lock may remain.
  • Fact 7: “Stuck backup” often isn’t a backup problem. It’s an I/O latency problem. A slow or failing disk can make a snapshot commit look like “nothing is happening.” It is happening, just glacially.
  • Fact 8: On CoW storages (ZFS, Ceph), snapshots are cheap to create and sometimes expensive to delete—especially if lots of blocks changed after the snapshot.
  • Fact 9: Proxmox historically supported multiple storage backends with different semantics (LVM-thin, ZFS, RBD, NFS). Locks are a unifying abstraction over messy reality.

Where Proxmox stores locks and how they get stuck

The lock is usually a line in the VM config

Most of the time, the lock is recorded in the VM’s configuration file under pmxcfs, e.g.:

  • /etc/pve/qemu-server/101.conf for VMID 101
  • /etc/pve/lxc/202.conf for container 202 (containers have locks too)

You’ll typically see something like:

  • lock: backup
  • lock: snapshot
  • lock: migrate

Proxmox sets the lock at the start of a sensitive operation, then clears it at the end. If the worker crashes, the node reboots, pmxcfs can’t commit the config change, or the job is killed in a bad moment, the “clear lock” step never runs.

Why a stale lock is safer than an optimistic system

Proxmox chooses the conservative failure mode: if it isn’t sure, it blocks. That annoys humans, but it prevents you from doing destructive operations while there might be a snapshot merge in flight.

Here’s the operational reality: the “lock” is not a kernel lock. It does not guarantee a snapshot is actually active. It only means: at some point, Proxmox believed it was active, and it hasn’t recorded completion.

What “safe unlock” really means

Safe unlock is not “run qm unlock and move on.” Safe unlock is:

  1. Find the job that set the lock.
  2. Confirm it is not running anymore (or is stuck beyond recovery).
  3. Confirm storage layer is not mid-transaction (snapshot commit, replication send, RBD flatten, etc.).
  4. Remove the lock.
  5. Validate that the VM can start and snapshot operations are sane again.

Practical tasks: commands, outputs, and decisions (12+)

All examples assume VMID 101 on node pve1. Adjust to your environment. The commands are boring on purpose; boring is how you keep data.

Task 1: Confirm the lock and its type

cr0x@server:~$ qm config 101 | grep -E '^lock:|^name:|^scsi|^virtio'
name: app-prod-01
lock: backup
scsi0: local-zfs:vm-101-disk-0,size=120G

What it means: Proxmox believes a backup workflow is active or unfinished for VM 101.

Decision: Don’t unlock yet. Find the backup task and confirm status.

Task 2: Find recent tasks referencing the VMID

cr0x@server:~$ pvesh get /nodes/pve1/tasks --limit 20 --output-format json-pretty | grep -n "101"
42:     "upid": "UPID:pve1:0000A1B2:0123ABCD:676D7B22:vzdump:101:root@pam:",

What it means: There was a vzdump job for VM 101.

Decision: Pull the log and see whether it finished, failed, or is still running.
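
If your Proxmox release exposes the vmid filter on the tasks endpoint (recent versions do; verify with pvesh usage on yours), you can skip the grep entirely. A minimal sketch:

cr0x@server:~$ pvesh get /nodes/pve1/tasks --vmid 101 --limit 10   # --vmid filter: confirm availability on your release

Either way, the prize is the UPID; every other check hangs off it.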

Task 3: Read the task log (fastest truth in Proxmox)

cr0x@server:~$ pvesh get /nodes/pve1/tasks/UPID:pve1:0000A1B2:0123ABCD:676D7B22:vzdump:101:root@pam:/log
INFO: starting new backup job: vzdump 101 --storage backup-nfs --mode snapshot --compress zstd
INFO: creating Proxmox backup snapshot
INFO: status = running

What it means: Task says “running” and hasn’t logged completion. This could be genuinely running or stuck.

Decision: Verify whether the process exists and whether it’s making progress.

Task 4: Check whether a vzdump worker is actually running

cr0x@server:~$ ps aux | egrep 'vzdump|pvescheduler|pve-backup' | grep -v grep
root      41233  0.5  0.2  24572  9120 ?        Ss   01:12   0:03 vzdump 101 --storage backup-nfs --mode snapshot --compress zstd

What it means: A vzdump process is alive.

Decision: Do not unlock. Next: determine if it’s progressing or hung.

Task 5: Watch backup temp files for movement (classic vzdump behavior)

cr0x@server:~$ ls -lh /var/lib/vz/dump | tail -n 5
-rw-r--r-- 1 root root  88G Dec 26 01:18 vzdump-qemu-101-2025_12_26-01_12_04.vma.zst.tmp

What it means: There’s a temp file. If its size increases over time, backup is progressing.

Decision: Sample the size twice (or use watch). If growth continues, wait. If it stalls, investigate storage and process state.

Task 6: Check whether the temp file is still growing

cr0x@server:~$ stat -c '%n %s %y' /var/lib/vz/dump/vzdump-qemu-101-2025_12_26-01_12_04.vma.zst.tmp
/var/lib/vz/dump/vzdump-qemu-101-2025_12_26-01_12_04.vma.zst.tmp 95182411264 2025-12-26 01:18:27.000000000 +0000

What it means: This is one point-in-time. Run it again after 30–60 seconds.

Decision: If size and mtime don’t change, treat the job as possibly stuck and move to storage checks.
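
If you would rather let the terminal do the sampling, watch re-runs the command on an interval and -d highlights whatever changed between refreshes. A minimal sketch using the same temp file as above:

cr0x@server:~$ watch -n 30 -d "stat -c '%s %y' /var/lib/vz/dump/vzdump-qemu-101-2025_12_26-01_12_04.vma.zst.tmp"

A size that ticks upward every refresh means the backup is alive, just slow; a size and mtime frozen for several minutes push you toward the storage checks below.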

Task 7: Check VM process health and QEMU responsiveness

cr0x@server:~$ qm status 101
status: running

What it means: VM is still running; snapshot mode backups often keep VM running while snapshotting.

Decision: If backup is stuck, don’t reboot the VM “to fix it.” That’s how you turn a stuck backup into a stuck VM plus a stuck backup.

Task 8: Look for kernel “D state” on backup or QEMU threads

cr0x@server:~$ ps -eo pid,stat,cmd | egrep 'vzdump 101|kvm -id 101' | head
41233 D    vzdump 101 --storage backup-nfs --mode snapshot --compress zstd
20987 Sl   /usr/bin/kvm -id 101 -name app-prod-01 ...

What it means: The vzdump process is in D state (uninterruptible sleep), usually waiting on I/O (NFS, disk, ZFS, Ceph, etc.). Killing it may not work until I/O returns.

Decision: Don’t start with kill -9. First find the I/O bottleneck.
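
To see roughly what the process is waiting on, ask the kernel rather than the GUI. wchan is a single symbol name but often enough (io_schedule or rpc/NFS-related waits point straight at storage); /proc/<pid>/stack needs root and a kernel built with stack tracing, so treat it as optional. A hedged sketch using the PID from above:

cr0x@server:~$ cat /proc/41233/wchan; echo
cr0x@server:~$ dmesg -T | egrep -i 'hung task|not responding|i/o error' | tail -n 20

Kernel “hung task” warnings or “server not responding” lines in dmesg usually name the real culprit, and it is almost never Proxmox itself.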

Task 9: Identify which storage the VM disk lives on

cr0x@server:~$ qm config 101 | grep -E '^(scsi|virtio|sata)[0-9]+:'
scsi0: local-zfs:vm-101-disk-0,size=120G

What it means: Disk is on ZFS storage local-zfs. Snapshot/delete semantics depend on ZFS.

Decision: Check ZFS pool health and snapshot state before you consider unlocking.

Task 10: Check ZFS pool health (fast corruption smoke test)

cr0x@server:~$ zpool status -x
all pools are healthy

What it means: ZFS isn’t reporting known errors.

Decision: If pool is unhealthy, stop here and treat it as a storage incident. Unlocking won’t save you.

Task 11: Check ZFS I/O and latency

cr0x@server:~$ zpool iostat -v 1 3
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
rpool       1.10T  700G     12    340   2.1M  110M
  mirror    1.10T  700G     12    340   2.1M  110M
    sda         -      -      6    170   1.0M   55M
    sdb         -      -      6    170   1.1M   55M

What it means: Writes are happening; backup may be progressing (snapshot commit, zvol activity, etc.). This doesn’t guarantee it’s the backup, but it argues against “nothing is going on.”

Decision: If you see steady I/O and no errors, prefer waiting over forceful intervention.

Task 12: Check whether Proxmox snapshot metadata exists

cr0x@server:~$ qm listsnapshot 101
`-> pre-backup-2025-12-26 2025-12-26 01:12:10

What it means: A snapshot exists. Backups in snapshot mode often create a snapshot and later delete/commit it.

Decision: If backup is stuck and the snapshot remains, you’ll need to handle snapshot cleanup cautiously after stopping the job.
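
It is worth cross-checking the storage layer here, because Proxmox snapshot metadata and ZFS snapshots can drift apart after a crashed job. A minimal sketch, assuming the disk from Task 9 lives on a ZFS zvol named something like rpool/data/vm-101-disk-0 (adjust to your pool layout):

cr0x@server:~$ zfs list -t snapshot -o name,used,creation | grep vm-101   # dataset naming is an assumption

If ZFS shows a snapshot that qm listsnapshot does not (or the other way around), write it down now; reconciling the two is part of the cleanup, not something to discover after unlocking.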

Task 13: Look for lock in the VM config file (don’t edit yet, just observe)

cr0x@server:~$ grep -n '^lock:' /etc/pve/qemu-server/101.conf
7:lock: backup

What it means: The lock is recorded in config state.

Decision: Prefer qm unlock 101 over manual file edits. Manual edits are for when pmxcfs is broken or you’re in incident mode with a change record.

Task 14: Cancel a running task cleanly (when it is safe and you must)

If the job is truly stuck and business demands action, you can stop the task. First try a gentle termination.

cr0x@server:~$ kill -TERM 41233
cr0x@server:~$ sleep 2
cr0x@server:~$ ps -p 41233 -o pid,stat,cmd
  PID STAT CMD
41233 D    vzdump 101 --storage backup-nfs --mode snapshot --compress zstd

What it means: Still in D state; it can’t handle signals because it’s blocked on I/O.

Decision: Do not escalate to kill -9 as your “plan.” Fix the I/O condition (NFS down, storage path flapping, Ceph stuck, etc.) or accept waiting.
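
Also note that Proxmox has a supported way to stop a task: the Stop button on the task in the GUI, or the equivalent API call. It will not free a process stuck in D state either, but it keeps the task bookkeeping cleaner than raw signals. A hedged sketch using the UPID from Task 2:

cr0x@server:~$ pvesh delete /nodes/pve1/tasks/UPID:pve1:0000A1B2:0123ABCD:676D7B22:vzdump:101:root@pam:

If the task refuses to die, you are back at the same conclusion: fix the I/O dependency first.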

Task 15: Once the task is gone, remove the lock the supported way

cr0x@server:~$ qm unlock 101

What it means: The lock line should be removed from config if no internal constraints block it.

Decision: Immediately re-check config and snapshot state. If snapshots exist, deal with them before declaring victory.

Task 16: Confirm lock removed and VM operations work again

cr0x@server:~$ qm config 101 | grep '^lock:' || echo "no lock set"
no lock set

What it means: Proxmox no longer blocks operations based on that lock.

Decision: Validate snapshot list, run a small operation (like qm status, qm agent ping if available), and plan a fresh backup soon.
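
A minimal post-unlock smoke test, assuming the guest agent is installed in the VM (qm agent is the older spelling; newer releases also accept qm guest cmd):

cr0x@server:~$ qm listsnapshot 101
cr0x@server:~$ qm status 101
cr0x@server:~$ qm agent 101 ping   # only meaningful if the guest agent is enabled and running

If all three come back clean, the lock was stale and the VM is usable; the remaining work is cleanup and a fresh backup.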

Storage-specific gotchas (ZFS, LVM-thin, Ceph RBD, NFS)

ZFS: “Snapshot creation was instant; deletion is taking forever”

ZFS snapshots are metadata references. Creating them is typically instant. Deleting them can require freeing a lot of blocks, especially if the VM wrote heavily after the snapshot. That “stuck backup” might be in the snapshot cleanup phase.

What to do: confirm I/O, confirm snapshot presence, and only unlock after you’ve stopped the job and ensured no ZFS send/receive or snapshot destroy is actively running.
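
A quick way to check both at once, assuming the same hypothetical dataset naming as in the tasks above:

cr0x@server:~$ ps -eo pid,stat,etime,cmd | egrep 'zfs (destroy|send|recv|receive)' | grep -v grep
cr0x@server:~$ zfs list -t snapshot -o name,used,refer,creation | grep vm-101   # adjust to your dataset names

An empty first command plus a snapshot list that stops changing argues the storage layer is quiet; a long-running zfs destroy or send argues for patience, not for qm unlock.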

LVM-thin: merges and metadata exhaustion

LVM-thin snapshots and merges are a different beast. If the thin pool is short on metadata, merges can crawl or fail. You may get a lock that feels unrelated to the actual issue.

What to do: check thin pool usage and metadata before you take actions that force merges. If metadata is tight, your correct “unlock” move might be “extend thin pool metadata” and let it complete.
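
A hedged sketch for the thin pool check; vg0 is a placeholder for your volume group name:

cr0x@server:~$ lvs -a -o lv_name,pool_lv,lv_size,data_percent,metadata_percent vg0   # vg0 is an assumption

If metadata_percent on the thin pool is in the high nineties, extending pool metadata (lvextend --poolmetadatasize) is usually the real fix; removing the lock changes nothing about a starved pool.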

Ceph RBD: locks aren’t the only locks

Ceph has its own watch/lock concepts, plus RBD snapshot and flatten operations. Proxmox may be blocked because Ceph is slow or because an RBD operation is pending on the cluster.

What to do: verify RBD status and watch for stuck operations. A Proxmox unlock does not cancel a Ceph operation.
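
A minimal sketch, assuming the disk is an RBD image named vm-101-disk-0 in a pool called vm-pool (both names are assumptions; check your storage.cfg):

cr0x@server:~$ ceph -s
cr0x@server:~$ rbd status vm-pool/vm-101-disk-0      # pool name is an assumption
cr0x@server:~$ rbd snap ls vm-pool/vm-101-disk-0

rbd status lists watchers (a running QEMU normally shows up as one), rbd snap ls shows snapshots Proxmox may or may not still track, and health warnings in ceph -s usually explain a “stuck” backup better than anything on the Proxmox side.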

NFS backup target: your backup is only as alive as your mount

A common failure mode: NFS hiccup. The vzdump process goes into D state, Proxmox keeps the lock, and your GUI shows a “running” task forever. The node might look fine. It isn’t.

What to do: treat the NFS mount as a dependency. Check mount stats, look for server timeouts, and be prepared to resolve the storage path rather than “fixing Proxmox.”
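
A few checks, assuming the backup storage is mounted under /mnt/pve/backup-nfs (path is an assumption; use findmnt to locate yours):

cr0x@server:~$ nfsstat -m
cr0x@server:~$ timeout 5 ls /mnt/pve/backup-nfs >/dev/null && echo "mount responds" || echo "mount slow or stuck"   # path is an assumption
cr0x@server:~$ dmesg -T | grep -i 'not responding' | tail -n 5

One caveat: on a hard mount with a fully dead server, the ls itself can wedge in D state and even timeout cannot reap it, which is annoying but diagnostic in its own right.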

Three corporate mini-stories from the lock trenches

Mini-story 1: The incident caused by a wrong assumption

The team had a tidy belief: “If the Proxmox GUI doesn’t show a running task, nothing is running.” It wasn’t an unreasonable assumption—until they hit a node with intermittent pmxcfs latency during a busy morning.

A VM showed “locked (snapshot)”. The on-call engineer checked the GUI task list, saw nothing obvious, and ran qm unlock. The VM started, which felt like a win. Then they deleted a snapshot that “must have been old.”

What was actually happening: a snapshot commit was still in progress at the storage layer. The management view didn’t reflect it cleanly because the worker that initiated it had died, and the cluster filesystem was slow to update task state. The delete raced the commit, and the VM disk ended up with inconsistent snapshot metadata in Proxmox and a half-finished state on storage.

The next restore test failed in a way that was painfully subtle: the VM booted, but the application data was inconsistent across tables. No obvious corruption. Just wrongness. It took longer to detect than to cause.

The lasting fix was cultural: the runbook changed from “unlock if GUI is quiet” to “unlock only after you’ve proven the worker is gone and storage isn’t mid-flight.” They also added a habit of checking tasks via CLI and verifying snapshot lists before touching anything destructive.

Mini-story 2: The optimization that backfired

A platform team wanted faster backups. They moved from stop-mode backups to snapshot-mode backups everywhere, enabled compression, and kicked all jobs off at the same time “to finish before business hours.” On paper it was clean: less downtime, faster backups, happier people.

In reality, snapshot-mode backups shifted the load rather than eliminating it. The backup window now hammered storage and network concurrently. On the worst mornings, Ceph latency spiked, guest-agent freezes timed out on a few VMs, and several backups stalled mid-snapshot cleanup.

By 9 a.m., a handful of VMs were locked. The knee-jerk reaction was to unlock them so teams could deploy. That “fix” worked—until the backup system started reporting unusable archives because the snapshot chain was a mess.

The backfiring part wasn’t snapshot-mode itself. It was the concurrency. They treated backups as CPU jobs when they were storage jobs. After they staggered schedules, limited parallelism per storage backend, and set explicit timeouts/alerts for “lock older than X,” the lock incidents almost disappeared.

Mini-story 3: The boring but correct practice that saved the day

A finance-adjacent environment had exactly one virtue: discipline. Every change required a ticket. Every ticket required a pre-check. And every pre-check required verifying backups and snapshot state.

One day, a VM showed “locked (backup).” The on-call engineer didn’t unlock it. They pulled up the task log, saw a backup stuck in D state, and escalated to storage. Storage found an NFS server doing a kernel update with an overly optimistic reboot plan.

Once NFS recovered, the vzdump process finished, the lock cleared automatically, and the snapshot disappeared as expected. No manual unlock needed. The change window moved, people complained, and the VM data stayed intact.

That story doesn’t sound heroic because it isn’t. It’s what you want: boring, repeatable operations that protect you from your own urgency.

Checklists / step-by-step plans

Plan A: The safe default (most cases)

  1. Identify the lock: qm config <vmid> | grep '^lock:'
  2. Find the task: query recent tasks on the node; locate the UPID for vzdump/snapshot/migrate/replicate.
  3. Read the task log: if it’s still logging and progressing, wait.
  4. Validate process state: confirm whether a worker process exists; check for D state.
  5. Validate storage health: ZFS/Ceph/LVM/NFS checks appropriate to the backend.
  6. If progressing: wait and communicate.
  7. If stuck due to dependency: fix the dependency (storage/network), not the lock.
  8. If task is dead and storage is stable: unlock with qm unlock.
  9. Post-unlock validation: check snapshots; start VM; run a small I/O validation; schedule a new backup.

Plan B: You must intervene (stuck job, business impact)

  1. Confirm the VM is not in the middle of migration/replication. If it is, stop and reassess.
  2. Confirm no backup process is actively writing output (temp file growth, storage bandwidth).
  3. Try a gentle cancel/termination of the worker process or task (not kill -9 first).
  4. If process is D state: fix I/O path. Killing won’t reliably work until the kernel call returns.
  5. After the worker is gone and storage is stable, unlock with qm unlock <vmid>.
  6. Clean up snapshot artifacts (delete leftover snapshots only after you know no merge/commit is running).
  7. Document what happened. You’ll forget the details faster than you think.

Plan C: pmxcfs or cluster state is broken (rare, spicy)

  1. Confirm cluster quorum and pmxcfs health on the node.
  2. Avoid editing /etc/pve/ directly unless you understand the blast radius.
  3. Restore cluster health first. If config state can’t be written, unlock commands may “succeed” in spirit but not in reality.
  4. Once pmxcfs is stable, re-run the normal workflow.

Joke #2: “Just unlock it” is the virtualization equivalent of “just reboot prod”—sometimes correct, often career-limiting.

Common mistakes: symptom → root cause → fix

1) “Lock won’t clear, but backup looks finished”

Symptom: vzdump archive exists; GUI still says locked (backup).

Root cause: task failed during cleanup or post-backup steps; lock line wasn’t removed due to worker crash or pmxcfs write failure.

Fix: verify no job is running; confirm snapshot list is clean; run qm unlock; then validate backups with a restore test (not just “file exists”).

2) “I unlocked it, now snapshots can’t be deleted”

Symptom: snapshot delete fails or hangs; Proxmox shows inconsistent snapshot tree.

Root cause: you unlocked while storage-layer snapshot commit/merge was still running, or you left behind storage snapshots that Proxmox no longer tracks cleanly.

Fix: re-check storage operations; for ZFS, list snapshots and assess whether destroy is in progress; for Ceph, check RBD status/operations; only then remove or reconcile snapshots.

3) “Backup task is running forever and can’t be killed”

Symptom: process shows D state; signals don’t work.

Root cause: blocked I/O (NFS server unreachable, disk path issues, hung Ceph, or kernel waiting on storage).

Fix: fix the I/O dependency. If NFS, restore connectivity or remount after stabilizing; if Ceph, fix cluster health; if local disks, check hardware and dmesg. Then let the process return and exit.

4) “VM locked (snapshot) after guest-agent freeze”

Symptom: snapshot operation started; VM stays locked; guest appears frozen.

Root cause: QEMU guest agent didn’t respond to thaw, or freeze timed out mid-workflow.

Fix: check agent status inside the VM and on Proxmox; consider disabling filesystem freeze for that workload or fixing agent service; only unlock after you confirm no snapshot merge is pending.

5) “Locks keep happening every night at the same time”

Symptom: recurring lock incidents during backup window.

Root cause: too much concurrency, backup target saturation, or periodic storage maintenance colliding with backups.

Fix: stagger jobs, set realistic bandwidth limits, reduce parallel backups per node/storage, and add alerting for tasks exceeding expected runtime.

6) “qm unlock works, but GUI still shows locked”

Symptom: CLI claims success; UI still blocks operations.

Root cause: pmxcfs/cluster state propagation delay or node GUI caching; sometimes you unlocked on a non-authoritative node during a cluster hiccup.

Fix: verify the config file on the cluster filesystem, confirm quorum, and re-check from the node hosting the VM. If cluster is unhealthy, fix that first.
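
Two quick checks, assuming the VM lives on node pve1. /etc/pve/qemu-server on each node is just a view into /etc/pve/nodes/<node>/qemu-server, so the second command shows what the cluster as a whole believes about that VM:

cr0x@server:~$ pvecm status | grep -iE 'quorate|expected|total'
cr0x@server:~$ grep -n '^lock:' /etc/pve/nodes/pve1/qemu-server/101.conf || echo "no lock in cluster config"

If the node is not quorate, stop chasing the lock and fix the cluster; pmxcfs goes read-only without quorum and nothing you “remove” will stick.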

FAQ (real questions people ask at 2 a.m.)

1) Is it always safe to run qm unlock <vmid>?

No. It’s safe when the lock is stale: no worker running, no storage operation in flight, and you’ve verified snapshots/backup state. Otherwise it’s roulette with better branding.

2) What’s the difference between “locked (backup)” and “locked (snapshot)”?

“backup” typically means a backup workflow (often snapshot-based) is active. “snapshot” usually means a snapshot operation itself is active (create/delete/rollback). Either can involve storage-layer snapshots.

3) Can I just delete the lock: line from /etc/pve/qemu-server/<vmid>.conf?

You can, but you shouldn’t unless you have to. Use qm unlock so Proxmox does the right internal bookkeeping. Manual edits are for when management services are broken and you have an incident plan.

4) The task says “running” but there’s no process. What now?

Assume the worker died. Verify no storage activity is ongoing and check whether snapshots exist. If it’s truly dead and stable, unlock and then clean up snapshots carefully.

5) Why does a backup lock block a reboot or shutdown?

Because snapshot-mode backups rely on consistent disk state, and reboot/shutdown can interrupt snapshot creation/commit. Proxmox blocks to keep the storage workflow coherent.

6) How do I know if a backup is “making progress”?

Check task logs for ongoing output, check temp files growing, and check storage I/O stats. If everything is flat for a long time and you see D state, suspect blocked I/O.

7) If I unlock, will it cancel the backup/snapshot?

No. Unlocking only removes Proxmox’s guardrail. The underlying process (if still running) will keep running, and storage operations might still be in progress. That’s why unlocking early is dangerous.

8) What should I do after unlocking a stale lock?

Validate snapshot list (qm listsnapshot), ensure VM starts, check storage health, and schedule a new backup soon. Also review why the task got stuck so it doesn’t recur.

9) Can a cluster quorum issue cause stuck locks?

Yes. If pmxcfs can’t commit config updates due to quorum or filesystem issues, tasks may not record completion properly. Fix cluster health first; otherwise you’ll fight ghosts.

10) What’s the fastest “I’m about to do something risky” sanity check?

Find the UPID, read its log, and confirm whether a related process exists. If you can’t tie the lock to a dead task, you’re not ready to unlock.

Next steps you can apply today

When Proxmox says “VM is locked (backup/snapshot),” treat it like a safety interlock, not an inconvenience. Your job isn’t to clear the message; your job is to keep state consistent.

Do this next, in order:

  1. Operationalize the fast diagnosis playbook: task log → process state → storage state.
  2. Standardize “when unlock is allowed”: no running worker, no ongoing storage ops, snapshot list understood.
  3. Reduce recurrence: stagger backups, cap concurrency per storage backend, and alert on tasks exceeding expected runtime.
  4. Practice restores: the real measure of backup success is a restore test, not a file on disk.

One quote to keep you honest—paraphrased idea from Richard Cook (resilience engineering): Success and failure often come from the same everyday work; reliability is how you manage that work.
