Proxmox “qemu-img: Could not create”: permissions, paths, and filesystem fixes that actually work

It always happens when you’re feeling good. You click Create VM or add a disk, and Proxmox responds with a deadpan: qemu-img: Could not create. No disk. No VM. Just vibes.

This error is rarely “a QEMU problem.” It’s your storage stack telling you something is off: a path that doesn’t exist, a mount that isn’t mounted, a filesystem that’s read-only, permissions that don’t match Proxmox’s expectations, or a thin pool that ran out of metadata. The fix is usually boring. The trick is finding which boring thing it is fast.

What the error really means (and what it doesn’t)

When Proxmox creates a VM disk, it usually ends up calling qemu-img (for file-based images like qcow2/raw), or it asks storage tooling to create a block device (ZFS zvol, LVM logical volume, Ceph RBD). If that creation step fails, you’ll often see:

  • “qemu-img: Could not create …”
  • “Permission denied” (explicit, if you’re lucky)
  • “No such file or directory” (path or mount problem)
  • “Read-only file system” (filesystem or NFS export mounted ro)
  • “No space left on device” (data blocks, metadata blocks, inodes, thin pool metadata, or quota)

Here’s what it usually is not:

  • Not a “random Proxmox bug.” This is almost always deterministic.
  • Not fixed by restarting pvedaemon “just because.” Restarting can make you feel productive, though.
  • Not solved by chmod 777 on everything (unless your goal is to create future incidents).

The practical mindset: treat it like a storage write failure. Your job is to answer three questions:

  1. Where is Proxmox trying to create the disk?
  2. Who is writing it (process/user/uid mapping), and does it have rights?
  3. What is the backing store state (mounted, writable, healthy, has space, supports features)?

One quote that holds up in ops: “Hope is not a strategy.” — General Gordon R. Sullivan. When qemu-img fails, hope is especially expensive.

Joke #1: Storage is the only department where “I can’t create a file” is considered a detailed error message.

Fast diagnosis playbook (check first/second/third)

First: confirm the target storage and the exact path/device

From the Proxmox task log, find the storage ID and the path it tried to create. If you’re on directory storage, you’ll see something like /mnt/pve/fastssd/images/101/vm-101-disk-0.qcow2. If it’s ZFS/LVM, you’ll see a zvol or LV name.

Second: validate the mount and writability in one minute

Most “Could not create” failures are a missing mount, a mount that’s actually the underlying empty directory, or a filesystem flipped to read-only due to errors. You can detect all three quickly with findmnt, mount, and a tiny write test.
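A minimal sketch of that one-minute check, assuming a directory storage mounted at /mnt/pve/fastssd (substitute your own path):

cr0x@server:~$ findmnt /mnt/pve/fastssd -o TARGET,SOURCE,FSTYPE,OPTIONS

cr0x@server:~$ touch /mnt/pve/fastssd/.pve-write-test && rm /mnt/pve/fastssd/.pve-write-test && echo WRITABLE

No output from findmnt means nothing is mounted there; a failed touch tells you in its error text whether the problem is read-only, permissions, or space.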

Third: validate space the right way (data, inodes, thin metadata, quotas)

Disk-full is not one thing. It’s: blocks, inodes, ZFS quota/refquota, LVM thin metadata, project quotas, NFS user quotas. Check the relevant counters for the storage type.

Fourth: check permissions and identity (UID/GID, root_squash, ACLs)

On local filesystems, Proxmox typically writes as root (via pvedaemon/pveproxy calling tools). On NFS/CIFS it can get weird: root becomes nobody, ACL inheritance does something “helpful,” or SMB maps you to a guest user. When it smells like permissions, verify with namei, stat, and a write test from the host.

Fifth: check the storage plugin config and content types

Misconfigured content types (e.g., “VZDump backup only” but you try to store images), wrong path, wrong pool name, or stale storage config on a cluster node will all surface as creation failures.

Interesting facts and context (why this keeps happening)

  • Fact 1: qemu-img predates Proxmox and is used broadly across virtualization stacks. Proxmox is often just the messenger.
  • Fact 2: Proxmox’s “Directory” storage is deceptively simple: it’s just a filesystem path, which means mounts and permissions are your problem by design.
  • Fact 3: NFS “root_squash” exists specifically to prevent remote root from being root on the server. It’s also a top-3 reason VM disk creation fails on NFS.
  • Fact 4: LVM-thin can be out of metadata while still having plenty of data space. The error you see often lies about what’s full.
  • Fact 5: ZFS can fail creates due to quotas, reservation conflicts, or a pool that is technically online but out of allocatable space due to fragmentation and recordsize choices.
  • Fact 6: Ext4 and XFS behave differently when “full”: inode exhaustion is more common on ext4 in small-file workloads; XFS can hit project quota limits in surprising ways.
  • Fact 7: AppArmor can block QEMU operations even when Unix permissions look correct, especially with custom mountpoints not covered by profiles.
  • Fact 8: A missing mount can be invisible if the mountpoint directory exists. You’ll write into the local root filesystem and only notice when it fills. This has caused real outages in respectable companies.

Task-driven diagnosis: commands, outputs, decisions

Below are practical tasks. Each includes the command, a realistic output sketch, and the decision you make. Run them from the Proxmox host that is failing, not from your laptop, not from “the other node that works.”

Task 1: Read the Proxmox task log for the real target

cr0x@server:~$ tail -n 60 /var/log/pve/tasks/active
UPID:pve1:0000A1B2:0001C3D4:676D1A2B:qmcreate:101:root@pam:
error: unable to create VM 101 - qemu-img: Could not create '/mnt/pve/fastssd/images/101/vm-101-disk-0.qcow2': Permission denied

What it means: The failing path is explicit. That’s gold. If the message doesn’t show the path, look at the full task log in the GUI or /var/log/pve/tasks.

Decision: Copy the exact path/storage ID. Everything else depends on it.

Task 2: Confirm the storage definition Proxmox is using

cr0x@server:~$ pvesm status
Name     Type     Status     Total      Used      Available  %
local    dir      active     98.00GB    12.10GB    80.90GB   12.35%
fastssd  dir      active     1.80TB     1.22TB     0.58TB    67.78%
zfspool  zfspool  active     3.50TB     2.10TB     1.40TB    60.00%

What it means: Proxmox thinks fastssd is active. That does not guarantee it’s correctly mounted or writable.

Decision: Keep going. “Active” is necessary, not sufficient.

Task 3: Verify the mount is real (not an empty directory)

cr0x@server:~$ findmnt -T /mnt/pve/fastssd
TARGET           SOURCE        FSTYPE OPTIONS
/mnt/pve/fastssd /dev/sdb1     ext4   rw,relatime,errors=remount-ro

What it means: Good: it’s a real mount backed by /dev/sdb1. If the TARGET shown were / (or anything other than /mnt/pve/fastssd), nothing is mounted there and you’d be writing to a plain directory on the root filesystem.

Decision: If it’s not mounted, fix fstab/systemd mount and remount before doing anything else.
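A hedged /etc/fstab sketch using a stable UUID (the UUID below is a placeholder; take the real one from blkid /dev/sdb1), followed by the remount and re-check:

# /etc/fstab entry for the fastssd directory storage (the UUID is an example placeholder)
UUID=3f1c2a9e-aaaa-bbbb-cccc-1234567890ab  /mnt/pve/fastssd  ext4  defaults  0  2

cr0x@server:~$ mount /mnt/pve/fastssd && findmnt /mnt/pve/fastssd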

Task 4: Check whether it silently flipped read-only

cr0x@server:~$ mount | grep -E '/mnt/pve/fastssd| / '
/dev/sdb1 on /mnt/pve/fastssd type ext4 (ro,relatime,errors=remount-ro)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)

What it means: The storage is mounted ro. Proxmox will fail to create anything, and the error might show “Permission denied” or “Read-only file system” depending on the code path.

Decision: Stop trying to create disks. Investigate the filesystem error cause, run fsck (if ext4), or fix the underlying issue, then remount rw.
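A hedged repair sequence for the ext4 case, assuming /dev/sdb1 is the backing device and nothing is actively using this storage:

cr0x@server:~$ journalctl -k | grep -iE 'ext4.*(error|remount)' | tail -n 5

cr0x@server:~$ umount /mnt/pve/fastssd

cr0x@server:~$ fsck.ext4 -f /dev/sdb1

cr0x@server:~$ mount /mnt/pve/fastssd && touch /mnt/pve/fastssd/.pve-write-test && rm /mnt/pve/fastssd/.pve-write-test && echo RW

If fsck keeps finding errors, suspect the disk itself (smartctl, cabling, controller) before trusting the filesystem again.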

Task 5: Do a minimal write test as the host (the only truth)

cr0x@server:~$ touch /mnt/pve/fastssd/.pve-write-test && echo OK
touch: cannot touch '/mnt/pve/fastssd/.pve-write-test': Read-only file system

What it means: This is not a Proxmox permissions problem. This is “filesystem is not writable.”

Decision: Fix mount/FS state first. Permissions are irrelevant until writes succeed.

Task 6: Check space and inodes (yes, both)

cr0x@server:~$ df -h /mnt/pve/fastssd
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       1.8T  1.8T     0 100% /mnt/pve/fastssd

cr0x@server:~$ df -i /mnt/pve/fastssd
Filesystem      Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb1      122M     1.1M    121M    1% /mnt/pve/fastssd

What it means: Blocks are full, inodes are fine. If blocks were fine but inodes were 100%, you’d see similar creation failures.

Decision: Free space or expand the filesystem. Don’t “just try again.” It won’t get better.
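Before expanding, it’s worth seeing what is actually eating the space; a quick sketch that stays on this one filesystem:

cr0x@server:~$ du -xh --max-depth=2 /mnt/pve/fastssd 2>/dev/null | sort -h | tail -n 10

Orphaned disk images from failed creates and stale backup archives are the usual suspects on directory storage.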

Task 7: Confirm directory ownership and traverse permissions on the full path

cr0x@server:~$ namei -l /mnt/pve/fastssd/images/101
f: /mnt/pve/fastssd/images/101
drwxr-xr-x root root /
drwxr-xr-x root root mnt
drwxr-xr-x root root pve
drwxr-x--- root root fastssd
drwxr-xr-x root root images
drwx------ 100000 100000 101

What it means: The VM directory is owned by UID 100000, which is common when this is a shared mount created by something else or when user namespace mappings are in play. Also note fastssd is drwxr-x---; only root and group root can traverse.

Decision: If Proxmox is writing as root, root can traverse; but if any component drops privileges or uses mapped IDs, you may need to fix ownership/ACLs. If this is NFS with root_squash, root won’t be root.

Task 8: Reproduce as “root” and interpret permission errors properly

cr0x@server:~$ sudo -i
root@server:~# touch /mnt/pve/fastssd/images/101/.root-test
touch: cannot touch '/mnt/pve/fastssd/images/101/.root-test': Permission denied

What it means: If even local root can’t write, then either:

  • it’s a remote filesystem mapping root to nobody (NFS root_squash),
  • ACLs deny write,
  • immutable attributes exist,
  • or the filesystem is mounted with restrictive options.

Decision: For NFS/CIFS, go inspect export/mount options. For local FS, check ACLs and attributes.

Task 9: Inspect ACLs and immutable flags when permissions “look fine”

cr0x@server:~$ getfacl -p /mnt/pve/fastssd/images/101 | sed -n '1,25p'
# file: /mnt/pve/fastssd/images/101
# owner: 100000
# group: 100000
user::rwx
group::---
other::---

cr0x@server:~$ lsattr -d /mnt/pve/fastssd/images/101
-------------------P-- /mnt/pve/fastssd/images/101

What it means: ACLs here deny group/other completely. That’s fine if the writer matches owner. But if NFS squashes root to nobody, it won’t match. Attributes look normal (no i immutable flag).

Decision: Align ownership with the effective writer identity, or adjust export options and mount semantics.
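If this is a local filesystem and the effective writer really is root, a hedged fix is to hand the VMID directory back to root and retest (don’t do this blindly on NFS, where the mapped identity is the real problem):

cr0x@server:~$ chown root:root /mnt/pve/fastssd/images/101

cr0x@server:~$ chmod 0755 /mnt/pve/fastssd/images/101

cr0x@server:~$ touch /mnt/pve/fastssd/images/101/.pve-write-test && rm /mnt/pve/fastssd/images/101/.pve-write-test && echo OK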

Task 10: Check NFS mount options and root squashing symptoms

cr0x@server:~$ findmnt -T /mnt/pve/nfs-store -o TARGET,SOURCE,FSTYPE,OPTIONS
TARGET            SOURCE                    FSTYPE OPTIONS
/mnt/pve/nfs-store 10.0.20.10:/exports/pve nfs4   rw,relatime,vers=4.1,sec=sys,clientaddr=10.0.20.21

cr0x@server:~$ sudo -i
root@server:~# touch /mnt/pve/nfs-store/.nfs-root-test
touch: cannot touch '/mnt/pve/nfs-store/.nfs-root-test': Permission denied

root@server:~# stat -c '%u %g %n' /mnt/pve/nfs-store
65534 65534 /mnt/pve/nfs-store

What it means: UID/GID 65534 is the classic “nobody/nogroup” mapping. This strongly suggests root_squash or anonymous mapping on the NFS server.

Decision: Fix NFS exports: either set correct ownership on the export to match the mapped UID, or adjust export options (carefully) so Proxmox can write.
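On the NFS server, the export usually ends up looking like one of these two hedged sketches in /etc/exports (paths, subnet, and IDs are examples; Option B trades security for convenience):

# Option A: keep root_squash, make the anonymous identity the owner of the export
/exports/pve 10.0.20.0/24(rw,sync,root_squash,anonuid=65534,anongid=65534)
#   ...and on the server: chown 65534:65534 /exports/pve

# Option B: let root from the Proxmox hosts stay root (bigger blast radius)
/exports/pve 10.0.20.0/24(rw,sync,no_root_squash,no_subtree_check)

# apply the change on the server:
#   exportfs -ra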

Task 11: Check LVM-thin health (data vs metadata)

cr0x@server:~$ lvs -a -o +devices,lv_size,data_percent,metadata_percent,lv_attr vg0
  LV               VG  Attr       LSize   Data%  Meta%  Devices
  thinpool         vg0 twi-aotz--  900.00g  72.10  99.85 /dev/sdc3(0)
  thinpool_tmeta   vg0 ewi-aotz--    8.00g                 /dev/sdc3(230400)
  thinpool_tdata   vg0 ewi-aotz--  900.00g                 /dev/sdc3(4096)

What it means: Data is 72%, metadata is basically full. Creation will fail even though “space” looks available in a casual glance.

Decision: Extend thinpool metadata (and consider monitoring). Don’t keep retrying VM creates; you’ll just churn logs.
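A hedged sketch for vg0/thinpool, assuming the VG still has free extents, plus the lvm.conf knobs that let dmeventd grow the pool before it fills:

cr0x@server:~$ vgs vg0

cr0x@server:~$ lvextend --poolmetadatasize +2G vg0/thinpool

cr0x@server:~$ lvs -o lv_name,data_percent,metadata_percent vg0

# /etc/lvm/lvm.conf (activation section):
#   thin_pool_autoextend_threshold = 80
#   thin_pool_autoextend_percent = 20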

Task 12: Check ZFS pool health and quota constraints

cr0x@server:~$ zpool status -x
all pools are healthy

cr0x@server:~$ zfs list -o name,used,avail,refer,mountpoint
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool             120G   80G    96K   /rpool
rpool/data        110G   80G    96K   /rpool/data
rpool/data/vm-101-disk-0  50G  0B     50G   -

cr0x@server:~$ zfs get -H -o property,value quota,refquota rpool/data
quota    none
refquota none

What it means: Don’t over-read the zvol’s own AVAIL figure; for volumes that column can be misleading. The signals that matter are pool/dataset availability and quotas/reservations: if AVAIL on the parent dataset is low, or quotas/reservations are set, zvol creation can fail.

Decision: If pool is healthy but AVAIL is low or quotas exist, free space or adjust quota/refquota/reservation. Also verify ashift and fragmentation if you’re constantly riding the edge.
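A hedged look at the usual capacity suspects on rpool/data (dataset and pool names are from this example; loosening a quota should be a deliberate capacity decision, not a reflex):

cr0x@server:~$ zfs get -o name,property,value quota,refquota,reservation,refreservation rpool/data

cr0x@server:~$ zpool list -o name,size,alloc,free,frag,cap rpool

cr0x@server:~$ zfs set quota=none rpool/data
# or a larger explicit value instead of none, e.g. quota=2T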

Task 13: Verify Proxmox storage content types (a sneaky “permission-like” failure)

cr0x@server:~$ grep -A4 -n '^dir: fastssd' /etc/pve/storage.cfg
12:dir: fastssd
13-        path /mnt/pve/fastssd
14-        content vztmpl,backup
15-        prune-backups keep-last=3

What it means: This storage is configured for templates and backups only, not VM images. Creating a VM disk there will fail, sometimes with confusing messaging depending on the workflow.

Decision: Add images content type (and possibly iso) if this storage is intended for VM disks.
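The same change from the CLI, assuming this storage is meant to hold VM disks in addition to what it already stores (pvesm rewrites storage.cfg for you):

cr0x@server:~$ pvesm set fastssd --content images,vztmpl,backup

cr0x@server:~$ grep -A4 '^dir: fastssd' /etc/pve/storage.cfg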

Task 14: Validate that the storage path exists on this node (cluster drift)

cr0x@server:~$ ls -ld /mnt/pve/fastssd
ls: cannot access '/mnt/pve/fastssd': No such file or directory

cr0x@server:~$ pvecm status | sed -n '1,20p'
Cluster information
-------------------
Name:             prod-cluster
Config Version:   42
Transport:        knet
Secure auth:      on

What it means: In a cluster, /etc/pve/storage.cfg is shared, but mounts are not magically shared. A node can have the storage configured but not actually present.

Decision: Create/mount the path consistently across nodes, or restrict storage to the nodes that have it.
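If the storage is genuinely local to only some nodes, a hedged example of restricting it so the other nodes stop offering it (node names are placeholders):

cr0x@server:~$ pvesm set fastssd --nodes pve1,pve2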

Task 15: Check for AppArmor blocking QEMU on odd mountpoints

cr0x@server:~$ dmesg | tail -n 20
[12345.678901] audit: type=1400 audit(1735212345.123:456): apparmor="DENIED" operation="open" profile="libvirt-2d7c3a1e-..." name="/mnt/pve/fastssd/images/101/vm-101-disk-0.qcow2" pid=3210 comm="qemu-img" requested_mask="wc" denied_mask="wc" fsuid=0 ouid=0

cr0x@server:~$ aa-status | sed -n '1,20p'
apparmor module is loaded.
45 profiles are loaded.

What it means: Unix permissions can be perfect and you’ll still get “permission denied” if AppArmor blocks access. The dmesg line is the smoking gun.

Decision: Adjust the relevant profile or ensure you’re using expected storage paths. “Disable AppArmor” is a blunt instrument; use it only as a short diagnostic step, not a lifestyle.

Task 16: Verify CIFS/SMB mounts aren’t lying about permissions

cr0x@server:~$ findmnt -T /mnt/pve/smb-store -o TARGET,SOURCE,FSTYPE,OPTIONS
TARGET             SOURCE                       FSTYPE OPTIONS
/mnt/pve/smb-store  //nas01/proxmox             cifs   rw,relatime,vers=3.0,uid=0,gid=0,dir_mode=0755,file_mode=0644,nounix

cr0x@server:~$ sudo -i
root@server:~# touch /mnt/pve/smb-store/.smb-test && echo OK
touch: cannot touch '/mnt/pve/smb-store/.smb-test': Permission denied

What it means: CIFS can present permissions that look permissive locally but are rejected by the server. The server-side share ACL or user mapping is what matters.

Decision: Fix share permissions and credentials; consider NFS for VM images if you want fewer semantic surprises.

Storage-type gotchas: Directory, ZFS, LVM-thin, NFS, CIFS

Directory storage (path-backed): the simplest, and therefore the easiest to mis-mount

Directory storage is literally “write files here.” That means qemu-img creates qcow2/raw files under path/images/<vmid>. Failures come from:

  • Mount missing; mountpoint exists as an empty directory
  • Filesystem read-only after errors
  • No space left / inode exhaustion
  • Permissions/ACL mismatch on the VMID directory or its parents
  • AppArmor denies access to non-standard mountpoints

Opinionated take: for directory storage, use a dedicated mountpoint under /mnt/pve/<storageid> and ensure it’s mounted by systemd, not “someone runs mount after reboot.” Humans are not reliable boot processes.
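A minimal systemd mount unit sketch for that approach; the unit file name must be derived from the mountpoint path, and the UUID is a placeholder:

# /etc/systemd/system/mnt-pve-fastssd.mount
[Unit]
Description=fastssd directory storage for Proxmox

[Mount]
What=/dev/disk/by-uuid/3f1c2a9e-aaaa-bbbb-cccc-1234567890ab
Where=/mnt/pve/fastssd
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target

cr0x@server:~$ systemctl daemon-reload && systemctl enable --now mnt-pve-fastssd.mount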

ZFS zvol storage: fast, consistent, and allergic to bad capacity habits

ZFS-backed VM disks are often zvols. Creation failures show up when:

  • Pool has low available space (ZFS wants breathing room; running at 95% is an invitation to weird latency)
  • Dataset quotas/refquotas/reservations prevent new volumes
  • Pool is degraded or suspended (I/O errors propagate strangely)
  • Name conflicts (a stale zvol exists from a previous failed create)

Practical advice: treat ZFS pools like living systems. Monitor space, scrub regularly, and don’t operate them like “just another ext4 disk.”

LVM-thin: when metadata is your real disk

LVM-thin is good, until it isn’t. The “Could not create” failure is classic when the thin pool metadata is full. Data can be 40% used, metadata at 100%, and you get hard failures.

Also watch for:

  • Thin pool in read-only mode due to errors
  • VG free space insufficient to extend metadata
  • Discard/TRIM settings interacting with underlying storage

Do not “optimize” by making metadata tiny. That’s like buying a warehouse and then building a one-drawer filing cabinet for inventory. It works until you store more than three things.

NFS: permissions theatre, starring root_squash

NFS is common for shared Proxmox storage, and it’s also a constant source of “Permission denied.” Root squashing, ACL mismatches, or exports mounted on one node but not the other are the big ones.

If you use NFS for VM images, make sure:

  • Export is designed for it (low latency, correct sync behavior)
  • UID/GID mapping is deliberate and documented
  • Mount options are consistent across nodes

CIFS/SMB: fine for ISO and backups; VM images are a gamble

You can make SMB work, but semantics differ. Locking, oplocks, permission mapping, and “looks writable but isn’t” are regular features. If you must, test qemu-img create operations under load, not just a touch.

Permissions model in Proxmox: who writes what, where

When you click buttons in the UI, the request goes through Proxmox daemons that run as root. The disk creation step is usually performed by root on the host. That means:

  • On local filesystems, Unix permissions typically aren’t the issue unless you’ve tightened them aggressively or used ACLs.
  • On NFS, root may not be root on the server. With root_squash, “root” becomes an anonymous UID, which then can’t create files in directories owned by real users.
  • On CIFS, root can be mapped to a share user that may not have write permission.
  • On clustered setups, configuration is shared but mounts aren’t. Node A can write; node B can’t; the GUI looks the same until it doesn’t.

One practical rule: always test writes from the host to the exact directory Proxmox uses. If touch fails, qemu-img will fail. If touch works but qemu-img fails, you’re now in “AppArmor / content type / feature support / plugin specifics” territory.

Joke #2: “Permission denied” is the system’s way of saying: “I heard your request and I’m choosing violence.”

Common mistakes: symptom → root cause → fix

1) Symptom: “No such file or directory” for a path under /mnt/pve/…

Root cause: The mountpoint path doesn’t exist on this node, or the storage isn’t mounted, or the directory tree (images/<vmid>) wasn’t created due to previous failures.

Fix: Create the mountpoint directory, ensure the filesystem is mounted at boot, and verify with findmnt -T. Then retry create.

2) Symptom: Proxmox shows storage “active,” but create fails instantly

Root cause: Storage status checks can be shallow; a stale mount or underlying directory can still appear “active.”

Fix: Validate the mount is real and writable: findmnt + touch test on the exact path.

3) Symptom: “Permission denied” on NFS, even as root

Root cause: NFS root_squash or anonymous UID mapping; export permissions don’t allow the mapped UID to write.

Fix: Align ownership on the server to the mapped UID/GID, or adjust export options and security model. Verify with stat showing UID/GID and a write test.

4) Symptom: “Read-only file system” suddenly on local ext4

Root cause: ext4 remounted read-only after detecting filesystem errors (often from disk issues or unsafe shutdown).

Fix: Check logs, schedule downtime, run fsck on the block device, fix underlying storage hardware issues, then remount rw.

5) Symptom: Plenty of “free space,” still can’t create on LVM-thin

Root cause: Thin pool metadata is full, or thin pool is in error state.

Fix: Check lvs metadata_percent. Extend metadata and consider enabling monitoring/autoextend.

6) Symptom: Create fails on one cluster node but works on another

Root cause: Mount or credentials differ per node; storage config is shared but mounts and secrets might not be.

Fix: Standardize mounts and options across nodes. For NFS/CIFS, ensure identical mount units and credential files.

7) Symptom: “Permission denied,” but Unix perms look correct

Root cause: AppArmor denial, SELinux (less common on default Proxmox), or ACLs.

Fix: Inspect dmesg for AppArmor denies; adjust profile or storage path. Check getfacl.

8) Symptom: Fails only for qcow2, raw works (or vice versa)

Root cause: Filesystem features or mount options interfering (e.g., CIFS “nobrl”/locking behaviors), or toolchain expectations.

Fix: Prefer raw on block devices (zvol/LV). For network shares, test under actual workload; reconsider storage choice for VM images.

9) Symptom: “Could not create” when selecting a storage that is “backup-only”

Root cause: Storage content types don’t include images.

Fix: Update /etc/pve/storage.cfg to include images for that storage (or pick the correct storage).

10) Symptom: “No space left on device” but df shows space

Root cause: Inodes full, quota hit, project quota hit, thin metadata full, or ZFS quota/refquota/reservation.

Fix: Check the relevant counter for the storage type: df -i, ZFS quota, LVM metadata, quota tools.

Three corporate mini-stories (how this fails in real life)

Mini-story 1: The wrong assumption (active storage ≠ mounted storage)

The environment looked tidy: a small Proxmox cluster, a “fastssd” directory storage, and a handful of new application VMs needed before lunch. The engineer creating them saw fastssd marked “active” in the UI and assumed it was mounted and healthy. Reasonable assumption. Wrong assumption.

One node had rebooted the night before after a kernel update. The mount unit didn’t come back because it depended on a device path that changed. The mountpoint directory still existed, so everything looked normal. Proxmox happily tried to create disk images under /mnt/pve/fastssd—which was now just a directory on the root filesystem.

Creation failed with qemu-img: Could not create intermittently as the root filesystem filled. Meanwhile, other services started failing in unrelated ways: package updates died, logs couldn’t flush, and suddenly the incident channel had three different “root causes” proposed.

The fix was painfully simple: correct the mount unit to use a stable identifier, mount the filesystem, and clean up the stray partial images that were written to the wrong place. The real improvement was procedural: after that incident, they made “findmnt -T the target path” part of every storage triage. It prevented a repeat within weeks—because the same failure mode tried again.

Mini-story 2: The optimization that backfired (tiny thin metadata)

A different team wanted more usable space from their LVM-thin pool. They read just enough to be dangerous and decided metadata was “overhead.” So they rebuilt the thin pool with a smaller metadata volume. On day one, everything looked great: more gigabytes available, less “waste,” bonus points in the weekly infrastructure meeting.

Months later, the platform team started seeing sporadic VM provisioning failures. It wasn’t consistent. It wasn’t predictable. And the error was the usual insultingly vague “qemu-img could not create” path when creating disks on the thin pool.

Data usage was fine. The pool wasn’t full. But metadata was. Snapshots, clones, and churn had eaten it. When metadata hits the ceiling, thin provisioning stops being thin and starts being brittle. The systems didn’t degrade gracefully; they just stopped creating new volumes.

They fixed it by extending metadata and enabling monitoring. But the real lesson was cultural: optimizing for a single visible metric (free data space) while ignoring metadata was a classic “spreadsheet win, production loss.” Afterward they standardized pool creation sizes and alerted on metadata_percent. It wasn’t exciting, but it was stable.

Mini-story 3: The boring practice that saved the day (write tests and uniform mounts)

One enterprise environment had a rule that sounded almost childish: every node must pass a standard storage “smoke test” after reboot. It checked that each Proxmox storage mount existed, was mounted, and was writable. It even created and deleted a small file in the exact directory Proxmox uses.

During a routine maintenance window, an NFS server was upgraded. One export came back with a modified policy: root was squashed where it previously wasn’t. From a security perspective, that change was defensible. Operationally, it meant Proxmox could no longer create VM disks on that export.

The post-reboot smoke test failed immediately on multiple nodes, before anyone tried to create a VM. The team didn’t discover the issue during a customer-visible workflow. They discovered it in a controlled check, then fixed the export ownership mapping and documented it.

No drama. No late-night pages. No “it worked yesterday.” Just a boring check preventing an exciting incident. In production, boring is the premium product.

Checklists / step-by-step plan (do this, not magic)

Step-by-step: when you hit “qemu-img: Could not create”

  1. Capture the exact failing path/device from the Proxmox task log. Don’t paraphrase it.
  2. Identify the storage type: Directory, ZFS, LVM-thin, NFS, CIFS, Ceph. Use pvesm status and /etc/pve/storage.cfg.
  3. Mount sanity (directory/NFS/CIFS):
    • findmnt -T <path> must show the storage’s own mountpoint with the expected source and fstype (if it falls back to /, nothing is mounted there).
    • mount must show it as rw (unless intentionally ro).
  4. Write test on the exact directory:
    • touch then delete. If it fails, stop and fix that first.
  5. Capacity sanity:
    • Directory: df -h and df -i
    • LVM-thin: lvs data_percent + metadata_percent
    • ZFS: zfs list and zfs get quota,refquota,reservation
  6. Permissions sanity:
    • namei -l on the full path
    • getfacl if ACLs are involved
    • On NFS, confirm UID mapping with stat
  7. Security layer sanity:
    • dmesg for AppArmor denies
    • aa-status to confirm profiles are active
  8. Proxmox config sanity:
    • Storage content types include images
    • Storage is enabled on the node that is creating the VM
  9. Retry once after fixing the root cause. If it fails again, capture logs and escalate methodically. No click-spamming.

Operational checklist: prevent repeats

  • Mounts defined with stable identifiers (UUIDs or /dev/disk/by-id), not volatile /dev/sdX paths.
  • Systemd mount units or fstab entries tested across reboot.
  • Alerting on: filesystem read-only events, low space, inode exhaustion, LVM thin metadata, ZFS pool capacity.
  • Consistent mount options across cluster nodes for shared storage.
  • A tiny post-reboot storage smoke test (mount + write + delete) for every Proxmox storage path; see the sketch after this checklist.
  • Documented NFS export policies: root_squash, anonuid/anongid, ownership model.
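A minimal sketch of that smoke test, assuming directory-style storages under /mnt/pve; adjust the path list to your own storage IDs:

#!/bin/bash
# pve-storage-smoketest.sh - check each storage path is a real mount and is writable
set -u
paths="/mnt/pve/fastssd /mnt/pve/nfs-store"   # adjust to your environment
rc=0
for p in $paths; do
    if ! mountpoint -q "$p"; then
        echo "FAIL: $p is not a mountpoint"
        rc=1
        continue
    fi
    t="$p/.smoketest.$$"
    if touch "$t" 2>/dev/null && rm -f "$t"; then
        echo "OK:   $p mounted and writable"
    else
        echo "FAIL: $p mounted but not writable"
        rc=1
    fi
done
exit $rc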

FAQ

1) Why does Proxmox say “qemu-img could not create” when the real issue is NFS permissions?

Because qemu-img is the tool attempting the create. NFS rejects the write, qemu-img reports the failure. Always correlate with mount options and UID mapping.

2) Storage shows “active” in Proxmox—doesn’t that mean it’s mounted and writable?

No. It usually means Proxmox’s storage plugin can see the path or the backend responds. A missing mountpoint directory can still “exist” and fool shallow checks.

3) What’s the fastest way to detect a missing mount vs a permission problem?

findmnt -T <path> plus a touch <path>/.test. If findmnt shows / (or the wrong source) instead of the storage’s own mountpoint, it’s a mount problem. If touch fails, read the exact error.

4) Why can root get “Permission denied” on NFS?

Because on the server, root may be mapped to an anonymous UID (root_squash). That UID might not own the directory and might not have write permission.

5) Can I fix this by chmod 777 on the storage path?

Sometimes it “works,” and that’s exactly why it’s dangerous: it hides identity/mapping issues and expands blast radius. Fix ownership, ACLs, or export policy instead.

6) What if the error happens only on one node in a cluster?

Assume node-specific mounts or credentials. Validate the same storage path on that node with findmnt and write tests. Cluster config is shared; node mounts aren’t.

7) How do I tell if it’s LVM-thin metadata and not disk space?

Check lvs and look at metadata_percent. If it’s near 100%, you’ve found your villain.

8) When should I suspect AppArmor?

When permissions and write tests work, but qemu-img still fails—or when dmesg shows AppArmor “DENIED” lines referencing the disk path.

9) Is CIFS/SMB a good idea for VM disks?

It can work, but it’s less predictable than NFS for VM images due to permission and locking semantics. Many teams use SMB for ISOs/backups and keep VM disks on block or NFS.

10) What’s the cleanest way to prevent “missing mount writes to root filesystem”?

Use systemd automounts or strict mount units and consider making the mountpoint non-writable until mounted. At minimum, alert on unexpected growth in the root filesystem.
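Two hedged ways to make this failure loud instead of silent: mark the underlying directory immutable while it’s unmounted, and tell Proxmox the path must be a real mountpoint (storage ID and path are from this article’s examples):

cr0x@server:~$ umount /mnt/pve/fastssd        # only if currently mounted; the flag must land on the bare directory

cr0x@server:~$ chattr +i /mnt/pve/fastssd     # even root can no longer create files in the unmounted directory

cr0x@server:~$ mount /mnt/pve/fastssd         # the filesystem mounted on top behaves normally

cr0x@server:~$ pvesm set fastssd --is_mountpoint yes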

Conclusion: next steps that prevent the rerun

The “qemu-img: Could not create” error isn’t mysterious. It’s repetitive. That’s good news: repetitive problems get standardized fixes.

Do this next, in order:

  1. Take one failing path from the task log and verify it with findmnt and a write test.
  2. Check capacity the right way for your storage type (blocks, inodes, thin metadata, ZFS quotas).
  3. Fix identity and permissions with intent: NFS root_squash mapping, CIFS share ACLs, local ACLs, and AppArmor profiles.
  4. Implement the boring prevention: post-reboot storage smoke tests and alerts on read-only remounts and metadata exhaustion.

Once you’ve done that, the next time Proxmox throws “Could not create,” you’ll spend five minutes diagnosing it instead of an hour negotiating with your own assumptions.
