Windows Setup is running, the VM boots fine from the ISO, and then: “Where do you want to install Windows?” shows an empty list. Or your already-installed Windows VM suddenly reports no boot device after you “just” switched to VirtIO because someone said it’s faster. This is the part where people panic, reboot three times, and start blaming ZFS, the RAID card, or the moon.
The truth is usually boring: Windows can’t talk to the virtual storage controller you presented. You fix it by presenting the right controller and installing the right VirtIO driver at the right time, in the right Windows environment (Setup vs installed OS). Do it cleanly and you’ll get fast storage, fewer weird interrupts, and predictable behavior under load.
Fast diagnosis playbook
If you want the 80/20, here it is. Run this top-down. Stop as soon as you find the mismatch.
First: are you in Windows Setup or inside an installed Windows?
- Windows Setup (installer): you must click “Load driver” and point it at the VirtIO ISO, or use a controller Windows already knows (SATA) temporarily.
- Already installed Windows: you need the VirtIO drivers installed before switching the disk/controller, or you risk INACCESSIBLE_BOOT_DEVICE.
Second: what controller did you choose in Proxmox?
- VirtIO SCSI (recommended), using virtio-scsi-single in Proxmox: needs the vioscsi driver.
- VirtIO Block: needs the viostor driver.
- SATA: Windows sees it without VirtIO drivers; it’s slower and less “cloud-native,” but great as a temporary bootstrap.
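Not sure which controller the VM actually has? A quick check on the Proxmox host settles it; this is a minimal sketch using the VM ID from the examples later in this article:
# Show the controller model and every disk/CD bus entry for VM 104
cr0x@server:~$ qm config 104 | grep -E 'scsihw|^(scsi|virtio|sata|ide)[0-9]+:'
A scsihw of virtio-scsi-single plus a scsiN disk means you need vioscsi; a virtioN disk means viostor; a sataN disk needs no VirtIO driver to be visible.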
Third: is the VirtIO ISO mounted and visible to the VM?
- Mount virtio-win.iso as a second CD/DVD drive in Proxmox.
- In Setup’s “Load driver,” browse to the correct folder for your Windows version (usually \vioscsi\w11\amd64, \vioscsi\w10\amd64, or Server equivalents).
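If the ISO isn’t attached yet, you can do it from the host shell instead of the GUI. A minimal sketch, assuming the ISO sits on the local storage under the standard path shown in Task 4 and that the ide3 slot is free:
# Attach virtio-win.iso as a second CD-ROM on the ide3 slot of VM 104
cr0x@server:~$ qm set 104 --ide3 local:iso/virtio-win.iso,media=cdrom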
Fourth: secure boot and driver signing gotchas
- Windows 11 + Secure Boot: current VirtIO drivers are signed, but older ISOs can bite you. Use a recent VirtIO ISO and avoid “random ISO you found in a dusty share.”
- If you’re using UEFI/OVMF, keep it consistent. Switching BIOS type mid-flight changes boot mechanics.
Fifth: confirm Proxmox is actually presenting a disk
- Yes, it sounds obvious. Still check the VM config and Proxmox storage state. A missing or locked volume looks exactly like “no disk” from the guest perspective.
Joke #1: The Windows installer is like a bouncer: if your storage driver isn’t on the list, your disk isn’t getting in.
Interesting facts and context (why this keeps happening)
- VirtIO started as a KVM/QEMU-era optimization: it’s a paravirtualized device model designed to avoid heavy emulation overhead.
- Windows doesn’t ship VirtIO storage drivers out of the box, which is why Linux “just works” and Windows does not.
- IDE is still around mostly for compatibility theater: it’s slow, limited, and useful mainly to boot ancient installers or as a fallback CD-ROM bus.
- VirtIO SCSI exists because “one ring to rule them all” wasn’t enough: it provides a SCSI abstraction on top of VirtIO and supports features like TRIM/UNMAP in virtualized stacks when configured correctly.
- “VirtIO Block” vs “VirtIO SCSI” is not just naming: they have different Windows drivers (viostor vs vioscsi) and different operational behaviors under some workloads.
- Microsoft’s inbox drivers favor broad compatibility, which is why SATA/AHCI works everywhere and VirtIO does not.
- Windows Setup is its own tiny OS environment: loading a driver in Setup doesn’t automatically mean your installed Windows has it staged correctly (and vice versa).
- Controller changes are “boot-critical” changes on Windows: storage drivers must be present and set to start early, or you’ll get boot failures.
- Proxmox defaults have improved over time, but a lot of blog posts are stuck in older recommendations (like using legacy models or odd BIOS settings).
What “can’t see the disk” actually means on Proxmox
“Windows can’t see the disk” is a symptom. Under the hood, one of these is happening:
- No disk is actually attached (VM config issue, storage issue, wrong VM ID, deleted volume, locked volume, or the disk is attached on a bus your firmware isn’t booting from).
- A disk is attached, but Windows has no driver for the controller (classic VirtIO situation during Setup).
- A disk is attached and a driver exists, but Windows can’t use it yet (driver not loaded in WinPE/Setup, incorrect architecture folder, driver signing issue).
- The disk is visible but not usable (offline disk policy, GPT/MBR mismatch under some boot settings, storage spaces weirdness, or stale metadata).
- The disk is visible but performance is catastrophic (wrong cache mode, no I/O thread, QEMU config mismatch, or host storage saturation).
So the plan is simple: prove what Proxmox is presenting, prove what Windows is loading, then join the two with the correct VirtIO package.
Pick the right virtual storage model (and stop improvising)
Here’s what I recommend for most production Windows VMs on Proxmox:
- Bus/controller: VirtIO SCSI with virtio-scsi-single (Proxmox GUI: SCSI Controller → “VirtIO SCSI single”).
- Disk type: SCSI disk (e.g., scsi0).
- Cache: “Write back” only if you understand the risk; otherwise the default “None,” which is typically safest. If you’re on ZFS, be especially cautious with double-caching and failure semantics.
- IO thread: Enable I/O thread for busy disks. It often reduces latency spikes under mixed load.
- Trim/Discard: Enable discard for thin-provisioned storage or SSD-backed pools when you want space reclamation (and have verified the underlying storage supports it sensibly).
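Translated into host commands, those recommendations look roughly like this. It’s a sketch, not gospel: the VM ID, the rpool storage name, and the 120G size are the ones used in this article’s examples, so adjust them to your environment:
# Set the controller model to VirtIO SCSI single
cr0x@server:~$ qm set 104 --scsihw virtio-scsi-single
# Allocate a new 120G SCSI boot disk on rpool with discard and an I/O thread
cr0x@server:~$ qm set 104 --scsi0 rpool:120,discard=on,iothread=1
Task 1 later in this article shows what the resulting qm config output should look like, which is a convenient way to verify the GUI (or your command) did what you intended.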
When would I not do VirtIO SCSI?
- Installer bootstrap: If you can’t be bothered to load drivers during Setup, use SATA for the install, then switch later (but do it safely).
- Very old Windows versions: Driver support exists, but the operational risk increases. If you’re running something old enough to be called “legacy,” you’re already making choices.
VirtIO Block can be fine, but VirtIO SCSI tends to be the “default good” in Proxmox land because it plays nicely with multiple disks, queueing, and common operational patterns.
New install: load VirtIO storage drivers during Windows Setup
This is the cleanest path: present a VirtIO-backed disk from the start, then load the driver in Setup so Windows can see it and install properly.
Step 1: Attach both ISOs
Mount your Windows ISO as CD/DVD drive 1. Mount virtio-win.iso as CD/DVD drive 2. If you only mount VirtIO and forget Windows, you’ll have a quiet moment to reflect on life choices.
Step 2: Use VirtIO SCSI single and a SCSI disk
In Proxmox VM hardware:
- SCSI Controller: VirtIO SCSI single
- Hard Disk: SCSI (scsi0)
Step 3: In Windows Setup, click “Load driver”
When you reach disk selection and see nothing:
- Click Load driver.
- Browse to the VirtIO CD drive.
- Pick the driver folder that matches your OS and architecture. Examples:
\vioscsi\w11\amd64, \vioscsi\w10\amd64, or \vioscsi\2k22\amd64 (for Windows Server 2022)
- Select the driver and continue. Your disk should appear.
Step 4: Install additional VirtIO drivers after Windows boots
Storage is the gating item, but don’t stop there. Install:
- Network (NetKVM) if you used VirtIO NIC
- Balloon driver if you use memory ballooning (some folks do, some don’t)
- QEMU Guest Agent for clean shutdowns, IP reporting, and better orchestration
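One host-side detail people forget: the QEMU Guest Agent also has to be enabled on the VM in Proxmox, not just installed inside Windows. A small sketch, again using VM 104 from the examples:
# Expose the guest agent device to the VM (takes effect on the next full stop/start)
cr0x@server:~$ qm set 104 --agent enabled=1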
Existing Windows VM: safely move from SATA/IDE to VirtIO SCSI
This is where people break things. The failure mode is predictable: Windows boots fine on SATA, you switch the disk to VirtIO SCSI, Windows bluescreens with INACCESSIBLE_BOOT_DEVICE because the boot-critical driver isn’t installed/active.
The safe approach is equally predictable: make Windows learn the new controller while still booted on the old one.
Method A (recommended): add a second VirtIO disk, install drivers, then migrate
- Keep your existing boot disk on SATA (or whatever currently works).
- Add a second disk on VirtIO SCSI (scsi1) or VirtIO Block, whichever you plan to use.
- Boot Windows. It will detect new hardware and you can point it to VirtIO drivers (or install the VirtIO driver package).
- Confirm the driver is installed and the new disk is visible in Disk Management.
- Then migrate the boot disk/controller, or clone data as needed.
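The staging disk can be tiny; its only job is to make Windows install and load the VirtIO storage driver. A sketch, assuming VM 104, the rpool storage from the other examples, and a hypothetical 1G size:
# Switch the controller model and add a small staging disk while the boot disk stays on SATA
cr0x@server:~$ qm set 104 --scsihw virtio-scsi-single --scsi1 rpool:1
Once Windows boots, shows the new disk in Disk Management, and Device Manager lists the VirtIO SCSI controller without warnings, the staging disk has done its job.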
Method B: stage the driver and flip the controller (fast, riskier)
If you must do it in one shot:
- Mount the VirtIO ISO in the VM.
- Install the VirtIO driver package inside Windows.
- Shut down, change the disk bus/controller in Proxmox, boot, and pray a little less than usual.
This works often. “Often” is not a reliability strategy. Use Method A when you’re responsible for uptime.
Joke #2: Changing Windows storage controllers without staging drivers is like swapping a car’s steering wheel at highway speed—technically possible, socially discouraged.
Practical tasks with commands (and decisions)
These are real operational tasks you can run on the Proxmox host. Each includes what the output tells you and what decision you make next. If you do these in order, you’ll usually solve the problem before your coffee cools.
Task 1: Confirm the VM hardware config (what disk bus did you actually present?)
cr0x@server:~$ qm config 104
boot: order=scsi0;ide2;net0
cores: 4
memory: 8192
name: win11-prod
net0: virtio=DE:AD:BE:EF:10:40,bridge=vmbr0
scsihw: virtio-scsi-single
scsi0: rpool:vm-104-disk-0,discard=on,iothread=1,size=120G
ide2: local:iso/Win11_23H2.iso,media=cdrom
ide3: local:iso/virtio-win.iso,media=cdrom
What it means: This VM uses VirtIO SCSI single and the boot disk is on scsi0. Windows Setup will need vioscsi.
Decision: If the disk is virtio0, plan to load viostor instead. If it’s sata0, you shouldn’t need VirtIO drivers to see it.
Task 2: Confirm the disk volume exists on the Proxmox storage
cr0x@server:~$ pvesm list rpool | grep vm-104-disk-0
rpool:vm-104-disk-0 raw 128849018880 104
What it means: The volume exists and is visible to Proxmox.
Decision: If it’s missing, you’re not dealing with VirtIO drivers—you’re dealing with storage mapping, deletion, or the wrong VM ID.
Task 3: Check if a backup snapshot or lock is blocking disk operations
cr0x@server:~$ qm status 104
status: stopped
What it means: VM is stopped; no active lock indicated here.
Decision: If you see a lock in the GUI or tasks log, clear the root cause before you expect consistent disk attachment.
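If the GUI does show a lock, it lives in the VM config and can be inspected from the host and, once you’ve confirmed the task that set it is truly dead, cleared. A cautious sketch:
# A locked VM carries a lock line in its config (backup, migrate, snapshot, ...)
cr0x@server:~$ qm config 104 | grep '^lock'
# Clear a stale lock only after the owning task has definitely finished or died
cr0x@server:~$ qm unlock 104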
Task 4: Verify the VirtIO ISO is present on the host
cr0x@server:~$ ls -lh /var/lib/vz/template/iso/virtio-win.iso
-rw-r--r-- 1 root root 705M Dec 2 11:40 /var/lib/vz/template/iso/virtio-win.iso
What it means: The ISO exists on local storage in the standard Proxmox ISO path.
Decision: If it’s missing, upload it properly; don’t mount a random file and call it a day.
Task 5: Confirm the VM is actually mounting the VirtIO ISO as a CD-ROM
cr0x@server:~$ qm config 104 | egrep 'ide2|ide3|sata2'
ide2: local:iso/Win11_23H2.iso,media=cdrom
ide3: local:iso/virtio-win.iso,media=cdrom
What it means: VirtIO ISO is attached as a CD-ROM device.
Decision: If it’s not attached, Windows Setup will never find the driver. Attach it, reboot the VM into Setup, and try again.
Task 6: Check QEMU process parameters for the VM (did Proxmox apply what you think?)
cr0x@server:~$ ps -ef | grep -E 'kvm.*-id 104' | head -n 1
root 22188 1 12 11:52 ? 00:01:44 /usr/bin/kvm -id 104 -name win11-prod -machine type=pc-q35-8.1 ... -device virtio-scsi-pci,id=scsihw0 ...
What it means: QEMU is running with a VirtIO SCSI device.
Decision: If you expected SATA but see VirtIO devices, your config is not what you think—fix the mismatch before touching Windows.
Task 7: Check host kernel logs for disk/storage errors around VM start
cr0x@server:~$ journalctl -k --since "30 min ago" | egrep -i 'zfs|scsi|blk|i/o error|qemu' | tail -n 15
Dec 26 11:51:10 server kernel: zfs: spa_sync: doing sync pass
Dec 26 11:51:12 server kernel: blk_update_request: I/O error, dev zd16, sector 123456 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Dec 26 11:51:12 server kernel: Buffer I/O error on dev zd16, logical block 15432, async page read
What it means: The host is throwing I/O errors. This can surface in the guest as “no disk” or corrupted installs.
Decision: Stop blaming VirtIO drivers and investigate host storage health (pool status, underlying disks, controller issues) before continuing.
Task 8: Check ZFS pool health (if you’re on ZFS)
cr0x@server:~$ zpool status -x
all pools are healthy
What it means: No known ZFS issues.
Decision: If it reports degraded/faulted, fix that first. Windows Setup failing to see a disk is not always a driver story.
Task 9: Confirm the VM’s firmware/boot mode (UEFI vs SeaBIOS)
cr0x@server:~$ qm config 104 | egrep 'bios|efidisk0|machine'
bios: ovmf
efidisk0: rpool:vm-104-disk-1,size=4M
machine: q35
What it means: The VM uses UEFI (OVMF) with an EFI disk and Q35 machine type.
Decision: Keep it consistent. If you installed Windows under UEFI, don’t flip to SeaBIOS later unless you enjoy bootloader archaeology.
Task 10: Validate NIC model (because you’ll need network later to finish setup)
cr0x@server:~$ qm config 104 | grep '^net0'
net0: virtio=DE:AD:BE:EF:10:40,bridge=vmbr0
What it means: VirtIO NIC is used. Windows Setup may not have a network driver either.
Decision: If you need network during Setup (for domain join, drivers, updates), be ready to load NetKVM from the VirtIO ISO too—or temporarily use an emulated Intel E1000 NIC.
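If you’d rather not juggle NetKVM during Setup, a temporary emulated NIC is a legitimate shortcut. A sketch, assuming vmbr0 as in the config above; note that redefining net0 without pinning a MAC address generates a new one:
# Temporary Intel E1000 NIC that Windows recognizes out of the box
cr0x@server:~$ qm set 104 --net0 e1000,bridge=vmbr0
# Switch back to VirtIO once the NetKVM driver is installed in the guest
cr0x@server:~$ qm set 104 --net0 virtio,bridge=vmbr0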
Task 11: Check Proxmox task logs for recent hardware changes
cr0x@server:~$ tail -n 30 /var/log/pve/tasks/index
UPID:server:00005A1C:0002A9F4:676D8B6E:qmset:104:root@pam:
UPID:server:00005A9F:0002B0A1:676D8BAA:qmstart:104:root@pam:
UPID:server:00005B10:0002B733:676D8BE2:qmstop:104:root@pam:
What it means: Someone changed VM settings and started/stopped it.
Decision: If disk bus/controller was changed recently, assume driver mismatch first. Roll back or apply the correct migration procedure.
Task 12: Inspect the effective disk mapping lines (SCSI vs VirtIO vs SATA)
cr0x@server:~$ qm config 104 | egrep '^(scsi|virtio|sata|ide)[0-9]+:'
scsi0: rpool:vm-104-disk-0,discard=on,iothread=1,size=120G
ide2: local:iso/Win11_23H2.iso,media=cdrom
ide3: local:iso/virtio-win.iso,media=cdrom
What it means: The only hard disk is VirtIO SCSI-backed (scsi0). If Windows can’t see it, you’re missing vioscsi in Setup.
Decision: Don’t change five things at once. Fix driver loading first.
Task 13: If you suspect a “disk not attached” bug, compare running config vs disk files on storage
cr0x@server:~$ ls -lh /rpool/images/104/
total 121G
-rw-r----- 1 root root 120G Dec 26 11:50 vm-104-disk-0
-rw-r----- 1 root root 4.0M Dec 26 11:45 vm-104-disk-1
What it means: Disk files exist where expected (example for a directory-like ZFS dataset mount). Your Proxmox mapping is probably fine.
Decision: If the disk file is missing, don’t keep reinstalling Windows. Restore from backup or fix storage mapping.
Task 14: Check for host CPU virtualization flags and KVM health (rare, but eliminates a class of weirdness)
cr0x@server:~$ egrep -m 1 '(vmx|svm)' /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ...
What it means: Hardware virtualization extensions are present.
Decision: If KVM is misconfigured, you’ll see broader VM problems, not just “no disk.” This is a sanity check, not your primary suspect.
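Two more one-liners close out this sanity check; a sketch that confirms the KVM modules are loaded and the device node exists:
# KVM modules should be loaded (kvm plus kvm_intel or kvm_amd)
cr0x@server:~$ lsmod | grep -E '^kvm'
# The KVM device node should exist
cr0x@server:~$ ls -l /dev/kvm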
Common mistakes: symptom → root cause → fix
This section is the stuff that burns hours in real environments because everyone assumes the wrong layer.
1) Symptom: Windows Setup shows no disks at all
- Root cause: VirtIO storage controller presented, but no VirtIO storage driver loaded in Setup.
- Fix: Mount the VirtIO ISO, click “Load driver,” choose vioscsi for VirtIO SCSI or viostor for VirtIO Block, matching your Windows version and amd64.
2) Symptom: “No device drivers were found” when browsing VirtIO ISO
- Root cause: Wrong folder (wrong OS version folder), wrong architecture, or you’re using VirtIO Block but browsing vioscsi (or vice versa).
- Fix: Re-check the Proxmox disk bus, then select the correct driver path. If in doubt: VirtIO SCSI → \vioscsi\...\amd64; VirtIO Block → \viostor\...\amd64.
3) Symptom: Existing Windows VM bluescreens with INACCESSIBLE_BOOT_DEVICE after changing controller
- Root cause: Storage driver not installed/boot-start prior to controller change.
- Fix: Roll back to the old controller to boot. Then stage drivers using the “add second VirtIO disk” method. Only then switch the boot disk bus.
4) Symptom: Disk appears in Setup after loading driver, but install fails or corrupts later
- Root cause: Host storage instability, I/O errors, or unsafe cache mode under power-loss conditions.
- Fix: Check host logs and pool health. Use conservative caching until you can prove durability end-to-end (and have a UPS and correct write-barrier semantics in your stack).
5) Symptom: Windows installs, but no network during setup or after first boot
- Root cause: VirtIO NIC selected but NetKVM driver not installed.
- Fix: Load NetKVM driver from VirtIO ISO during Setup, or temporarily use an emulated NIC to get online and install VirtIO package later.
6) Symptom: Windows sees the disk, but performance is awful (high latency, low IOPS)
- Root cause: Wrong disk settings (no iothread, suboptimal cache mode), host contention, or storage backend saturated.
- Fix: Enable iothread for busy disks, review cache mode carefully, and measure host storage latency. Don’t “optimize” blind.
7) Symptom: After driver install, Windows still doesn’t see the disk until reboot
- Root cause: Driver loaded but device enumeration hasn’t refreshed in the environment.
- Fix: In Setup, rescan or go back/forward. In installed Windows, rescan disks in Disk Management or reboot. This is normal; don’t overreact.
Checklists / step-by-step plan
Checklist A: New Windows install on VirtIO SCSI (recommended path)
- Create VM with UEFI (OVMF) if you want modern Windows defaults; keep Q35 machine type consistent.
- Set SCSI Controller to VirtIO SCSI single.
- Add disk as SCSI (scsi0); enable iothread if you expect sustained I/O; consider discard if thin-provisioned/SSD-backed.
- Mount Windows ISO as CD-ROM.
- Mount VirtIO ISO as second CD-ROM.
- Boot Windows ISO, reach disk selection, click “Load driver.”
- Load the vioscsi driver matching OS version and amd64.
- Confirm the disk appears; install Windows.
- After first boot, install full VirtIO driver package (storage, net, balloon, qemu-ga) from VirtIO ISO.
- Reboot, then validate Device Manager has no unknown devices for storage/network.
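For repeatable builds, most of Checklist A can be expressed as one VM creation command. Treat the following as a sketch under assumptions, not a template: the VM ID, storage names, ISO file names, and sizes are the ones used in the examples above, and options like the TPM state disk reflect a typical Windows 11 setup:
# Create a Windows 11 VM: UEFI (OVMF), Q35, TPM 2.0, VirtIO SCSI single, both ISOs attached
cr0x@server:~$ qm create 104 --name win11-prod --ostype win11 --machine q35 --bios ovmf \
    --cores 4 --memory 8192 --cpu host \
    --efidisk0 rpool:1,efitype=4m,pre-enrolled-keys=1 --tpmstate0 rpool:1,version=v2.0 \
    --scsihw virtio-scsi-single --scsi0 rpool:120,discard=on,iothread=1 \
    --ide2 local:iso/Win11_23H2.iso,media=cdrom --ide3 local:iso/virtio-win.iso,media=cdrom \
    --net0 virtio,bridge=vmbr0 --boot 'order=scsi0;ide2'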
Checklist B: Convert an existing Windows VM from SATA to VirtIO SCSI without downtime drama
- Snapshot or back up the VM. This is not optional if you’re paid to be responsible.
- Mount VirtIO ISO in the existing VM.
- Add a small test disk as SCSI under VirtIO SCSI single (scsi1) while keeping the boot disk unchanged.
- Boot Windows; install VirtIO drivers (or the VirtIO package). Confirm Windows detects the new disk and controller.
- Shut down the VM cleanly.
- Change the boot disk bus/controller to SCSI (VirtIO SCSI single).
- Boot and verify Windows comes up without storage-related bluescreens.
- Remove the test disk if it was only for driver staging.
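Here is roughly what the shutdown-and-switch part of that checklist looks like from the host; a sketch assuming VM 104, a boot disk currently on sata0, the volume name from earlier examples, and storage that supports snapshots:
# Snapshot first so you have a clean rollback point
cr0x@server:~$ qm snapshot 104 pre_virtio_switch
# Detach the boot disk from SATA; the volume is kept and reappears as unused0
cr0x@server:~$ qm set 104 --delete sata0
# Reattach the same volume on the VirtIO SCSI controller
cr0x@server:~$ qm set 104 --scsi0 rpool:vm-104-disk-0,discard=on,iothread=1
# Point the boot order at the new bus entry before starting the VM
cr0x@server:~$ qm set 104 --boot 'order=scsi0;ide2;net0'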
Checklist C: If you are stuck mid-install and need a tactical workaround
- Switch the boot disk temporarily to SATA (AHCI) so Windows Setup sees it without drivers.
- Install Windows.
- Inside Windows, install VirtIO drivers from the VirtIO ISO.
- Use Method A migration to move to VirtIO SCSI later.
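The first bullet of that workaround is the same detach-and-reattach dance in the other direction; a sketch assuming the disk is currently defined as scsi0 on VM 104:
# Detach the disk from the VirtIO SCSI controller; the volume becomes unused0
cr0x@server:~$ qm set 104 --delete scsi0
# Reattach the same volume on SATA so Windows Setup sees it without extra drivers
cr0x@server:~$ qm set 104 --sata0 rpool:vm-104-disk-0
# Boot from the SATA entry for the install
cr0x@server:~$ qm set 104 --boot 'order=sata0;ide2'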
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
The org had a standard “golden template” for Windows VMs. Someone built it months ago, using SATA disks because it “worked everywhere.” Over time, the virtualization team modernized everything else: Q35, UEFI, VirtIO NICs. Storage stayed SATA because nobody wanted to touch it. Then a new engineer joined and did what new engineers do: “fix” the thing that looks obviously suboptimal.
They migrated several VMs from SATA to VirtIO SCSI, one after another, during a maintenance window. They assumed Windows would just detect the controller on boot and install drivers automatically, because that’s what Linux does. First VM bluescreened. Second VM bluescreened. At that point, the window turned into a fire drill and the “maintenance” channel started filling with creative language.
The root cause wasn’t exotic. The VirtIO driver wasn’t staged as a boot-start driver. Windows didn’t have a way to talk to the boot disk, so it couldn’t load the OS to install the driver needed to talk to the disk needed to load the OS. It’s a perfect little circle of doom.
Recovery was straightforward once someone took a breath: switch the disks back to SATA to boot, mount the VirtIO ISO, install the driver package, add a dummy VirtIO SCSI disk to force enumeration, confirm drivers, and then re-do the migration. But the downtime happened anyway, because the wrong assumption was made at the start: “a driver will magically appear at boot.”
The lasting fix was not a heroic script. It was a migration runbook with a single non-negotiable step: stage the driver while the VM is still bootable.
Mini-story 2: The optimization that backfired
A different shop was proud of their Proxmox cluster. Solid hardware, ZFS, and enough monitoring to wake the dead. They noticed disk benchmarks in a Windows VM were “lower than expected,” and someone decided to tune their way to glory. They changed cache mode to writeback, enabled discard everywhere, and turned on iothread for every disk, across a fleet, in a hurry.
Performance did improve—in the short term. Then a host reboot happened after an unrelated kernel update. A subset of VMs came back with file system errors and application-level corruption. The postmortem was painful because it wasn’t a single smoking gun; it was a stack of assumptions. Writeback caching can be safe if you understand where the write acknowledgment is coming from and what happens on power loss or abrupt host resets. In their environment, that safety story was not fully proven.
They had to roll back cache settings, restore some data from backups, and explain to management why “performance tuning” caused a reliability event. The hard lesson: storage performance tweaks are not free. If you can’t explain the durability model to a skeptical SRE, don’t ship it.
The end state was better: they kept iothread for specific high-I/O disks, used conservative caching by default, and only enabled aggressive caching where the app could tolerate risk and the infrastructure had the right power-loss protections.
Mini-story 3: The boring but correct practice that saved the day
One team ran a small private cloud with Proxmox and a mix of Linux and Windows. Nothing fancy. Their “boring practice” was a standardized VM build checklist: always attach VirtIO ISO, always install VirtIO package and QEMU guest agent, always verify Device Manager is clean, always snapshot before changing disk controllers.
Then an upgrade project arrived: move Windows VMs to VirtIO SCSI single for performance and operational consistency. It wasn’t glamorous. It was a lot of repeats: add a second VirtIO disk, boot, confirm driver, shut down, switch controller, boot, validate, remove staging disk.
Midway through the rollout, one VM refused to boot after the switch. No drama. They reverted using the snapshot, booted, discovered the VirtIO ISO mounted was an ancient build with driver signing issues under their Secure Boot settings, updated the ISO, staged drivers again, and completed the migration.
No incident ticket, no emergency call, no executive attention. Boring won. And in production systems, boring is often the highest compliment.
One operational quote (paraphrased idea)
Paraphrased idea (Werner Vogels): “Everything fails, all the time,” so engineer systems and procedures assuming failure is normal.
FAQ
1) Which VirtIO driver do I need: vioscsi or viostor?
VirtIO SCSI (Proxmox SCSI controller set to VirtIO SCSI / VirtIO SCSI single) typically needs vioscsi. VirtIO Block (virtio0 disk) needs viostor. Check qm config to confirm what you actually configured.
2) Why does Linux see the disk but Windows doesn’t?
Linux kernels generally include VirtIO drivers by default. Windows does not include VirtIO storage drivers in the installer environment, so you must load them from the VirtIO ISO.
3) Can I install Windows on SATA first and switch to VirtIO later?
Yes, and it’s a reasonable tactical workaround. Just don’t hot-swap the boot disk controller without staging the driver first. Add a second VirtIO disk, install drivers, confirm detection, then switch.
4) I loaded a driver in Windows Setup, but the disk still doesn’t appear. Now what?
Re-check the controller type (VirtIO SCSI vs VirtIO Block), the driver folder (correct Windows version + amd64), and confirm the VirtIO ISO is actually mounted. If host logs show I/O errors, stop and fix the host storage first.
5) Does Windows 11 Secure Boot block VirtIO drivers?
It can if you’re using an outdated VirtIO ISO with signing issues relative to your Secure Boot policy. Use a recent VirtIO ISO and keep VM firmware choices consistent (UEFI stays UEFI).
6) What’s the best disk controller choice for Windows on Proxmox?
For most production workloads: VirtIO SCSI single with SCSI disks. It’s a good balance of performance, features, and operational predictability.
7) I changed the controller and now the VM won’t boot. What’s the fastest recovery?
Switch the disk back to the old controller (SATA/IDE) so Windows boots. Then install VirtIO drivers properly (preferably by adding a secondary VirtIO disk). After drivers are confirmed, retry the controller change.
8) Do I need the QEMU Guest Agent for disk visibility?
No. It’s not a storage driver. But you still want it for clean shutdown, better orchestration, and fewer “why won’t this VM stop” moments.
9) Should I enable discard (TRIM) on the virtual disk?
If your backend benefits from space reclamation (thin provisioning, SSD pools) and you understand the storage stack behavior, yes. If you’re unsure, start conservative and measure. The wrong discard behavior won’t usually hide the disk, but it can create surprise performance patterns.
10) Why does the VirtIO ISO have so many folders?
Because Windows versions and signing requirements differ, and drivers are packaged per OS family and architecture. The installer needs the exact matching inf/sys/cat set. “Close enough” is not how Windows drivers work.
Conclusion: next steps that keep you out of trouble
If Windows can’t see your disk on Proxmox, treat it like a contract mismatch: Proxmox presented a controller; Windows needs the right driver to speak it. Fix the mismatch, don’t random-walk your settings.
- New installs: use VirtIO SCSI single from day one and load vioscsi in Setup from the VirtIO ISO.
- Existing installs: stage drivers first (add a secondary VirtIO disk), then switch the boot disk/controller.
- When it still doesn’t work: validate the host storage health and VM mapping with commands, not vibes.
Do this consistently and you’ll stop having “Windows can’t find the disk” as a recurring ritual. You’ll also sleep better, which is the real performance metric.