You can migrate a VM in an afternoon—or spend a weekend staring at a black screen that says “No bootable device.” ESXi to Proxmox migrations fail for the same boring reasons every time: disk controllers change, firmware mode changes, NIC names change, and Windows reacts like you swapped its steering wheel for a baguette.
This is the production-minded way to do it: preserve evidence, choose the right bus types, pre-stage drivers, verify boot mode, and only then switch to performance settings like VirtIO. You’re here to move workloads, not collect postmortems.
Facts and history that actually matter
Some context makes the failure modes less mysterious and more predictable. Here are the bits I’ve seen pay off in real migrations:
- VMDK is older than most “cloud-first” strategies. VMware introduced VMDK in the early 2000s; it has sparse/thick variants and a grab bag of subtypes that conversion tools don’t always agree on.
- KVM has been in the Linux kernel since 2007. Proxmox rides that foundation, which means Linux kernel changes (drivers, naming, initramfs) can be more relevant than “Proxmox settings.”
- VirtIO came out of the KVM/QEMU ecosystem to avoid hardware emulation overhead. Great for performance, terrible if your guest OS doesn’t have the driver at boot.
- UEFI vs BIOS isn’t a cosmetic toggle. If ESXi installed the guest in UEFI mode and you boot it in BIOS (or the reverse), you often get a “disk looks empty” situation.
- VMware’s paravirtual SCSI (PVSCSI) is not VirtIO. They both sound “paravirtual,” but the driver stacks are unrelated. Windows especially will punish assumptions here.
- NIC “predictable names” changed the human mental model. Linux moved from eth0/eth1 to names like ens18/enp0s3; migration makes that shift painfully visible.
- qcow2 is not “just another disk format.” It supports snapshots and compression but adds indirection. On some storage backends, raw is faster and simpler.
- OVA is a tarball, not magic. It usually contains an OVF descriptor plus one or more VMDKs. Treat it like an archive you can inspect, not a sacred artifact.
- Windows activation and hardware IDs have always been fickle. Moving hypervisors changes enough virtual hardware that you should plan for activation prompts and licensing compliance checks.
And one reliability quote, because it’s still the right mindset when you’re about to flip a production workload onto different virtual hardware:
Werner Vogels (paraphrased idea): “Everything fails, all the time—design and operate as if that’s the default.”
Decision tree: pick your import path
There are three common paths. Pick one, commit, and avoid mixing “quick hacks” mid-flight.
Path A: You have an OVA/OVF export
Best when you can export cleanly from vCenter/ESXi and you want a portable bundle. You’ll extract the VMDK(s) and convert with qemu-img or let Proxmox import the disk.
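Since an OVA is just a tar archive, you can inspect it before committing to a conversion plan. A minimal sketch, with hypothetical file names:
cr0x@server:~$ tar -tf web01.ova
web01.ovf
web01.mf
web01-disk1.vmdk
cr0x@server:~$ tar -xf web01.ova
Expect an OVF descriptor, a manifest, and one or more VMDKs. If a disk is missing from the listing, stop before you convert air.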
Path B: You have the VMDK files from a datastore
Best when you can SCP from ESXi or browse the datastore. You’ll copy VMDK(s) and convert/import.
Path C: You have neither, only a running VM you can’t power off
Then you’re doing replication, backup restore, or block-level mirroring. This guide assumes you can get a cold export or at least a consistent snapshot. If you can’t, your “migration” is a reliability event in disguise.
Prep on ESXi: export cleanly and capture the truth
Your goal on ESXi is simple: freeze the facts about the VM before you move it. What disk controller? BIOS or UEFI? Which NIC type? Which VLANs? Which IPs? Once you’re in Proxmox, those details become guesses, and guesses are how you end up with a VM that “boots fine” but can’t talk to anything.
Task 1: Record VM firmware, disks, and NIC types (from ESXi shell)
cr0x@server:~$ ssh root@esxi01 'vim-cmd vmsvc/getallvms | head'
Vmid Name File Guest OS Version Annotation
1 dc01 [datastore1] dc01/dc01.vmx windows9Server64Guest vmx-13
2 web01 [datastore1] web01/web01.vmx ubuntu64Guest vmx-14
What the output means: you have VMIDs and VMX paths. You’ll use the VMX to confirm firmware and devices.
Decision: identify the VMID you’re migrating and stash its VMX path in your notes.
Task 2: Pull the VMX and check firmware and device model
cr0x@server:~$ ssh root@esxi01 'grep -E "firmware|scsi|nvme|ethernet.*virtualDev" /vmfs/volumes/datastore1/web01/web01.vmx'
firmware = "efi"
scsi0.virtualDev = "pvscsi"
ethernet0.virtualDev = "vmxnet3"
What it means: UEFI guest, VMware paravirtual SCSI, and vmxnet3 NIC.
Decision: plan for Proxmox UEFI (OVMF) and understand the guest currently relies on PVSCSI and vmxnet3 drivers—not VirtIO.
Task 3: Shutdown cleanly (or take a snapshot you can justify)
cr0x@server:~$ ssh root@esxi01 'vim-cmd vmsvc/power.getstate 2'
Retrieved runtime info
Powered on
Decision: if you can schedule downtime, power off. If you can’t, take a quiesced snapshot only if your guest tools and application support it. Otherwise accept that you’re exporting “maybe-consistent” data.
cr0x@server:~$ ssh root@esxi01 'vim-cmd vmsvc/power.shutdown 2; sleep 5; vim-cmd vmsvc/power.getstate 2'
Retrieved runtime info
Powered off
What it means: clean shutdown succeeded. That’s the gold standard for disk conversion.
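If you can’t power off and you accept the snapshot route, vim-cmd can create a quiesced snapshot. A sketch, assuming VMID 2 from Task 1 and working VMware Tools in the guest (arguments: name, description, includeMemory, quiesced):
cr0x@server:~$ ssh root@esxi01 'vim-cmd vmsvc/snapshot.create 2 pre-migration "" 0 1'
Export from that snapshot, then delete it afterward so the delta file doesn’t quietly eat the datastore.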
Task 4: Export/copy the disks (example: scp from ESXi)
cr0x@server:~$ scp root@esxi01:/vmfs/volumes/datastore1/web01/web01.vmdk .
web01.vmdk 100% 512B 2.1KB/s 00:00
What it means: that tiny file is the descriptor. The data is usually in -flat.vmdk or similar.
Decision: always copy the flat/extents too, or use an export method that produces a single stream (OVA) to avoid missing parts.
cr0x@server:~$ scp root@esxi01:/vmfs/volumes/datastore1/web01/web01-flat.vmdk .
web01-flat.vmdk 100% 40G 110.2MB/s 06:12
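If juggling descriptor and flat files feels fragile, VMware’s ovftool produces a single OVA stream instead. A sketch, assuming ovftool is installed on a workstation that can reach the host and the VM is registered as web01:
cr0x@server:~$ ovftool vi://root@esxi01/web01 ./web01.ova
You’ll be prompted for credentials; the resulting OVA bundles descriptor, manifest, and disks, so nothing gets left behind.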
Prep on Proxmox: storage, networking, and guardrails
On the Proxmox side, the two big risks are: (1) dumping big raw images onto the wrong storage, then discovering you can’t snapshot/replicate the way you assumed, and (2) plugging the VM into the wrong bridge/VLAN and “successfully” isolating it from the world.
Task 5: Verify storage targets and free space
cr0x@server:~$ pvesm status
Name Type Status Total Used Avail % Content
local dir active 98.0G 11.2G 81.6G 11 iso,vztmpl,backup
local-lvm lvmthin active 900.0G 612.4G 287.6G 68 rootdir,images
zfspool zfspool active 2.7T 1.1T 1.6T 40 images,rootdir
What it means: you have multiple destinations. ZFS pool has room and supports snapshots; LVM-thin also supports snapshots but behaves differently under pressure.
Decision: pick the storage backend intentionally. If you plan on heavy snapshots/replication, ZFS is often the least surprising.
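Free space isn’t the whole story; confirm the pool is actually healthy before pouring 40 GiB into it. A quick check using the pool name from the status output above:
cr0x@server:~$ zpool status -x zfspool
pool 'zfspool' is healthy
cr0x@server:~$ zfs list -o name,used,avail zfspool
A degraded pool will still happily accept your import and then make the first resilver your problem.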
Task 6: Confirm bridges and VLAN awareness
cr0x@server:~$ ip -br link
lo UNKNOWN 00:00:00:00:00:00
eno1 UP 3c:ec:ef:11:22:33
vmbr0 UP 3c:ec:ef:11:22:33
vmbr1 UP 3c:ec:ef:aa:bb:cc
Decision: match your ESXi portgroup/VLAN intent to a Proxmox bridge and tag. If you can’t describe the network in one sentence, stop and map it before importing.
Task 7: Check Proxmox host health before you blame the VM
cr0x@server:~$ pveversion -v
proxmox-ve: 8.2.2 (running kernel: 6.8.12-4-pve)
pve-manager: 8.2.4 (running version: 8.2.4/2a1b7d2c)
Decision: a modern kernel and QEMU reduce compatibility weirdness, especially with UEFI and storage. If you’re on something ancient, expect extra friction.
Joke #1: Virtualization is just pretending really hard, and somehow payroll depends on it.
Convert disks: VMDK/OVA to Proxmox storage
Conversion is where you can silently ruin your day. The rules are simple:
- Confirm you have the actual data extents (flat/sesparse/etc.).
- Convert using a tool that understands VMDK subformats.
- Pick raw for simplicity/performance on ZFS/LVM-thin unless you have a reason for qcow2.
- Validate the resulting image size and basic integrity before attaching it.
Task 8: Inspect the VMDK descriptor (catch multi-extent surprises)
cr0x@server:~$ head -n 20 web01.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
createType="vmfs"
# Extent description
RW 83886080 VMFS "web01-flat.vmdk"
What it means: one extent of 83886080 512-byte sectors, i.e. 40 GiB. Good. If you see multiple extent lines, you must copy all of the referenced files and keep relative paths intact.
Decision: if it’s multi-extent and you don’t have all parts, stop. Re-export as OVA or recopy properly.
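Extent lines start with an access mode (RW/RDONLY/NOACCESS), which makes them easy to enumerate and cross-check against what you actually copied:
cr0x@server:~$ grep -E '^(RW|RDONLY|NOACCESS) ' web01.vmdk
RW 83886080 VMFS "web01-flat.vmdk"
cr0x@server:~$ ls -lh web01*.vmdk
One extent line, one flat file on disk: you’re clear to convert.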
Task 9: Convert VMDK to raw (preferred on ZFS for fewer layers)
cr0x@server:~$ qemu-img convert -p -f vmdk -O raw web01.vmdk web01.raw
(0.00/100%)
(12.50/100%)
(25.00/100%)
(37.50/100%)
(50.00/100%)
(62.50/100%)
(75.00/100%)
(87.50/100%)
(100.00/100%)
What it means: conversion completed. No errors. Good start, not proof.
Decision: if you’re targeting LVM-thin or ZFS, raw is fine. If you need file-based storage with snapshots and don’t have ZFS/LVM-thin, qcow2 can be useful.
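If qcow2 is the right call for your storage, the conversion is one flag away, and qemu-img check can validate the result (check understands qcow2, not raw). A sketch:
cr0x@server:~$ qemu-img convert -p -f vmdk -O qcow2 web01.vmdk web01.qcow2
cr0x@server:~$ qemu-img check web01.qcow2
No errors were found on the image.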
Task 10: Verify the image info (sanity check before import)
cr0x@server:~$ qemu-img info web01.raw
image: web01.raw
file format: raw
virtual size: 40 GiB (42949672960 bytes)
disk size: 40 GiB
What it means: raw is fully allocated; that’s expected. If you expected thin provisioning, you’ll see it only with qcow2 or with the target storage (like ZVOL/LVM-thin) handling sparseness.
Decision: if disk size is wildly off (e.g., 4 GiB instead of 40 GiB), you converted the wrong thing or missed extents.
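For a stronger check than eyeballing sizes, qemu-img can compare the source and converted images content-by-content. Slow on large disks, but definitive:
cr0x@server:~$ qemu-img compare -f vmdk -F raw web01.vmdk web01.raw
Images are identical.
Anything other than “identical” means you re-convert before attaching, not after.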
Task 11: Import the disk into Proxmox storage with qm importdisk
cr0x@server:~$ qm create 120 --name web01 --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
create VM 120: success
cr0x@server:~$ qm importdisk 120 web01.raw zfspool
importing disk 'web01.raw' to VM 120 ...
transferred 40.0 GiB of 40.0 GiB (100.00%)
Successfully imported disk as 'unused0:zfspool:vm-120-disk-0'
What it means: the disk is now a Proxmox-managed volume, parked as unused0 until you attach it to a bus (Task 12).
Decision: from here on, treat the imported volume as the source of truth; keep the original VMDK/raw somewhere safe until the VM is stable and backed up.
Create the VM in Proxmox: controllers, firmware, CPU, and RAM
This is where most “it boots on ESXi” assumptions go to die. ESXi virtual hardware defaults are not Proxmox defaults. Your job is to replicate what matters for boot, then improve performance after it’s stable.
Attach the imported disk correctly
After qm importdisk, you still need to attach the disk to a controller and set boot order.
Task 12: Attach disk as SATA first (stability-first approach)
cr0x@server:~$ qm set 120 --sata0 zfspool:vm-120-disk-0
update VM 120: -sata0 zfspool:vm-120-disk-0
What it means: the disk is attached to a SATA controller. Almost every OS can boot SATA without extra drivers.
Decision: use SATA (or IDE for ancient OS) for the first successful boot. Switch to VirtIO SCSI later for performance.
Task 13: Match firmware mode (UEFI vs BIOS)
cr0x@server:~$ qm set 120 --bios ovmf --efidisk0 zfspool:1,efitype=4m,pre-enrolled-keys=1
update VM 120: -bios ovmf -efidisk0 zfspool:vm-120-disk-1,efitype=4m,pre-enrolled-keys=1,size=4M
What it means: you enabled OVMF (UEFI) and created an EFI vars disk. If the ESXi VMX said firmware="efi", this is the match. Note that pre-enrolled-keys=1 also enables Secure Boot with the stock Microsoft keys; use pre-enrolled-keys=0 if your guest isn’t Secure Boot-ready.
Decision: if the guest was BIOS on ESXi, do not enable OVMF. If you don’t know, check the VMX and the partitioning (EFI System Partition vs MBR).
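If the VMX is gone or ambiguous, the partition table is the tiebreaker. fdisk reads the raw image directly:
cr0x@server:~$ fdisk -l web01.raw
Look for “Disklabel type: gpt” plus an “EFI System” partition (OVMF territory) versus “Disklabel type: dos” (SeaBIOS territory).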
Task 14: Set boot order explicitly
cr0x@server:~$ qm set 120 --boot order=sata0
update VM 120: -boot order=sata0
Decision: don’t trust defaults. Make the VM boot the disk you imported, not an empty virtual CD-ROM.
Task 15: Start and watch console output (don’t do blind reboots)
cr0x@server:~$ qm start 120
Starting VM 120
cr0x@server:~$ qm status 120
status: running
Decision: open the Proxmox console and observe. If it fails, you want the exact error string. “It didn’t boot” is not a symptom; it’s a confession.
Windows imports: VirtIO drivers, boot fixes, and sane sequencing
Windows doesn’t hate you personally. It just expects storage and network controllers to stay consistent across boots, and it expects the needed drivers to exist before it tries to mount the boot volume.
The safest Windows sequence is:
- Boot on a “universal” controller (SATA) and a “universal” NIC model (E1000) if needed.
- Install VirtIO drivers inside Windows.
- Switch disk controller to VirtIO SCSI (or VirtIO Block) and NIC to VirtIO.
- Reboot and validate.
VirtIO driver strategy that actually works
Mount the VirtIO driver ISO in Proxmox, then install drivers from Device Manager or run the installer if available. The key is: the storage driver must be present and enabled at boot.
Task 16: Add VirtIO driver ISO (example assumes ISO already uploaded)
cr0x@server:~$ qm set 120 --ide2 local:iso/virtio-win.iso,media=cdrom
update VM 120: -ide2 local:iso/virtio-win.iso,media=cdrom
Decision: if Windows boots but has no network, that’s fine. Don’t panic and “optimize” anything yet. Install drivers first.
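Once Windows sees the mounted ISO (assumed here as drive D:), stage the boot-critical drivers before any bus switch. A sketch from an elevated prompt inside the guest; the w10 folder covers Windows 10/Server 2016-era builds, so adjust to your version:
C:\> pnputil /add-driver D:\vioscsi\w10\amd64\vioscsi.inf /install
C:\> pnputil /add-driver D:\NetKVM\w10\amd64\netkvm.inf /install
Recent virtio-win ISOs also ship virtio-win-guest-tools.exe at the root, which installs drivers plus the guest agent in one pass.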
Task 17: Switch NIC to a compatibility model temporarily (if needed)
cr0x@server:~$ qm set 120 --net0 e1000,bridge=vmbr0
update VM 120: -net0 e1000,bridge=vmbr0
What it means: you’re using an emulated Intel NIC. Slower, but Windows understands it without extra drivers.
Decision: if you can’t get into the box to install VirtIO, use E1000 to regain network long enough to finish driver work.
After drivers: move to VirtIO SCSI
VirtIO SCSI is the sweet spot for most workloads. It supports TRIM/UNMAP behavior better in many setups and is broadly tested.
Task 18: Add VirtIO SCSI controller and move disk
cr0x@server:~$ qm set 120 --scsihw virtio-scsi-single
update VM 120: -scsihw virtio-scsi-single
cr0x@server:~$ qm set 120 --delete sata0
update VM 120: delete sata0
cr0x@server:~$ qm set 120 --scsi0 zfspool:vm-120-disk-0
update VM 120: -scsi0 zfspool:vm-120-disk-0
cr0x@server:~$ qm set 120 --boot order=scsi0
update VM 120: -boot order=scsi0
What it means: deleting sata0 detaches the disk and parks the volume as unused; reattaching it as scsi0 puts it on the VirtIO SCSI controller. The boot order has to follow the disk to its new bus.
Decision: only detach the SATA disk after VirtIO drivers are installed and you’re confident the next boot will find it. The same volume can’t be attached on two buses at once, which is why the detach comes first.
Task 19: Switch NIC to VirtIO after Windows has the driver
cr0x@server:~$ qm set 120 --net0 virtio,bridge=vmbr0
update VM 120: -net0 virtio,bridge=vmbr0
What it means: Windows will see a new NIC. Your IP config might not follow automatically if it was bound to the old adapter.
Decision: plan for IP reconfiguration or scripting. For servers with static IPs, expect a short window of “where did my IP go?”
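If the static IP didn’t follow the new adapter, rebinding it from an elevated prompt beats clicking through dialogs. Adapter name and addresses below are placeholders:
C:\> netsh interface ipv4 set address name="Ethernet" static 10.10.30.25 255.255.255.0 10.10.30.1
C:\> netsh interface ipv4 set dnsservers name="Ethernet" static 10.10.30.10 primary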
Windows boot failures you’ll actually see
- INACCESSIBLE_BOOT_DEVICE (BSOD): storage driver mismatch. Booted fine on SATA, broke after switching to VirtIO before installing drivers.
- Stuck on spinning dots: sometimes driver issues, sometimes firmware mismatch, sometimes a corrupted BCD after conversion.
- No boot device: wrong boot order or UEFI/BIOS mismatch.
Joke #2: Windows drivers are like umbrellas—you only realize you needed one after you’re already soaked.
Linux imports: initramfs, NIC names, and filesystem UUID traps
Linux migrations are usually easier than Windows, right up until they aren’t. The common breakpoints are:
- initramfs doesn’t include the driver for the new disk controller (VirtIO) so the root filesystem never appears.
- Network interface naming changes (ens160 on VMware becomes ens18 on Proxmox) and your network config points at a ghost device.
- Bootloader entries reference old UUIDs, or the conversion changed something enough that the kernel command line can’t find root.
Task 20: Get the VM’s disk partition map from the Proxmox host (without booting)
cr0x@server:~$ ls -l /dev/zvol/zfspool/vm-120-disk-0
lrwxrwxrwx 1 root root 13 Dec 28 10:12 /dev/zvol/zfspool/vm-120-disk-0 -> ../../zd0
cr0x@server:~$ partprobe /dev/zd0 && lsblk -f /dev/zd0
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
zd0
├─zd0p1 vfat FAT32 1A2B-3C4D
└─zd0p2 ext4 1.0 11111111-2222-3333-4444-555555555555
What it means: likely UEFI (vfat ESP) plus ext4 root. That supports your earlier firmware decision.
Decision: if you expected BIOS/MBR but see an ESP, switch the VM to OVMF and add an EFI disk.
Task 21: If Linux boots but has no network, identify the new NIC name
cr0x@server:~$ qm guest exec 120 -- ip -br a
{
"exitcode": 0,
"out-data": "lo UNKNOWN 127.0.0.1/8 ::1/128\nens18 DOWN \n",
"err-data": ""
}
What it means: the VM sees ens18 but it’s down/unconfigured.
Decision: update your netplan/systemd-networkd/ifcfg scripts to reference the new interface name, or use MAC-based matching.
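MAC-based matching is the migration-proof option, because the kernel can rename the interface again and you won’t care. A minimal netplan sketch for an Ubuntu guest; the filename, MAC, and addressing are placeholders:
# /etc/netplan/01-migrated.yaml (hypothetical)
network:
  version: 2
  ethernets:
    lan0:
      match:
        macaddress: "bc:24:11:aa:bb:cc"
      set-name: lan0
      addresses: [10.10.30.25/24]
      routes:
        - to: default
          via: 10.10.30.1
Apply with netplan apply; the config now follows the MAC, not the name.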
Task 22: Validate the VM sees the disk on VirtIO after switching controllers
cr0x@server:~$ qm guest exec 120 -- lsblk
{
"exitcode": 0,
"out-data": "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS\nvda 252:0 0 40G 0 disk \nvda1 252:1 0 512M 0 part /boot/efi\nvda2 252:2 0 39.5G 0 part /\n",
"err-data": ""
}
What it means: disk appears as /dev/vda (VirtIO). Good.
Decision: if your /etc/fstab references /dev/sda2 instead of UUIDs, fix it before the next reboot.
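The fix is mechanical: confirm the UUID, then key fstab to it. Using the UUID lsblk reported in Task 20:
cr0x@server:~$ qm guest exec 120 -- blkid /dev/vda2
Then the root line in /etc/fstab becomes:
UUID=11111111-2222-3333-4444-555555555555  /  ext4  defaults  0  1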
Task 23: Check initramfs includes VirtIO modules (inside guest)
cr0x@server:~$ qm guest exec 120 -- bash -lc 'lsinitramfs /boot/initrd.img-$(uname -r) | grep -E "virtio|vda|scsi" | head'
{
"exitcode": 0,
"out-data": "usr/lib/modules/6.5.0-28-generic/kernel/drivers/block/virtio_blk.ko\nusr/lib/modules/6.5.0-28-generic/kernel/drivers/scsi/virtio_scsi.ko\n",
"err-data": ""
}
Decision: if VirtIO modules are missing, rebuild initramfs before switching controllers permanently (update-initramfs -u on Debian/Ubuntu, dracut -f on RHEL-like).
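On Debian/Ubuntu, forcing VirtIO modules into the initramfs is two steps; a sketch run inside the guest, with the RHEL-like equivalent as a comment:
cr0x@server:~$ qm guest exec 120 -- bash -lc 'printf "virtio_pci\nvirtio_blk\nvirtio_scsi\nvirtio_net\n" >> /etc/initramfs-tools/modules && update-initramfs -u'
# RHEL-like guests: dracut --force --add-drivers "virtio_blk virtio_scsi virtio_pci virtio_net"
Re-run the Task 23 check afterward; trust the listing, not the exit code.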
NIC mapping: from vSwitches to Linux bridges
ESXi networking is portgroups on vSwitches (standard or distributed). Proxmox networking is Linux bridges (often vmbr0) and optional VLAN tagging. The pitfall is thinking “same VLAN ID” equals “same connectivity.” It doesn’t if the upstream trunking or bridge VLAN-awareness differs.
Map it deliberately
- ESXi portgroup VLAN 0/none → Proxmox bridge untagged on the same physical network.
- ESXi portgroup VLAN X → Proxmox VM NIC tagged VLAN X (tag=X) on a VLAN-aware bridge, or use a dedicated VLAN subinterface/bridge.
- Multiple NICs → map each to the correct bridge; don’t “just put everything on vmbr0” unless you want accidental lateral movement.
Task 24: Show Proxmox bridge VLAN awareness (host-side)
cr0x@server:~$ grep -n -A7 "auto vmbr0" /etc/network/interfaces
7:auto vmbr0
8:iface vmbr0 inet static
9: address 10.10.10.5/24
10: gateway 10.10.10.1
11: bridge-ports eno1
12: bridge-stp off
13: bridge-fd 0
14: bridge-vlan-aware yes
What it means: VLAN-aware bridge. You can tag VM NICs without extra Linux VLAN interfaces.
Decision: if bridge-vlan-aware is not enabled but you need tags, either enable it (carefully) or build the alternative design. Don’t wing it on a production host during business hours.
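If you do enable it, it’s two lines under the bridge stanza in /etc/network/interfaces plus a reload; a sketch assuming ifupdown2 (standard on current Proxmox) and a maintenance window in case the bridge blips:
    bridge-vlan-aware yes
    bridge-vids 2-4094
cr0x@server:~$ ifreload -a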
Task 25: Set a tagged VLAN on a VM NIC
cr0x@server:~$ qm set 120 --net0 virtio,bridge=vmbr0,tag=30
update VM 120: -net0 virtio,bridge=vmbr0,tag=30
Decision: if the guest still can’t reach its gateway, confirm the upstream switch port is trunking VLAN 30 and the host bridge is on the right physical interface.
Fast diagnosis playbook
When the imported VM “doesn’t work,” you need to find the bottleneck fast. Here’s the order that saves time.
First: Does it boot, and what exact failure do you see?
- No bootable device / drops to UEFI shell → wrong firmware mode or boot order, or disk not attached.
- GRUB rescue / kernel panic: unable to mount root → disk controller/driver mismatch or fstab/root UUID mismatch.
- Windows BSOD INACCESSIBLE_BOOT_DEVICE → storage driver not present for controller you selected.
Second: Is the disk present and where?
Check on the Proxmox host: did the disk import correctly and is it attached to the VM on the intended bus?
cr0x@server:~$ qm config 120 | egrep "boot|bios|efidisk|scsi|sata|ide"
bios: ovmf
boot: order=scsi0
efidisk0: zfspool:vm-120-disk-1,efitype=4m,pre-enrolled-keys=1,size=4M
ide2: local:iso/virtio-win.iso,media=cdrom
scsi0: zfspool:vm-120-disk-0
scsihw: virtio-scsi-single
Decision: if the disk is on the wrong bus (e.g., you meant SATA for first boot), fix that before touching anything in the guest.
Third: Is networking broken, or just NIC identity changed?
- Guest boots but no ping → likely VLAN/bridge mismatch or missing drivers/NIC rename.
- Ping works but services down → application issue, firewall, or changed IP config.
Fourth: Is performance the “problem” or is the host saturated?
Don’t tune the VM until you know whether the host is the bottleneck.
cr0x@server:~$ pvesh get /nodes/$(hostname)/status --output-format json-pretty
"cpu": 0.23,
"memory": {
"free": 5826394112,
"total": 34359738368,
"used": 28533344256
},
"swap": {
"free": 8589934592,
"total": 8589934592,
"used": 0
}
Decision: if memory is tight or CPU is pinned, the “slow VM” might be an overloaded host. Fix the platform first.
Common mistakes: symptom → root cause → fix
1) Symptom: “No bootable device” after import
Root cause: firmware mismatch (UEFI vs BIOS) or wrong boot order, or disk not attached on any bus.
Fix: confirm VMX firmware, then match it in Proxmox (--bios seabios or --bios ovmf + efidisk). Set boot order explicitly.
2) Symptom: Windows BSOD INACCESSIBLE_BOOT_DEVICE
Root cause: you attached the boot disk on VirtIO SCSI/Block before Windows had the VirtIO storage driver enabled.
Fix: revert the disk bus to SATA, boot, install VirtIO drivers, then switch back to VirtIO SCSI.
3) Symptom: Linux drops to initramfs or kernel panic “unable to mount root”
Root cause: initramfs doesn’t include VirtIO modules, or root= references a device name that changed (sda → vda), or fstab uses device paths.
Fix: boot via SATA temporarily, rebuild initramfs, update fstab to UUIDs, then switch to VirtIO.
4) Symptom: VM boots but has no network
Root cause: NIC model changed and driver missing (Windows) or interface name changed (Linux), or VLAN tagging/bridge is wrong.
Fix: temporarily use E1000 on Windows to regain access; on Linux, update network config for new NIC name. Validate bridge and VLAN tagging on host.
5) Symptom: Network works but services are unreachable from outside
Root cause: firewall profile changed, Windows thinks it’s on a “Public” network, or IP moved to a hidden/old adapter.
Fix: rebind IP to active adapter, check Windows firewall profile, verify routes and listening sockets.
6) Symptom: Disk performance is terrible after import
Root cause: you kept an emulated controller (IDE/SATA) or used qcow2 on top of storage that already does copy-on-write poorly, or host is I/O bound.
Fix: move to VirtIO SCSI and enable iothreads where appropriate; prefer raw on ZFS/LVM-thin; verify host latency and storage health first.
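In Proxmox terms the performance pass is usually one line: VirtIO SCSI with an iothread, plus discard if the backend can reclaim space. A sketch using this guide’s VM:
cr0x@server:~$ qm set 120 --scsihw virtio-scsi-single --scsi0 zfspool:vm-120-disk-0,iothread=1,discard=on
Disk option changes land as pending until the next full stop/start, so schedule one rather than trusting an in-guest reboot.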
7) Symptom: Time drift or weird clock jumps
Root cause: guest tooling changed (VMware Tools removed, QEMU guest agent not yet installed), wrong time sync source, or Windows time service re-evaluating hardware.
Fix: install and enable QEMU guest agent; configure NTP/chrony/systemd-timesyncd; verify Windows time service and domain time hierarchy.
8) Symptom: VM won’t start, Proxmox reports lock or config errors
Root cause: leftover lock, storage not available, wrong volume ID, or bad config edits.
Fix: inspect qm config, verify storage status, clear lock only when you’re sure nothing is running.
cr0x@server:~$ qm unlock 120
unlock VM 120
Checklists / step-by-step plan
Checklist A: Before you touch anything
- Confirm maintenance window and rollback plan (keep original VM powered off but intact).
- Record ESXi VMX: firmware, disk controller, NIC model, number of disks, and MAC addresses.
- Record IPs, routes, and any VLAN requirements.
- Confirm Proxmox storage target has space and supports the features you need (snapshots/replication).
Checklist B: Disk export and conversion
- Power off the VM (preferred).
- Copy VMDK descriptor and extents (or export OVA).
- Inspect descriptor: single vs multi-extent, adapter type notes.
- Convert with qemu-img to raw (or qcow2 if file-based storage needs it).
- Verify with qemu-img info.
Checklist C: First boot in Proxmox (stability-first)
- Create VM with conservative settings (SATA disk, E1000 NIC if Windows).
- Match firmware (OVMF vs SeaBIOS).
- Set boot order explicitly.
- Boot and confirm OS loads.
- Fix networking inside guest if NIC name/model changed.
Checklist D: Performance pass (after it boots cleanly)
- Install QEMU guest agent.
- Install VirtIO drivers (Windows).
- Switch disk to VirtIO SCSI and NIC to VirtIO.
- Reboot and validate disk + network + application.
- Take a Proxmox backup and/or snapshot only after validation.
Task 26: Enable QEMU guest agent in Proxmox config
cr0x@server:~$ qm set 120 --agent enabled=1
update VM 120: -agent enabled=1
Decision: if you rely on clean shutdowns, IP reporting, and scripted actions, enable the agent and install it inside the guest.
Task 27: Confirm guest agent is responding
cr0x@server:~$ qm guest ping 120 && echo "qemu-agent is running"
qemu-agent is running
Decision: if it’s not running, don’t assume Proxmox is wrong—install/start the agent service in the guest and check firewalling/SELinux policies where applicable.
Task 28: Take a backup after validation
cr0x@server:~$ vzdump 120 --storage local --mode snapshot --compress zstd
INFO: starting new backup job: vzdump 120 --storage local --mode snapshot --compress zstd
INFO: VM 120: starting backup
INFO: VM 120: backup finished
INFO: Finished Backup of VM 120 (00:02:41)
What it means: you now have a Proxmox-native backup you can restore quickly if the next change breaks boot.
Decision: do this before you start “tuning.” Backups are cheaper than heroics.
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
They migrated a Windows file server from ESXi to Proxmox during a routine hypervisor refresh. The engineer doing the work had done half a dozen Linux moves successfully and assumed Windows would behave similarly: “Attach the disk as VirtIO, it’s faster, we’ll install drivers later.” That assumption has a very specific BSOD code.
The VM came up, hit INACCESSIBLE_BOOT_DEVICE, and then got stuck in automated repair loops. The team tried the classic bag of tricks: changing CPU type, toggling machine type, even “maybe it’s secure boot.” Each reboot made the situation noisier, not better. Meanwhile, a department was emailing screenshots of missing shared drives like it was a new art project.
The fix was painfully simple. They moved the disk attachment back to SATA, booted successfully, installed VirtIO storage and network drivers from the ISO, and only then switched to VirtIO SCSI. Windows recovered immediately because nothing was corrupted; it was just missing the driver at boot.
The postmortem takeaway was not “VirtIO is bad.” The takeaway was: don’t change boot-critical hardware before the guest can drive it. A migration is already a big change; you don’t need to stack performance tuning on top of it during first boot.
Mini-story 2: The optimization that backfired
A different team decided they’d be clever about storage: import everything into qcow2 because “snapshots are easier,” then store those qcow2 files on a copy-on-write backend. That’s not inherently wrong—until you do it at scale with write-heavy workloads and no one models the amplification.
Within days, the symptom was “Proxmox is slow.” VM disk latency spiked. Backups took longer and started overlapping with business hours. People began blaming the network because it’s always fashionable to blame the network.
The actual problem was layered copy-on-write behavior in an environment that already had snapshots and churn. Random writes turned into a small tax festival. Metadata overhead climbed. The hosts weren’t broken; the architecture was.
They fixed it by converting high-churn volumes to raw on a more appropriate backend and keeping qcow2 only where it had a clear operational benefit. The performance returned, and so did everyone’s ability to stop guessing.
Optimization is great. Just don’t optimize two layers at once and then act surprised when physics notices.
Mini-story 3: The boring but correct practice that saved the day
One organization had a migration runbook that looked painfully conservative. Every imported VM booted first on SATA and a compatibility NIC if the OS was unknown. They took a backup immediately after first successful boot, then proceeded with driver installation and VirtIO conversion as a second phase.
It was slow. It wasn’t glamorous. And it prevented a nasty outage when a legacy Linux appliance VM booted fine on SATA but failed on VirtIO because its initramfs was missing modules and the vendor image was locked down. If they’d started with VirtIO, they would have had a dead VM and a support ticket lottery.
Because they had a backup taken at the “known good boot” point, they could revert changes quickly while they experimented with controller types. They ultimately kept that particular appliance on SATA permanently, accepting the performance hit because the VM’s workload didn’t justify the risk.
That’s the real grown-up move: choose boring reliability over theoretical throughput when the business impact doesn’t pay for the risk.
FAQ
1) Should I convert VMDK to raw or qcow2 for Proxmox?
For ZFS or LVM-thin, I usually choose raw. It’s simpler and often faster. Choose qcow2 when you specifically need file-based snapshots/compression on a directory storage.
2) Can Proxmox import an OVA directly?
Proxmox can work with the disks inside an OVA, but you often end up extracting the OVA (it’s a tar archive) and converting the VMDK(s) yourself. That gives you more control and fewer surprises.
3) My VM used VMware PVSCSI. What do I pick in Proxmox?
Use SATA for first boot if you’re cautious, then move to VirtIO SCSI after drivers are in place. PVSCSI is VMware-specific; there’s no direct “PVSCSI equivalent” you can just toggle on.
4) Windows boots on SATA but fails on VirtIO SCSI. Why?
The VirtIO storage driver isn’t installed/enabled early enough for boot. Boot on SATA, install VirtIO drivers, then switch the controller and reboot.
5) Linux boots but the NIC name changed and the network is down. What’s the clean fix?
Update your network configuration to match the new interface name, or use MAC-based matching rules. Avoid hardcoding eth0 assumptions; that ship sailed years ago.
6) What’s the most common UEFI/BIOS mistake?
Importing a UEFI-installed guest and booting it with SeaBIOS (or vice versa). Confirm with the ESXi VMX and/or the presence of an EFI System Partition.
7) Do I need the QEMU guest agent?
Need? No. Want? Yes. It improves shutdown behavior, IP reporting, and automation, and it reduces the number of “why is Proxmox lying” arguments.
8) My imported Windows VM lost its static IP. Where did it go?
Windows treats the new NIC as a different adapter. The old adapter may still “own” the IP configuration but be hidden/disconnected. Reassign the IP to the active adapter and clean up stale NICs if needed.
9) Can I keep the VM on SATA forever?
Yes. For light workloads, SATA overhead may be irrelevant. For heavy I/O, VirtIO SCSI is worth it once stability is proven.
10) How do I know whether the bottleneck is Proxmox host storage or the guest?
Start with host metrics and storage health. If the host is swapping, CPU pinned, or storage latency is high, changing guest controller types won’t save you.
Conclusion: next steps you can do today
If you want the migration to feel boring (the highest compliment in ops), do it in two phases: bootability first, performance second. Match firmware. Attach the disk on a conservative controller. Make it boot. Then add VirtIO drivers, switch buses, and only then celebrate.
Concrete next steps:
- Pick one VM and run the “stability-first” workflow: import disk, boot on SATA, validate services.
- Install QEMU guest agent and take a Proxmox backup at the first known-good point.
- For Windows: mount VirtIO ISO, install storage + net drivers, then migrate to VirtIO SCSI + VirtIO NIC.
- For Linux: verify initramfs contains VirtIO modules, convert fstab to UUIDs if needed, then switch controllers.
- Only after success: standardize templates and document your NIC/VLAN mapping so the next migration isn’t archaeology.