ESXi to Proxmox V2V Conversion: Best Methods and Pitfalls

You don’t migrate hypervisors because it’s fun. You migrate because licensing changed, hardware aged out, your VM sprawl turned into a museum,
or your CFO discovered “subscription” as a hobby. The hard part isn’t converting disks. It’s landing the VM on Proxmox in a way that boots, performs,
and doesn’t surprise you three days later when backups fail and latency spikes.

This is a production-minded guide to moving VMs from ESXi/vSphere to Proxmox VE (KVM/QEMU). We’ll cover three main approaches—OVF/OVA, direct disk conversion with qemu-img, and block-level cloning with Clonezilla—plus the traps that bite ops teams: snapshots, thin provisioning, UEFI/BIOS mismatches, Windows driver gaps, storage alignment, and networking changes that don’t show up until the first reboot window.

Interesting facts & small history (so decisions make sense)

  • OVF was designed for portability, not perfection. It’s a packaging standard (descriptor + disks), but vendors love “extensions,” and that’s where portability goes to die.
  • VMDK predates modern “cloud thinking.” It grew up in a world of SAN-backed thick disks, so thin-provisioned edge cases are mostly “works until it doesn’t.”
  • KVM became a Linux kernel module in 2007. That one change (hardware-assisted virtualization in-kernel) is why Proxmox isn’t a science project.
  • Virtio is a performance contract. It’s not just “a driver,” it’s a paravirtualized interface that trades compatibility for speed—great after drivers exist, painful before.
  • QCOW2 was built for snapshots and flexibility. RAW is dumb-fast; QCOW2 is operationally convenient. Pick based on storage backend and ops habits.
  • VMware snapshots were never backups. They’re redirect logs; leave them lying around and you’ve built a latency machine that eventually runs out of space.
  • UEFI vs BIOS is a migration era fault line. Many older VMs are BIOS/MBR; modern templates are UEFI/GPT. Converting disks is easier than converting boot assumptions.
  • ESXi tools hide some device identity complexity. When you move to KVM, MACs, NIC models, and storage controllers suddenly matter again.
  • Proxmox’s “qm importdisk” is opinionated. It wants to attach imported disks to a VM config and storage target; it’s not a generic converter, it’s a workflow tool.

One paraphrased idea from Gene Kim holds up in every migration: improve the system of work instead of relying on heroics; reliability comes from repeatable flow.

Pick your method: OVF/OVA vs qemu-img vs Clonezilla

My opinionated decision rule

  • If you can shut down the VM and you can access VMDKs: use qemu-img + Proxmox import. You’ll get control, predictable outputs, and fewer “magic” layers.
  • If you need vendor-ish portability or you’re stuck with vCenter export workflows: use OVF/OVA, but treat it as a packaging method, not a guarantee.
  • If the VM is an appliance with weird bootloaders or you don’t trust disk formats: use Clonezilla and clone at the filesystem/block level.

Tradeoffs that matter in production

  • Downtime: all three prefer downtime. Live conversions exist; they’re also where you learn new meanings of the word “inconsistent.”
  • Fidelity: Clonezilla can be highest fidelity for “what’s on disk,” but it’s blind to virtualization semantics (controllers, NIC type). OVF can preserve more metadata, sometimes.
  • Speed: qemu-img to RAW on fast local storage is usually the fastest. OVF exports can be slow because they often repackage and sometimes compress.
  • Debuggability: qemu-img and raw disk files are easy to inspect. OVA is a tarball with a descriptor—fine, but another wrapper when you’re already stressed.

Joke #1: V2V is like moving apartments—packing is easy, finding the box with the coffee maker is the actual outage.

Pre-flight: inventory, freeze, and design decisions

What you must decide before touching a single disk

  • Boot mode: BIOS or UEFI on the destination VM. Match the source unless you enjoy bootloader archaeology.
  • Disk bus: VirtIO SCSI is usually the right choice on Proxmox. SATA is the compatibility parachute. IDE is for museums.
  • NIC model: VirtIO for performance; E1000 if you need “it boots without drivers.”
  • Storage backend: ZFS, LVM-thin, or Ceph. This changes whether RAW vs QCOW2 is smart, and it changes performance tuning.
  • Networking: Bridge names, VLAN tagging, MTU, and whether MAC changes will trigger licensing or DHCP reservations.
  • Rollback plan: Keep the ESXi VM powered off but intact until the Proxmox VM survives at least one reboot and a backup cycle.

Pre-flight tasks (with commands, outputs, and decisions)

Task 1: Confirm Proxmox node virtualization support

cr0x@pve1:~$ egrep -m1 -o 'vmx|svm' /proc/cpuinfo
vmx

What it means: vmx (Intel VT-x) or svm (AMD-V) must be present.

Decision: If missing, stop. You can’t do KVM properly; you’ll get QEMU TCG emulation and sadness.
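
If vmx/svm shows up but you want to be sure KVM itself is usable on the node, two quick follow-up checks help (a minimal sketch; the module name depends on CPU vendor):

cr0x@pve1:~$ lsmod | grep -E '^kvm(_intel|_amd)?'
cr0x@pve1:~$ ls -l /dev/kvm    # must exist and be readable/writable, otherwise guests fall back to TCG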

Task 2: Check Proxmox storage types and free space

cr0x@pve1:~$ pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        196473240        5123456       185678912        2.61%
local-zfs         zfspool active       1889785616      934281216       955504400       49.44%

What it means: You have a directory store (local) and ZFS pool (local-zfs) with about 955GB free.

Decision: Pick the target. For ZFS, RAW is fine because ZFS already does copy-on-write and snapshots.

Task 3: Inventory source VM hardware in ESXi (from exported config)

cr0x@jump:~$ grep -E 'firmware|scsi|ethernet|nvram|virtualHW' vm1/vm1.vmx
virtualHW.version = "14"
firmware = "efi"
scsi0.virtualDev = "pvscsi"
ethernet0.virtualDev = "vmxnet3"
nvram = "vm1.nvram"

What it means: UEFI firmware, PVSCSI controller, vmxnet3 NIC.

Decision: On Proxmox, plan UEFI (OVMF) and VirtIO SCSI + VirtIO net (or E1000 temporarily for Windows if drivers aren’t staged).

Task 4: Detect ESXi snapshots before exporting

cr0x@esxi1:~$ vim-cmd vmsvc/getallvms | grep -i vm1
12   vm1   [datastore1] vm1/vm1.vmx   ubuntu64Guest   vmx-14

cr0x@esxi1:~$ vim-cmd vmsvc/snapshot.get 12
Get Snapshot:
|--ROOT
   |--Snapshot Name        : pre-upgrade
   |--Snapshot Id          : 1
   |--Snapshot Created On  : 2025-12-01
   |--Snapshot State       : powered off

What it means: Snapshots exist. Exports may capture delta chains or consolidate implicitly.

Decision: Consolidate snapshots before conversion unless you have a very specific reason not to.
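
If you decide to consolidate, the ESXi shell can do it directly; a minimal sketch using the VM id from above (removeall deletes the snapshots and merges the deltas back into the base disks, so make sure the datastore has headroom):

cr0x@esxi1:~$ vim-cmd vmsvc/snapshot.removeall 12
cr0x@esxi1:~$ vim-cmd vmsvc/snapshot.get 12    # should now return an empty snapshot tree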

Task 5: Confirm VM is shut down (consistency over heroics)

cr0x@esxi1:~$ vim-cmd vmsvc/power.getstate 12
Retrieved runtime info
Powered off

What it means: Powered off. This is the clean state you want for a disk-level conversion.

Decision: Proceed. If it’s powered on, schedule downtime or use guest-level quiescing and accept more risk.
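
If the VM is still running, a guest-initiated shutdown from the ESXi shell keeps filesystems consistent; a sketch with the same VM id (power.off is the hard-stop fallback):

cr0x@esxi1:~$ vim-cmd vmsvc/power.shutdown 12    # requires VMware Tools in the guest
cr0x@esxi1:~$ vim-cmd vmsvc/power.getstate 12    # repeat until it reports Powered off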

Method 1: OVF/OVA export + Proxmox import

When OVF/OVA is the right move

  • You have vCenter workflows and auditors that want “an export artifact.”
  • You’re moving a small number of VMs and convenience matters more than maximal speed.
  • You want a single file (OVA) you can archive, checksum, and hand to another team.

Where OVF/OVA hurts

  • NIC/controller metadata doesn’t map cleanly to KVM devices.
  • Some exports produce “streamOptimized” VMDKs, which are valid but add a conversion step.
  • OVF descriptors can include VMware-specific sections; Proxmox ignores most of it.
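
If you’re producing the artifact yourself rather than receiving one, ovftool is the usual exporter. A minimal sketch; the host name, credentials, and paths are placeholders, and the VM should be powered off before exporting:

cr0x@jump:~$ ovftool --noSSLVerify vi://root@esxi1.example.com/vm1 /exports/vm1.ova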

Practical OVF/OVA tasks

Task 6: Inspect an OVA before you trust it

cr0x@pve1:~$ tar -tf vm1.ova | head
vm1.ovf
vm1.mf
vm1-disk1.vmdk

What it means: OVA is a tar containing an OVF descriptor, manifest, and disk.

Decision: If disks are missing or you see multiple disks you didn’t expect, stop and verify the source VM layout.

Task 7: Validate the OVF manifest checksums

The .mf manifest lists entries like “SHA256(vm1.ovf)= <hash>”, which sha1sum/sha256sum -c can’t parse directly, so reformat it on the fly (swap in sha1sum if your export used SHA1):

cr0x@pve1:~$ awk -F'[()= ]+' '{print $3"  "$2}' vm1.mf | sha256sum -c
vm1.ovf: OK
vm1-disk1.vmdk: OK

What it means: Integrity checks pass. If it fails, don’t “try anyway.” Re-export or re-transfer.

Decision: Proceed only when checksums are OK; corruption here wastes hours later.

Task 8: Create an empty VM shell in Proxmox that matches boot mode

cr0x@pve1:~$ qm create 120 --name vm1 --memory 8192 --cores 4 --machine q35 --bios ovmf --net0 virtio,bridge=vmbr0
create VM 120: success

What it means: VM 120 exists, UEFI firmware, VirtIO net.

Decision: If the source was BIOS, use --bios seabios instead. Don’t “upgrade” firmware during migration unless you planned it.
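
With --bios ovmf you’ll also want an EFI vars disk so boot entries persist across reboots; a minimal sketch (efitype=4m is the common choice, and pre-enrolled-keys=0 skips the preloaded Secure Boot keys unless you need them):

cr0x@pve1:~$ qm set 120 --efidisk0 local-zfs:1,efitype=4m,pre-enrolled-keys=0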

Task 9: Convert the OVA VMDK to RAW or QCOW2 and import

cr0x@pve1:~$ tar -xf vm1.ova vm1-disk1.vmdk
cr0x@pve1:~$ qemu-img info vm1-disk1.vmdk
image: vm1-disk1.vmdk
file format: vmdk
virtual size: 80G (85899345920 bytes)
disk size: 22G
cluster_size: 65536
Format specific information:
    cid: 2155942229
    parent cid: 4294967295
    create type: streamOptimized

cr0x@pve1:~$ qemu-img convert -p -f vmdk -O raw vm1-disk1.vmdk /var/lib/vz/images/120/vm-120-disk-0.raw
    (progress output omitted)

What it means: It’s a streamOptimized VMDK. Conversion is required; Proxmox won’t “run” that directly.

Decision: Use RAW for ZFS/LVM-thin backends unless you explicitly want QCOW2 features on dir storage. Note the converted file is sitting on the local directory store; move it onto the real target with qm importdisk 120 /var/lib/vz/images/120/vm-120-disk-0.raw local-zfs (same workflow as Task 13) so the local-zfs:vm-120-disk-0 volume referenced in the next task actually exists.

Task 10: Attach the imported disk and set boot order

cr0x@pve1:~$ qm set 120 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-120-disk-0
update VM 120: -scsihw virtio-scsi-pci -scsi0 local-zfs:vm-120-disk-0

cr0x@pve1:~$ qm set 120 --boot order=scsi0
update VM 120: -boot order=scsi0

What it means: Disk attached as VirtIO SCSI and used as boot device.

Decision: If Windows won’t boot, temporarily attach as SATA and fix drivers, then switch back.

Method 2: VMDK → QCOW2/RAW with qemu-img (my default)

Why this method wins most of the time

If you can get the VMDK(s) off the datastore (or access them over NFS/SSH from an ESXi shell), qemu-img gives you
determinism. You control formats, you can inspect the chain, you can choose sparse or fully allocated targets, and you can run the
conversion on the Proxmox node right next to the destination storage.

The two conversions you actually want

  • VMDK → RAW: best when the target storage already does snapshots/COW (ZFS, Ceph RBD). Fast, simple, fewer layers.
  • VMDK → QCOW2: useful on directory storage when you want VM-level snapshots (and accept overhead), or for portability.

Practical qemu-img tasks

Task 11: Confirm the VMDK isn’t a snapshot chain you forgot about

cr0x@pve1:~$ qemu-img info -U vm1-flat.vmdk
image: vm1-flat.vmdk
file format: raw
virtual size: 80G (85899345920 bytes)
disk size: 80G

What it means: This one is a raw “flat” extent, not a delta chain. That’s what you want.

Decision: If you see a parent/child relationship (backing file), stop and consolidate on ESXi first.
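
If you’re not sure whether a chain exists, run qemu-img against the descriptor (or the delta itself) rather than the flat extent, and let it walk the chain; a quick sketch (the -000001 suffix is how ESXi typically names snapshot deltas). Any “backing file” lines in the output mean you’re looking at a delta, not the base:

cr0x@pve1:~$ qemu-img info --backing-chain vm1-000001.vmdk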

Task 12: Convert with progress and reasonable defaults

cr0x@pve1:~$ qemu-img convert -p -f vmdk -O raw vm1.vmdk /tank/images/120/vm-120-disk-0.raw
    (progress output omitted)

What it means: Disk conversion runs. Time depends on disk size, actual used blocks, and storage speed.

Decision: If the conversion is slow, don’t guess. Use the fast diagnosis playbook below to identify whether you’re CPU, network, or disk bound.

Task 13: Import disk into Proxmox storage properly (qm importdisk)

cr0x@pve1:~$ qm importdisk 120 /tank/images/120/vm-120-disk-0.raw local-zfs
importing disk '/tank/images/120/vm-120-disk-0.raw' to VM 120 ...
transferred 0.0 B of 80.0 GiB (0.00%)
transferred 80.0 GiB of 80.0 GiB (100.00%)
Successfully imported disk as 'local-zfs:vm-120-disk-0'

What it means: Proxmox imported the disk into the target storage and created a volume reference.

Decision: Prefer qm importdisk when possible; it keeps storage bookkeeping sane.
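
qm importdisk can also read the VMDK directly and convert during import, which skips the intermediate RAW copy; a sketch using the same VM and storage. The tradeoff is losing the chance to inspect the converted image before it lands on the destination storage:

cr0x@pve1:~$ qm importdisk 120 vm1.vmdk local-zfs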

Task 14: Confirm the VM config references the right disk and controller

cr0x@pve1:~$ qm config 120
bios: ovmf
boot: order=scsi0
cores: 4
memory: 8192
name: vm1
net0: virtio=DE:AD:BE:EF:12:20,bridge=vmbr0
scsi0: local-zfs:vm-120-disk-0
scsihw: virtio-scsi-pci

What it means: Disk is on scsi0 with VirtIO SCSI. UEFI is enabled.

Decision: If the boot disk isn’t on the controller you expect, fix it now before first boot.

Method 3: Clonezilla (when you need “dumb but reliable”)

When Clonezilla is the right tool

  • Appliance VMs with odd partitioning, custom bootloaders, or vendor kernels you don’t want to touch.
  • Situations where disk formats are messy (snapshot chains, streamOptimized exports, unknown VMDK variants).
  • When you need to migrate across entirely different storage layouts and prefer file-level copies of used blocks.

What Clonezilla will not do for you

  • It won’t fix VirtIO drivers in Windows.
  • It won’t map your VMware NIC model to Proxmox semantics.
  • It won’t make your boot mode match; you still have to configure BIOS/UEFI correctly.

Practical Clonezilla tasks (and the operational angle)

Task 15: Create the destination VM with conservative devices for first boot

cr0x@pve1:~$ qm create 130 --name vm1-clone --memory 8192 --cores 4 --machine q35 --bios seabios --net0 e1000,bridge=vmbr0 --sata0 local-zfs:80
create VM 130: success

What it means: A BIOS VM with an E1000 NIC and SATA disk. Boring, compatible defaults.

Decision: Use this if you’re unsure about guest drivers. Once stable, switch to VirtIO for performance.

Task 16: Confirm the new disk exists and is the right size before cloning

cr0x@pve1:~$ lsblk -o NAME,SIZE,TYPE,MODEL | grep -E 'zd|sd'
sda   447.1G disk  Samsung_SSD
zd0    80G   disk

What it means: Proxmox created a virtual disk (ZFS zvol shows as zd0).

Decision: If size is wrong, delete and recreate. Cloning into the wrong size is a great way to meet your on-call future self.

With Clonezilla you typically boot both source and destination in some controlled fashion (ISO boot, network share for images, etc.).
The commands are less “type this” and more “choose this menu item,” so the operational task is about choosing conservative virtual hardware first.
Once the guest boots, you can iterate devices and drivers.
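
On the Proxmox side, “boot the destination from Clonezilla” just means attaching the ISO and putting it first in the boot order; a minimal sketch (the ISO filename is a placeholder for whatever you uploaded to local storage):

cr0x@pve1:~$ qm set 130 --ide2 local:iso/clonezilla-live.iso,media=cdrom
cr0x@pve1:~$ qm set 130 --boot 'order=ide2;sata0'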

Guest OS specifics: Windows, Linux, appliances

Windows: drivers and boot are the whole game

Windows migrations fail for two reasons: storage controller drivers and boot mode mismatch. Everything else is secondary.
If your Windows VM used VMware PVSCSI and vmxnet3, it won’t automatically have VirtIO storage and VirtIO net drivers installed.
You can still boot if you choose compatible devices first (SATA + E1000), install VirtIO drivers, then switch controllers.

Task 17: Add VirtIO driver ISO and a “safe NIC” for first boot

cr0x@pve1:~$ qm set 120 --ide2 local:iso/virtio-win.iso,media=cdrom
update VM 120: -ide2 local:iso/virtio-win.iso,media=cdrom

cr0x@pve1:~$ qm set 120 --net0 e1000,bridge=vmbr0
update VM 120: -net0 e1000,bridge=vmbr0

What it means: VirtIO drivers are available to the guest; NIC is set to E1000 for compatibility.

Decision: Boot, install VirtIO drivers, then switch net0 back to VirtIO.

Task 18: After driver install, switch disk to VirtIO SCSI (maintenance window)

cr0x@pve1:~$ qm set 120 --scsihw virtio-scsi-pci
update VM 120: -scsihw virtio-scsi-pci

What it means: Controller type set. You may still need to move the disk from SATA to SCSI in config if you used SATA as a bridge.

Decision: Change one device class at a time: NIC, then storage. Don’t stack uncertainty.
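
Once the VirtIO SCSI driver is confirmed inside Windows, the disk move is detach and reattach; a sketch assuming the boot disk currently sits on sata0 as local-zfs:vm-120-disk-0. A common trick is to attach a small throwaway scsi1 disk first so Windows binds the driver before you move the boot volume:

cr0x@pve1:~$ qm set 120 --delete sata0                                     # disk becomes unused0, data stays
cr0x@pve1:~$ qm set 120 --scsi0 local-zfs:vm-120-disk-0 --boot order=scsi0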

Linux: usually fine, except when it isn’t

Modern Linux kernels include VirtIO drivers. The common failure mode is initramfs not including the right storage driver if the VM was ancient,
or the network interface name changes (predictable network names) and your static config points at a now-nonexistent interface.
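
Before first boot (or from a rescue shell after a failed one), it’s worth confirming the initramfs actually carries the VirtIO modules; a sketch for a Debian-family guest (RHEL-family uses lsinitrd and dracut -f instead):

cr0x@vm1:~$ lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'virtio_(blk|scsi|net|pci)'
cr0x@vm1:~$ sudo update-initramfs -u    # rebuild if the modules are missing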

Task 19: If Linux boots but networking is dead, confirm link state from Proxmox

cr0x@pve1:~$ qm monitor 120
Enter QEMU Monitor for VM 120 - type 'help' for help
(qemu) info network
hub 0
  netdev net0, peer=(null)
    virtio-net-pci.0: mac=DE:AD:BE:EF:12:20
    link=up
(qemu) quit

What it means: From QEMU’s view, link is up. If the guest has no network, it’s likely inside-guest config.

Decision: Check udev/predictable naming changes, static IP configs, and firewall rules inside the guest.
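
Inside the guest, the usual culprit is that the ESXi-era interface name (often ens192 with vmxnet3) no longer exists and the network config still references it; a sketch for a Debian-style guest, with paths and names being typical rather than guaranteed:

cr0x@vm1:~$ ip -br link                                              # note the new name, often ens18 on q35 with VirtIO
cr0x@vm1:~$ grep -rn ens192 /etc/netplan /etc/network 2>/dev/null    # find configs pinned to the old name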

Appliances: treat them like black boxes

Vendor appliances often pin kernel modules to specific device IDs. Your best strategy is: conservative virtual hardware first (SATA, E1000),
boot, confirm services, then attempt VirtIO only if the vendor supports it.

Storage on Proxmox: ZFS/LVM-thin/Ceph and why it matters

ZFS: great defaults, sharp edges

ZFS is fantastic for Proxmox because snapshots and replication are native, and scrubs tell you the truth about your disks.
But ZFS is also very honest: if you overcommit RAM, starve ARC, or pick the wrong recordsize/volblocksize for zvols, it will let you feel it.
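
Two properties worth checking on imported zvols, since volblocksize is fixed at creation time; a sketch assuming the default rpool/data layout behind local-zfs (adjust the dataset path for your pool):

cr0x@pve1:~$ zfs get volblocksize,compression rpool/data/vm-120-disk-0
cr0x@pve1:~$ grep -A3 'zfspool: local-zfs' /etc/pve/storage.cfg    # the blocksize option here sets volblocksize for new zvols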

Task 20: Check ZFS pool health before importing large disks

cr0x@pve1:~$ zpool status -x
all pools are healthy

What it means: No known errors. Importing to a degraded pool is how you turn migration into data recovery.

Decision: If not healthy, fix hardware/pool issues first.

Task 21: Watch ZFS write amplification symptoms during conversion

cr0x@pve1:~$ zpool iostat -v 2
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        890G   890G     12    420   3.1M   210M
  mirror    890G   890G     12    420   3.1M   210M
    sda        -      -      6    210   1.6M   105M
    sdb        -      -      6    210   1.5M   105M

What it means: Writes are dominating during conversion. That’s normal; conversions are write-heavy.

Decision: If bandwidth is far lower than expected, you’re likely constrained by compression, sync settings, or underlying disks.

LVM-thin: simple and fast, fewer features

LVM-thin is straightforward and performs well for many workloads. Snapshots exist but are not the same operational experience as ZFS.
RAW images on LVM-thin are typical; QCOW2 adds overhead without giving you much.

Ceph: great at scale, punishing when mis-sized

Ceph RBD likes RAW. Also, Ceph magnifies small mistakes: wrong network, wrong MTU, wrong replication assumptions, and your “simple migration”
becomes a storage latency seminar. If you’re migrating into Ceph, benchmark and validate first.
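
“Benchmark and validate first” can be as simple as confirming cluster health and running a short write benchmark against the target pool before committing a fleet to it; a sketch where the pool name is a placeholder:

cr0x@pve1:~$ ceph -s                                       # anything other than HEALTH_OK means stop here
cr0x@pve1:~$ rados bench -p vmpool 30 write --no-cleanup
cr0x@pve1:~$ rados -p vmpool cleanup                       # remove the benchmark objects afterwards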

Joke #2: Ceph is like a group project—when it works, it’s beautiful, and when it doesn’t, everyone blames networking.

Networking migration: bridges, VLANs, MACs, and surprise firewalls

Bridge mapping: vmbr0 is not a universal constant

On ESXi, port groups abstract a lot. On Proxmox, Linux bridges and VLAN-aware bridges are explicit. That’s good. It’s also where mistakes are visible.
A VM can be “running” and still be on the wrong L2 island.

Task 22: Confirm bridge and VLAN-aware settings on Proxmox

cr0x@pve1:~$ cat /etc/network/interfaces | sed -n '1,120p'
auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.11/24
        gateway 10.10.10.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes

What it means: vmbr0 is VLAN-aware. That’s the common “do it once, do it right” setting.

Decision: If you need tagged VLANs per VM, keep VLAN-aware and set tags on VM NICs.

Task 23: Confirm VM NIC tag and MAC in config

cr0x@pve1:~$ qm config 120 | grep -E '^net0'
net0: virtio=DE:AD:BE:EF:12:20,bridge=vmbr0,tag=120

What it means: VM is on VLAN 120 via tag.

Decision: If the guest can’t reach gateway, confirm upstream switch trunking and VLAN presence. Don’t assume the network team “already did it.”
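
You can verify the tag actually landed on the host side without touching the switch. Proxmox names the VM’s tap device tap<vmid>i<index>, so for net0 on VM 120 (a sketch, assuming the VM is running):

cr0x@pve1:~$ bridge vlan show dev tap120i0    # should list VLAN 120 on that port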

Task 24: See if the host is dropping packets due to firewall

cr0x@pve1:~$ pve-firewall status
Status: enabled/running

cr0x@pve1:~$ pve-firewall localnet
allow 10.0.0.0/8
allow 192.168.0.0/16

What it means: Firewall is on. Localnet rules exist; VM traffic rules are separate, but host firewall state matters during debugging.

Decision: If debugging connectivity, temporarily disable VM firewall on the VM NIC to isolate variables, then re-enable with correct rules.

Fast diagnosis playbook (find the bottleneck fast)

When a conversion is slow or the VM boots but performs like it’s running on a toaster, don’t start “tuning.”
Start measuring. The goal is to decide whether you’re limited by disk, CPU, network, or guest drivers.

First: is the conversion/boot blocked on storage?

  • Check ZFS/LVM/Ceph health and current I/O.
  • Look for small random writes caused by QCOW2 on top of COW storage.
  • Check if you’re accidentally converting a sparse disk into a fully allocated one on slow storage.

Task 25: Identify storage latency and saturation (host-level)

cr0x@pve1:~$ iostat -x 2 3
Linux 6.8.12-4-pve (pve1) 	12/28/2025 	_x86_64_	(16 CPU)

Device            r/s     w/s   rkB/s   wkB/s  avgrq-sz avgqu-sz   await  %util
sda              2.1   210.5    92.4  105432.0   998.3     2.41   11.5   92.3
sdb              2.0   209.8    88.7  105120.4   995.1     2.38   11.3   91.9

What it means: ~92% utilization and ~11ms await. Storage is busy; conversion will be limited by disk.

Decision: Throttle concurrency (one conversion at a time), convert to local NVMe first, or schedule during low I/O windows.

Second: are you CPU-bound during conversion (compression, checksums, encryption)?

Task 26: Check CPU saturation during qemu-img convert

cr0x@pve1:~$ mpstat -P ALL 2 2
Linux 6.8.12-4-pve (pve1) 	12/28/2025 	_x86_64_	(16 CPU)

12:02:10 PM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %idle
12:02:12 PM  all   42.1   0.0   8.3    18.7    0.0   0.7   30.2

What it means: Significant iowait; not purely CPU-bound. CPU has headroom but storage is gating.

Decision: Focus on storage path, not CPU tuning.

Third: if the VM runs poorly, is it drivers (VirtIO) or storage layout?

  • Windows without VirtIO drivers will limp on IDE/SATA and E1000, and you’ll think Proxmox is slow. It’s not. You are.
  • Linux with wrong IO scheduler or missing TRIM on SSD-backed thin storage can degrade over time.

Task 27: Confirm the VM is using the intended disk cache and discard settings

cr0x@pve1:~$ qm config 120 | grep -E 'scsi0|sata0|cache|discard|iothread'
scsi0: local-zfs:vm-120-disk-0

What it means: No explicit cache/discard settings shown; defaults apply.

Decision: For SSD-backed storage with thin provisioning, consider enabling discard=on and an IO thread for heavy disks after confirming guest TRIM support.
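
If you decide to enable them, the flags live on the disk line, and iothread only takes effect on SCSI disks with the virtio-scsi-single controller; a sketch, assuming the guest supports TRIM:

cr0x@pve1:~$ qm set 120 --scsihw virtio-scsi-single
cr0x@pve1:~$ qm set 120 --scsi0 local-zfs:vm-120-disk-0,discard=on,ssd=1,iothread=1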

Common mistakes: symptoms → root cause → fix

1) VM won’t boot: “No bootable device”

Symptom: Proxmox console shows UEFI shell or BIOS boot failure.

Root cause: BIOS/UEFI mismatch (source was BIOS, destination is OVMF, or vice versa). Or boot order points to the wrong disk.

Fix: Match firmware to source. Set correct boot order. If needed, add EFI disk for UEFI guests and verify the boot entry.

2) Windows bluescreens early (INACCESSIBLE_BOOT_DEVICE)

Symptom: Windows BSOD right after the boot logo.

Root cause: Storage controller changed to VirtIO without drivers present.

Fix: Revert to SATA temporarily, boot, install VirtIO storage driver from ISO, then switch to VirtIO SCSI.

3) VM boots but network is dead

Symptom: Guest has no IP, can’t ping gateway, or DHCP doesn’t respond.

Root cause: VLAN tag missing/wrong, bridge mismatch, MAC changed and DHCP reservations reject it, or guest OS renamed interface (Linux).

Fix: Confirm bridge and tag in qm config. Confirm VLAN trunking. Keep MAC same if licensing/DHCP expects it. Fix guest interface config.

4) Conversion is painfully slow

Symptom: qemu-img convert crawls, host feels sluggish.

Root cause: Storage bottleneck, conversion to fully allocated RAW on slow disks, or competing I/O (other VMs, scrub, backups).

Fix: Run one conversion at a time; move conversion workload to fast local SSD/NVMe; avoid QCOW2-on-ZFS if you don’t need it.

5) VM is fast for a day, then performance degrades

Symptom: Latency increases, especially on write-heavy workloads.

Root cause: Thin provisioning without discard/TRIM, snapshots accumulating, or guest write cache interacting poorly with storage sync settings.

Fix: Enable discard where appropriate, keep snapshot hygiene, and align caching policy with your storage durability model.

6) Disk appears “bigger” or “smaller” than expected

Symptom: Guest sees wrong capacity or filesystem errors.

Root cause: Wrong disk chosen in multi-disk VM, conversion of delta instead of base, or mismatched sector sizes/geometry assumptions in old OSes.

Fix: Verify mapping of each VMDK to each Proxmox disk. Consolidate snapshots. For ancient guests, prefer Clonezilla or keep conservative controllers.

7) Imported disk consumes full size on ZFS even when thin on ESXi

Symptom: 2TB thin disk becomes a scary allocation event.

Root cause: You converted to RAW in a way that wrote zeroed blocks, or you imported onto a storage type that doesn’t preserve sparseness the way you assumed.

Fix: Use sparse-aware conversions and verify actual used blocks. For large sparse disks, consider QCOW2 on dir storage or careful RAW import into thin-provisioned zvols.
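
“Verify actual used blocks” is a one-liner on each end; a sketch comparing the source image’s allocation with what the zvol consumes (dataset path assumes the default rpool/data layout). If refreservation equals volsize, the storage isn’t thin-provisioning the zvol at all:

cr0x@pve1:~$ qemu-img info vm1.vmdk | grep 'disk size'    # data actually allocated in the source image
cr0x@pve1:~$ zfs list -o name,volsize,used,refreservation rpool/data/vm-120-disk-0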

Checklists / step-by-step plan

Plan A (recommended): qemu-img conversion with controlled hardware changes

  1. Inventory source VM: firmware (BIOS/UEFI), disks, controllers, NIC type, snapshots.
  2. Consolidate snapshots: don’t migrate delta chains unless you enjoy ambiguity.
  3. Shut down VM: consistency beats cleverness.
  4. Copy VMDKs to conversion host: ideally the Proxmox node or a nearby staging box.
  5. Inspect with qemu-img: confirm format and detect backing files.
  6. Convert to RAW (ZFS/Ceph) or QCOW2 (dir): pick format based on backend, not vibes.
  7. Create VM shell in Proxmox: match firmware, start with compatible devices if Windows.
  8. Import disk: qm importdisk when possible; attach as desired bus.
  9. First boot in “safe mode” hardware: E1000 + SATA if drivers are unknown.
  10. Install VirtIO drivers: then switch NIC to VirtIO, then switch disk to VirtIO SCSI.
  11. Validation: reboot test, service health, application checks, backup run.
  12. Cutover: DNS/IP changes, monitoring updates, decommission only after stable.

Plan B: OVF/OVA when you need a portable artifact

  1. Export OVA/OVF from vCenter/ESXi.
  2. Verify checksums in the manifest file.
  3. Extract disks and inspect with qemu-img info.
  4. Convert to RAW/QCOW2; don’t try to “just attach” streamOptimized VMDKs.
  5. Create Proxmox VM with matching firmware, attach disk, fix devices/drivers.

Plan C: Clonezilla for appliances and weirdness

  1. Create destination VM with conservative devices (BIOS + SATA + E1000).
  2. Boot Clonezilla and clone from source disk image/share to destination disk.
  3. Boot destination, validate application health.
  4. Only then attempt VirtIO improvements if supported.

Three corporate mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption

A finance-adjacent team migrated a handful of Windows application servers from ESXi to Proxmox. They did what looked reasonable:
convert disks, create VMs, and “modernize” by switching everything to VirtIO immediately. The first boot didn’t work, so they toggled a few settings,
tried again, and eventually got one server to boot. They declared victory and scheduled the rest.

Cutover night came. Three VMs boot-looped with INACCESSIBLE_BOOT_DEVICE. The one that booted had no network because they also flipped NICs to VirtIO,
and the OS didn’t have the driver. Their assumption was simple: “Windows will discover drivers like Linux does.” It will not.

The immediate fix was messy but effective: roll storage back to SATA for boot, attach the VirtIO driver ISO, install drivers in the guest, then migrate
controllers one at a time. The bigger fix was cultural: treat device changes like schema migrations. Stage them, verify, and only then move to the next step.

The postmortem was short and useful: they didn’t have a standard “first boot hardware profile.” After that, every Windows V2V started with SATA + E1000,
then graduated to VirtIO once drivers were confirmed. It was slower. It was also repeatable. Production likes repeatable.

Mini-story 2: The optimization that backfired

Another org tried to be clever about storage. They were migrating a fleet of Linux VMs and decided QCOW2 everywhere because “snapshots are nice.”
Their Proxmox storage was ZFS. They had effectively stacked copy-on-write on top of copy-on-write. It worked in testing, because tests were short and polite.
Production was neither.

The first sign was elevated write latency during peak hours. Databases complained. CI runners slowed down. Then backup windows started slipping.
The team responded with more tuning: cache modes, queue depths, and a handful of sysctl changes. Some changes helped briefly, mostly by moving pain around.

Eventually they did the boring measurement: host iostat, ZFS iostat, and VM-level latency. The pattern was clear—random write amplification and metadata churn.
QCOW2 was doing its job; ZFS was also doing its job; together they were doing too much job.

They migrated hot VMs to RAW on zvols and reserved QCOW2 for a few cases where the portability mattered more than performance. Write latency normalized.
The lesson wasn’t “QCOW2 is bad.” The lesson was: don’t double-stack features unless you’re willing to pay for them in latency and complexity.

Mini-story 3: The boring but correct practice that saved the day

A healthcare-adjacent company had to migrate a mixed bag of VMs under tight change windows. They weren’t fancy. They were disciplined.
Every VM got the same pre-flight checklist: snapshot status, shutdown verification, export checksum validation, and a documented rollback step.
Every migrated VM had to pass “two reboots and one backup” before the ESXi copy could be removed.

Midway through the project, a storage switch began dropping packets intermittently on the migration network. Nothing fully “failed.”
Instead, a few OVA transfers arrived corrupted. The team didn’t notice immediately—until the manifest verification started failing.
That single boring step prevented them from importing corrupted disks and then debugging phantom filesystem errors.

They re-transferred the affected exports after isolating the network path. The migration schedule slipped a bit, but production stayed stable.
Nobody had to invent a new ritual. They just followed the checklist and let it catch the problem early.

If you’re looking for the moral: checksums are cheaper than incident bridges. Also, the most “enterprise” tool in the room is often a text file
where you consistently write down what you did.

FAQ

1) Can Proxmox directly import an OVF and recreate the VM hardware?

Not in the “like vSphere does” sense. qm importovf can read a descriptor and create a VM with basic hardware (CPU, memory, disks), but it won’t map NICs or VMware-specific devices. In practice you still create or finish the VM shell yourself and expect to map devices manually.

2) Should I use QCOW2 or RAW on Proxmox?

RAW on ZFS and Ceph is usually the right choice. QCOW2 is fine on directory storage when you want image-level snapshots or portability.
Don’t stack QCOW2 on top of ZFS unless you know why you’re paying that overhead.

3) What’s the safest NIC model for first boot?

E1000 is the common compatibility choice, especially for Windows. After drivers are installed, switch to VirtIO for performance.

4) What’s the safest disk controller for first boot on Windows?

SATA. Boot with SATA, install VirtIO drivers, then switch the disk to VirtIO SCSI (one change at a time).

5) My ESXi VM used UEFI. What do I do on Proxmox?

Use OVMF (--bios ovmf) and add an EFI disk so firmware settings and boot entries persist. Keep the boot mode consistent or you’ll land in an EFI shell.

6) Can I migrate a VM with snapshots?

You can, but you probably shouldn’t. Consolidate snapshots first so you convert a single coherent disk. Snapshot chains are where conversions get “creative.”

7) Why does my imported disk consume more space than the thin VMDK?

You may have converted in a way that wrote blocks that were previously sparse, or your target storage allocates differently.
Verify sparseness expectations and measure real allocation on the destination.

8) How do I know if the bottleneck is network, disk, or CPU during conversion?

Measure on the Proxmox node: iostat for disk saturation, mpstat for iowait vs CPU, and storage-specific tools (like zpool iostat).
Don’t “tune” until you know what’s saturated.

9) Do MAC addresses need to be preserved?

Sometimes. DHCP reservations, licensing systems, and firewall policies might key off MAC. If you don’t know, preserve MACs and change them later with intent.

Next steps you can actually do

  1. Pick your method per VM: default to qemu-img; use OVA when you need an artifact; use Clonezilla for appliances and weird bootloaders.
  2. Standardize a “first boot profile”: BIOS/UEFI match, SATA + E1000 for Windows if drivers aren’t staged, then graduate to VirtIO.
  3. Make conversion measurable: run iostat/mpstat/zpool iostat during the first few conversions and write down what “normal” looks like.
  4. Require two reboots and one backup before you decommission the ESXi original. If you skip this, the universe will teach you why it exists.
  5. Document device mappings (disk1 → scsi0, disk2 → scsi1, VLAN tags, MACs). Migration failures love ambiguity.

If you do the boring parts—snapshot hygiene, firmware matching, driver staging, checksum verification—most V2V conversions become routine.
Not glamorous. Predictable. Which is the nicest thing you can say about production changes.
