You don’t need vCenter to move a VM. You need a plan, a few command-line tools, and the discipline to not “just try it and see” on the only copy of production. The pain point is always the same: you exported something from ESXi, Proxmox imported something else, and now the guest boots to a blinking cursor or a network card that doesn’t exist.
This is the field guide for doing it cleanly: export OVF/OVA from a standalone ESXi host, import into Proxmox, and fix the predictable breakage—disk controllers, NIC models, boot mode, and drivers—without turning the migration into a weekend hostage situation.
Interesting facts and context (the stuff that explains the weirdness)
- OVF is a packaging spec, not magic. OVF describes the VM (CPU, RAM, devices). The disks are usually separate VMDK files or bundled into an OVA (a tar archive).
- OVA is just tar. If you can’t import an OVA, you can almost always extract it with standard tools and work with the VMDK directly.
- VMDK has flavors. On the datastore, ESXi keeps a descriptor plus a flat VMFS extent; OVF/OVA exports typically produce “streamOptimized” VMDKs, and hosted products use types like “monolithicSparse”. Some converters and importers hate some subtypes.
- VMXNET3 and PVSCSI are VMware-specific performance devices. They’re great on ESXi, but Proxmox/QEMU won’t emulate them natively. You’ll switch to VirtIO or Intel E1000, and that has driver consequences.
- VirtIO is fast because it’s paravirtualized. The guest OS needs drivers. Linux usually has them already; Windows often doesn’t (unless it’s modern and already had VirtIO installed).
- UEFI vs BIOS is the silent assassin. ESXi VMs can be BIOS or EFI; Proxmox can do both, but you must match. Booting a BIOS-installed OS with UEFI firmware is a quick way to meet “no bootable device”.
- Snapshots complicate exports. If the VM has snapshots, the “current disk” is a chain of delta files. Export tools sometimes consolidate; sometimes they don’t. Always verify what you exported.
- Thin vs thick affects time and storage. A “thin” VMDK may convert to a “thick” raw image if you’re not careful, ballooning storage and migration time.
- Proxmox uses QEMU device models. The “same” VM on a new hypervisor is still different hardware. Operating systems are picky toddlers about their disk controller identities.
Decisions that matter before you touch anything
1) Decide whether you can tolerate downtime
If you don’t have shared storage replication or a block-level migration tool, an OVF/OVA export is a cold migration by default. You can do a semi-warm approach (take an outage window, export, import, test, and cut over), but don’t pretend it’s live migration. Your business stakeholders will interpret “migration” as “zero downtime,” because optimism is cheaper than engineering.
2) Decide the target disk format: raw vs qcow2
On Proxmox, you’ll usually land disks on:
- ZFS: raw zvol is common; qcow2 is possible but often pointless. Raw is simpler and faster for most workloads.
- LVM-thin: raw is typical (thin-provisioned at the storage layer).
- Directory storage (ext4/xfs): qcow2 is convenient (snapshots), raw is fine (simplicity).
My opinion: if your Proxmox storage is ZFS or LVM-thin, prefer raw. If it’s a directory and you need snapshots, use qcow2. Don’t choose qcow2 because it “sounds virtual.” Choose it because it matches your snapshot and performance needs.
3) Decide the target device models: VirtIO where possible
For Linux guests: VirtIO for disk and NIC almost always works. For Windows: VirtIO is still the right answer, but you must plan driver installation. If this is a critical Windows VM and you want the migration to be boring, you can boot initially with SATA + E1000, then switch to VirtIO after drivers are in.
4) Decide BIOS vs UEFI now
Match the guest’s installed boot mode. If the ESXi VM boots via EFI, configure Proxmox firmware as OVMF (UEFI). If it’s BIOS, use SeaBIOS. You can convert later, but that’s a separate project with real failure modes.
Paraphrased idea from Gene Kranz: “Tough and competent” is a choice you make before the failure, not during it.
Fast diagnosis playbook (find the bottleneck in minutes)
This is what you do when the imported VM won’t boot, won’t see its disk, or has no network. Don’t thrash. Walk the stack.
First: Confirm the VM actually has a bootable disk attached
- Check Proxmox VM hardware: disk present, correct bus (SATA/SCSI/VirtIO), correct boot order.
- Check the storage: does the disk image exist where Proxmox thinks it does?
Second: Confirm firmware matches installation (UEFI vs BIOS)
- BIOS install + UEFI firmware = “no bootable device” theater.
- UEFI install + BIOS firmware = same, but with extra confusion.
- UEFI often requires an EFI disk in Proxmox (small separate device for NVRAM).
Third: Confirm driver availability for the chosen disk controller and NIC
- If Windows blue-screens early (INACCESSIBLE_BOOT_DEVICE): disk controller driver missing. Boot with SATA, install VirtIO, switch later.
- If Linux drops to initramfs: wrong root device mapping or initramfs missing drivers for the new controller; rebuild initramfs and/or adjust fstab/GRUB.
Fourth: Confirm networking is mapped correctly
- Bridge attached? VLAN tag correct? Firewall in Proxmox blocking?
- Guest OS: NIC name changed (Linux), interface metric weird (Windows), or static IP bound to old adapter.
Fifth: Performance problems after boot
- Check if you accidentally used IDE emulation or E1000 for high-throughput workloads.
- Check cache mode and iothreads. Check storage latency on the Proxmox node. It’s rarely “Proxmox is slow” and usually “your storage path is different now.”
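A quick way to separate “the guest is slow” from “the host storage is slow” is to measure on the Proxmox node itself. A minimal sketch, assuming the sysstat package for iostat and a ZFS pool whose name (rpool here) is a placeholder:
cr0x@server:~$ pveperf /var/lib/vz
cr0x@server:~$ iostat -x 1 5
cr0x@server:~$ zpool iostat -v rpool 1 5
Low fsyncs/second from pveperf or high await in iostat at this layer means the storage path is the problem, and no amount of guest tuning will fix it.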
Practical tasks with commands, output meaning, and decisions
These are runnable commands and what you do with the results. The goal is to replace guesswork with evidence.
Task 1: Identify the VM’s boot mode and controller types on ESXi
On the ESXi host (SSH enabled), find the VMX file and inspect it.
cr0x@server:~$ vim-cmd vmsvc/getallvms
Vmid Name File Guest OS Version Annotation
12 app-prod-01 [datastore1] app-prod-01/app-prod-01.vmx ubuntu64Guest vmx-14
Meaning: You have the datastore path to the VMX. You can now check firmware and device types.
Decision: If this VM is mission-critical, snapshot/consolidation strategy matters before export.
cr0x@server:~$ grep -E 'firmware|scsi|sata|nvme|ethernet|virtualDev' /vmfs/volumes/datastore1/app-prod-01/app-prod-01.vmx
firmware = "efi"
scsi0.virtualDev = "pvscsi"
ethernet0.virtualDev = "vmxnet3"
Meaning: EFI boot, PVSCSI disk controller, VMXNET3 NIC. None of those map 1:1 to Proxmox defaults.
Decision: Configure Proxmox VM with OVMF (UEFI). Plan to switch controllers to VirtIO (or temporarily SATA/E1000 for Windows).
Task 2: Check for snapshots (and don’t ignore them)
cr0x@server:~$ vim-cmd vmsvc/snapshot.get 12
Get Snapshot:
|-ROOT
|--Snapshot Name : pre-patch
|--Snapshot Id : 1
|--Snapshot Created On : 2025-11-02
|--Snapshot State : poweredOff
Meaning: Snapshots exist. Export behavior depends on tooling; you may export the chain or a consolidated image.
Decision: Prefer consolidating snapshots before export unless you have a specific reason not to. Exports are not where you want clever snapshot chains.
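If you decide to consolidate, the host can do it without vCenter. A minimal sketch using VM id 12 from Task 1; snapshot.removeall deletes every snapshot and rolls the deltas into the base disks, so confirm that is what you want and that the datastore has room for it:
cr0x@server:~$ vim-cmd vmsvc/snapshot.removeall 12
Re-run the snapshot.get from above afterwards and confirm the chain is gone before you export anything.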
Task 3: Export without vCenter (practical options)
Without vCenter, you typically export by copying files from the datastore or using an OVF tool from a workstation that can reach the ESXi management IP.
If you already have an OVA/OVF export, skip ahead. If not, copying the VMDK(s) directly is often simplest.
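If you would rather pull a packaged OVA to a workstation, VMware’s ovftool can talk to a standalone host directly. A sketch with a placeholder hostname and output path; power the VM off first, and expect a password prompt:
cr0x@server:~$ ovftool vi://root@esxi01.example.com/app-prod-01 /exports/app-prod-01.ova
For the direct-copy route, start by looking at what is actually sitting in the VM’s directory: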
cr0x@server:~$ ls -lh /vmfs/volumes/datastore1/app-prod-01/
total 22G
-rw------- 1 root root 8.5K Nov 2 09:10 app-prod-01.vmx
-rw------- 1 root root 2.0K Nov 2 09:10 app-prod-01.nvram
-rw------- 1 root root 512B Nov 2 09:10 app-prod-01.vmsd
-rw------- 1 root root 30G Nov 2 09:11 app-prod-01-flat.vmdk
-rw------- 1 root root 1.2K Nov 2 09:11 app-prod-01.vmdk
Meaning: You have a descriptor VMDK and a flat extent. That’s a common ESXi layout.
Decision: Copy both the .vmdk and -flat.vmdk if you’re doing a raw file copy. If you only copy the descriptor, you’ve copied a label, not the disk.
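A workable copy path is scp straight from the ESXi host to the Proxmox node. A sketch with placeholder hostnames and paths; SSH must be enabled on the ESXi side, and the VM should be powered off so the flat file isn’t changing underneath you:
cr0x@server:~$ mkdir -p /var/tmp/import
cr0x@server:~$ scp 'root@esxi01.example.com:/vmfs/volumes/datastore1/app-prod-01/app-prod-01*.vmdk' /var/tmp/import/
Budget time for this: scp doesn’t preserve sparseness, so the flat extent transfers at its full provisioned size.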
Task 4: Verify what kind of VMDK you’re holding
cr0x@server:~$ head -n 20 /vmfs/volumes/datastore1/app-prod-01/app-prod-01.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
createType="vmfs"
Meaning: This is a VMFS-style descriptor pointing at the flat extent. Good.
Decision: Conversion with qemu-img should work. If createType is streamOptimized, expect different behavior and possibly an intermediate conversion step.
Task 5: On Proxmox, confirm storage targets and free space
cr0x@server:~$ pvesm status
Name Type Status Total Used Available %
local dir active 19619632 2231040 16252232 11.37%
local-lvm lvmthin active 468582400 89366528 379215872 19.07%
zfspool zfspool active 950000000 410000000 540000000 43.16%
Meaning: You have multiple storage backends. Space is available, but not infinite.
Decision: Choose the target storage based on performance and snapshot needs. Don’t drop a database VM onto slow directory storage because it’s “easy.”
Task 6: Extract an OVA (if that’s what you have)
cr0x@server:~$ mkdir -p /var/tmp/ova/app-prod-01 && tar -xvf app-prod-01.ova -C /var/tmp/ova/app-prod-01
app-prod-01.ovf
app-prod-01.mf
app-prod-01-disk1.vmdk
Meaning: You now have the OVF descriptor and the VMDK(s).
Decision: Ignore the manifest file if you must, but if you’re in a regulated environment, checksum validation is not optional theater—it’s the audit trail.
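Verifying the manifest is two commands. A sketch assuming a SHA256 manifest (older exports use SHA1); compare the value in the .mf file with what you compute locally:
cr0x@server:~$ cat /var/tmp/ova/app-prod-01/app-prod-01.mf
cr0x@server:~$ sha256sum /var/tmp/ova/app-prod-01/app-prod-01-disk1.vmdk
If the hashes differ, stop and re-export. Converting a corrupt VMDK just launders the corruption into a new format.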
Task 7: Inspect the OVF for device expectations
cr0x@server:~$ grep -nE 'OperatingSystemSection|VirtualSystemType|ResourceType|rasd:ResourceSubType' /var/tmp/ova/app-prod-01/app-prod-01.ovf | head
25: <OperatingSystemSection ovf:id="94">
44: <vssd:VirtualSystemType>vmx-14</vssd:VirtualSystemType>
120: <rasd:ResourceType>10</rasd:ResourceType>
121: <rasd:ResourceSubType>vmxnet3</rasd:ResourceSubType>
Meaning: It’s describing VMware hardware types (vmxnet3). Proxmox won’t honor that device model.
Decision: Treat OVF as metadata, not a promise. You’ll define the target hardware explicitly in Proxmox.
Task 8: Convert VMDK to raw or qcow2 on Proxmox
Pick raw for ZFS/LVM-thin targets; qcow2 for directory targets with snapshots.
cr0x@server:~$ qemu-img info /var/tmp/ova/app-prod-01/app-prod-01-disk1.vmdk
image: /var/tmp/ova/app-prod-01/app-prod-01-disk1.vmdk
file format: vmdk
virtual size: 30 GiB (32212254720 bytes)
disk size: 8.2 GiB
cluster_size: 65536
Format specific information:
cid: 12345678
create type: streamOptimized
Meaning: streamOptimized disk. Conversion works, but be mindful of sparse behavior.
Decision: Convert with qemu-img convert. If you see corruption or failures, convert to raw first, then import.
cr0x@server:~$ qemu-img convert -p -f vmdk -O raw /var/tmp/ova/app-prod-01/app-prod-01-disk1.vmdk /var/tmp/app-prod-01-disk1.raw
(100.00/100%)
Meaning: You have a raw disk image ready to import. Progress hitting 100% is necessary, not sufficient—still validate boot later.
Decision: If your storage backend supports thin provisioning at the block layer (ZFS zvol, LVM-thin), import raw into that backend.
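If your target is directory storage and you want qcow2 instead, the same tool does it. A sketch with the same placeholder paths:
cr0x@server:~$ qemu-img convert -p -f vmdk -O qcow2 /var/tmp/ova/app-prod-01/app-prod-01-disk1.vmdk /var/tmp/app-prod-01-disk1.qcow2
Either way, run qemu-img info on the result and confirm the virtual size still matches the source before you import it.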
Task 9: Create a Proxmox VM shell (don’t attach disks yet)
cr0x@server:~$ qm create 120 --name app-prod-01 --memory 8192 --cores 4 --cpu host --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci --ostype l26 --machine q35
create VM 120: success
Meaning: VM exists with VirtIO NIC and VirtIO SCSI controller. q35 machine type is modern and plays well with UEFI.
Decision: If the source is UEFI, set OVMF now. If BIOS, keep SeaBIOS.
Task 10: Configure UEFI (OVMF) and add an EFI disk when needed
cr0x@server:~$ qm set 120 --bios ovmf --efidisk0 zfspool:0,pre-enrolled-keys=0
update VM 120: -bios ovmf -efidisk0 zfspool:0,pre-enrolled-keys=0
Meaning: VM will boot with UEFI firmware and has an EFI vars disk.
Decision: If Secure Boot wasn’t used on ESXi, keep pre-enrolled-keys=0 to avoid signing drama.
Task 11: Import the disk into Proxmox-managed storage
cr0x@server:~$ qm importdisk 120 /var/tmp/app-prod-01-disk1.raw zfspool --format raw
importing disk '/var/tmp/app-prod-01-disk1.raw' to VM 120 ...
transferred 32212254720 bytes in 94 seconds (342.7 MiB/s)
Successfully imported disk as 'zfspool:vm-120-disk-0'
Meaning: Disk is now a storage-managed volume. This is what you want for backups, replication, and sanity.
Decision: Attach the imported disk on the right bus. For Linux, VirtIO SCSI is usually fine. For Windows without VirtIO drivers, start with SATA.
Task 12: Attach disk and set boot order
cr0x@server:~$ qm set 120 --scsi0 zfspool:vm-120-disk-0,discard=on,ssd=1 --boot order=scsi0
update VM 120: -scsi0 zfspool:vm-120-disk-0,discard=on,ssd=1 -boot order=scsi0
Meaning: Disk attached as SCSI (VirtIO SCSI controller) and set as the first boot device.
Decision: If boot fails and it’s a Windows guest, switch to SATA temporarily and install VirtIO drivers.
Task 13: Start the VM and watch the console like you mean it
cr0x@server:~$ qm start 120
cr0x@server:~$ qm terminal 120
starting serial terminal on interface serial0 (press Ctrl+O to exit)
[ 0.000000] Linux version 5.15.0-...
[ 3.214567] virtio_scsi 0000:00:05.0: 1/0/0 default/read/poll queues
...
Meaning: If you see virtio drivers initializing, you’re on a good path. If it drops into initramfs, you need root device and initramfs fixes. (Note: qm terminal only works if the VM has a serial0 device and the guest logs to the serial console; otherwise, watch the noVNC console in the GUI instead.)
Decision: Don’t “wait and see” for 20 minutes. Boot failures don’t heal with time.
Task 14: Validate disk visibility inside Linux
cr0x@server:~$ lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
NAME SIZE TYPE FSTYPE MOUNTPOINT
sda 30G disk
├─sda1 512M part vfat /boot/efi
├─sda2 1G part ext4 /boot
└─sda3 28.5G part ext4 /
Meaning: Partitions present and mounted. The OS sees the disk.
Decision: If device names changed (e.g., from /dev/sda to /dev/vda), confirm /etc/fstab uses UUIDs, not device names.
Task 15: Validate network device and link state
cr0x@server:~$ ip -br link
lo UNKNOWN 00:00:00:00:00:00
ens18 UP 52:54:00:12:34:56
Meaning: Interface is up. Name likely changed versus ESXi. That’s normal.
Decision: If the interface name changed and your network config references the old name, update netplan/systemd-networkd/ifcfg accordingly.
Task 16: On Proxmox, check bridge and VLAN tagging
cr0x@server:~$ cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.0.2.10/24
gateway 192.0.2.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
Meaning: Standard Linux bridge. VLAN tags can be set per VM NIC or via VLAN-aware bridge config.
Decision: If you need VLAN trunking, set bridge-vlan-aware yes and configure tags intentionally. Don’t guess; your switch won’t forgive you.
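Making vmbr0 VLAN-aware is a small change to the same file. A sketch, reusing the addresses and uplink from above; the VLAN range is a placeholder and your switch port must actually trunk those VLANs:
auto vmbr0
iface vmbr0 inet static
address 192.0.2.10/24
gateway 192.0.2.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
cr0x@server:~$ ifreload -a
With that in place, the per-NIC tag=... on the VM does the tagging and the bridge passes it through.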
Task 17: Confirm Proxmox sees the VM config you intended
cr0x@server:~$ qm config 120
bios: ovmf
boot: order=scsi0
cores: 4
cpu: host
efidisk0: zfspool:vm-120-disk-1,size=4M,pre-enrolled-keys=0
memory: 8192
name: app-prod-01
net0: virtio=52:54:00:12:34:56,bridge=vmbr0
ostype: l26
scsi0: zfspool:vm-120-disk-0,discard=on,ssd=1
scsihw: virtio-scsi-pci
machine: q35
Meaning: This is the canonical truth. If the GUI differs, the config still wins.
Decision: Keep this output in your change record. It’s how you later prove what changed.
Joke #1: OVF is like a shipping manifest: it tells you what should be in the box, not whether the box survived the forklift.
Checklists / step-by-step plan (boringly correct)
Pre-migration checklist (source ESXi)
- Document VM hardware: firmware (EFI/BIOS), disk controller, NIC type, number of disks, IP config.
- Clean snapshots: consolidate or at least understand the chain. Exporting a snapshot chain is where data goes to get lost.
- Quiesce the guest: application-level shutdown if possible (databases, queues). If you can’t, at least shut down the VM cleanly.
- Collect drivers: for Windows, prepare the VirtIO driver ISO (fetch example after this list). For Linux, ensure initramfs tooling exists inside the guest.
- Decide cutover method: new IP or same IP? DNS changes? MAC reservations? Firewall rules?
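On the driver item: park the VirtIO driver ISO in Proxmox ISO storage now so it’s ready to attach later. A sketch, assuming directory storage “local” and the commonly used Fedora download location; verify the URL and pick a current build rather than trusting this one blindly:
cr0x@server:~$ wget -O /var/lib/vz/template/iso/virtio-win.iso https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso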
Migration execution checklist (Proxmox)
- Create a VM shell with CPU/RAM close to source. Don’t overfit yet.
- Set firmware (OVMF vs SeaBIOS) to match source.
- Import disk into the chosen storage backend.
- Attach disk with a sane controller (VirtIO SCSI for Linux; SATA first for Windows if drivers missing).
- Set boot order explicitly.
- Boot with console open; capture errors early.
- Fix boot/driver issues (see sections below).
- Fix network mapping, confirm connectivity, confirm hostname/DNS.
- Remove VMware Tools if appropriate; install the QEMU guest agent (sketch after this list).
- Run workload validation: disk I/O, app checks, logs, monitoring.
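The guest-agent step is two small actions, one on the Proxmox node and one inside the guest. A sketch, assuming VM 120 and a Debian/Ubuntu guest (package names differ elsewhere); the agent device only appears after the VM’s next full stop and start:
cr0x@server:~$ qm set 120 --agent enabled=1
cr0x@server:~$ apt install qemu-guest-agent && systemctl enable --now qemu-guest-agent
The second command runs in the guest, not on the host.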
Post-migration checklist (stability and operability)
- Backups: ensure Proxmox backup jobs include the VM and that a test restore works.
- Monitoring: confirm the VM is in the right alerts, with correct agent endpoints.
- Time sync: verify NTP/chrony (check commands after this list); don’t rely on hypervisor time magic.
- Performance sanity: confirm you’re using VirtIO and not IDE. Check I/O scheduler and discard/TRIM settings if applicable.
- Change record: store the “before” and “after” VM hardware config and network details.
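For the time-sync item, two guest-side commands answer most questions. A sketch assuming chrony; systemd-timesyncd setups get by with timedatectl alone:
cr0x@server:~$ timedatectl
cr0x@server:~$ chronyc tracking
You want “System clock synchronized: yes” and a small offset; a large offset right after migration is exactly the kind of thing that breaks license handshakes and Kerberos.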
Fixing drivers and boot: Windows and Linux
Windows: the safe path (boot first, optimize second)
If Windows was on ESXi with PVSCSI and VMXNET3, it won’t automatically know what to do with VirtIO. The failure mode is usually a BSOD at boot: INACCESSIBLE_BOOT_DEVICE. That’s Windows saying: “Nice new controller. I don’t speak that dialect.”
Recommended sequence for Windows
- Import disk and attach as SATA (or even IDE if you must, but SATA is less awful).
- NIC model: use E1000 or VirtIO if you already have the driver installed. If you need initial connectivity without drivers, E1000 is the training wheels.
- Boot Windows successfully.
- Mount the VirtIO driver ISO and install storage + network drivers.
- Switch disk controller to VirtIO SCSI (or VirtIO block), reboot, verify.
- Switch NIC to VirtIO, reboot, verify.
In Proxmox terms, that often looks like: start with --sata0 and e1000, then move to --scsi0 plus VirtIO NIC.
cr0x@server:~$ qm set 130 --name win-app-01 --memory 16384 --cores 6 --cpu host --net0 e1000,bridge=vmbr0 --sata0 zfspool:vm-130-disk-0 --boot order=sata0
update VM 130: -name win-app-01 -memory 16384 -cores 6 -cpu host -net0 e1000,bridge=vmbr0 -sata0 zfspool:vm-130-disk-0 -boot order=sata0
Meaning: You chose compatibility. It boots first, then you modernize.
Decision: If the VM is a domain controller or has strict NIC bindings, plan the adapter transition carefully to avoid losing network identity.
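Once the VirtIO drivers are installed inside Windows, the controller swap is mechanical. A sketch, assuming VM 130, the virtio-win ISO already uploaded to the “local” storage, and storage names as placeholders; a common trick is to attach a small temporary VirtIO disk first so Windows binds the boot-critical storage driver before you move the real disk:
cr0x@server:~$ qm set 130 --ide2 local:iso/virtio-win.iso,media=cdrom
cr0x@server:~$ qm set 130 --scsihw virtio-scsi-pci --scsi1 zfspool:1
Install the storage and network drivers in the guest, confirm the 1 GiB test disk appears in Disk Management, shut down cleanly, then swap the boot disk over:
cr0x@server:~$ qm set 130 --delete sata0,scsi1
cr0x@server:~$ qm set 130 --scsi0 zfspool:vm-130-disk-0 --boot order=scsi0
Remove the temporary test volume afterwards, and keep the one-change-one-reboot discipline.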
Linux: usually easy, until it isn’t
Linux tends to boot on VirtIO without drama. When it doesn’t, the reasons are consistent:
- fstab references /dev/sdX instead of UUID/LABEL and the device name changed.
- initramfs missing modules for the new controller (less common on modern distros, still happens on minimal images).
- boot mode mismatch (UEFI vs BIOS).
Fix: confirm fstab uses UUIDs
cr0x@server:~$ grep -vE '^\s*#|^\s*$' /etc/fstab
UUID=3d2f7a2f-aaaa-bbbb-cccc-111122223333 / ext4 defaults 0 1
UUID=9f8e7d6c-aaaa-bbbb-cccc-444455556666 /boot ext4 defaults 0 2
UUID=1A2B-3C4D /boot/efi vfat umask=0077 0 1
Meaning: UUID-based mounts. Good. Device renaming won’t break boot.
Decision: If you see /dev/sda1 style mounts, fix them before you reboot again.
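Getting the UUIDs for fstab is one command; the device names below are the ones from the lsblk output earlier:
cr0x@server:~$ blkid /dev/sda1 /dev/sda2 /dev/sda3
Copy the UUID= values into fstab, then reboot once while the current controller still works, so you know the fstab edit is good before you change anything else.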
Fix: rebuild initramfs (Debian/Ubuntu example)
cr0x@server:~$ update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-5.15.0-92-generic
Meaning: initramfs refreshed; it will include currently-needed modules.
Decision: If the VM only boots with one controller type, rebuild initramfs while it’s in the working state, then switch controllers.
Fix: GRUB reinstall after a firmware mismatch or disk mapping change
For BIOS installs on a new virtual controller, reinstalling GRUB can be the simplest hammer.
cr0x@server:~$ grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
cr0x@server:~$ update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.0-92-generic
done
Meaning: Bootloader is re-established for BIOS mode on that disk.
Decision: If you’re actually UEFI, don’t do BIOS grub-install. You’ll just create a second boot confusion layer.
Joke #2: UEFI is great until it isn’t, at which point it becomes interpretive dance performed by firmware.
Networking: bridges, VLANs, and MAC surprises
Bridge selection: attach the VM to the right world
Proxmox networking is Linux networking. That’s a compliment and a warning.
- vmbr0 is typically your LAN bridge.
- Tagging VLANs can be done per-NIC in Proxmox (tag=123) or by using VLAN-aware bridges.
- Bonding/LACP is configured on the host; the VM just sees a NIC.
cr0x@server:~$ qm set 120 --net0 virtio=52:54:00:12:34:56,bridge=vmbr0,tag=120,firewall=1
update VM 120: -net0 virtio=52:54:00:12:34:56,bridge=vmbr0,tag=120,firewall=1
Meaning: VM NIC is on VLAN 120, firewall enabled at Proxmox layer.
Decision: If you enable Proxmox firewall, define rules intentionally. Otherwise you’ll “migrate” the VM into a self-imposed air gap.
MAC address and licensing
Some software binds licenses to MAC addresses. Some security stacks bind trust to them too. Proxmox will generate a new MAC unless you set one.
cr0x@server:~$ qm config 120 | grep -E '^net0'
net0: virtio=52:54:00:12:34:56,bridge=vmbr0
Meaning: You have a stable MAC configured in the VM config.
Decision: If you need to preserve the ESXi MAC, set it explicitly here before first boot to avoid “new NIC” behavior in the guest.
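The source MAC lives in the VMX file, and carrying it over is one qm call. A sketch; the first command runs on the ESXi host, the second on the Proxmox node, and the MAC shown is a placeholder:
cr0x@server:~$ grep -i 'ethernet0.*address' /vmfs/volumes/datastore1/app-prod-01/app-prod-01.vmx
cr0x@server:~$ qm set 120 --net0 virtio=00:50:56:aa:bb:cc,bridge=vmbr0
Do this before the first boot on Proxmox, not after the guest has already met its “new” adapter.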
Linux NIC name changes and persistent rules
On ESXi, your NIC might have been ens160. On Proxmox, it could be ens18 or enp0s18. If your distro uses netplan or systemd-networkd, rename logic can bite you.
cr0x@server:~$ journalctl -b | grep -iE 'rename|ens|enp' | head
systemd-udevd[312]: renamed network interface eth0 to ens18
systemd-networkd[401]: ens18: Link UP
Meaning: Rename occurred; systemd sees the interface.
Decision: Update your network config to match the new name, or pin naming using match rules by MAC address.
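Pinning the name to the MAC is one small file, using systemd .link semantics. A sketch; the file name, interface name, and MAC are placeholders, and some distros need an initramfs rebuild for the rename to apply early in boot. Create /etc/systemd/network/10-lan0.link containing:
[Match]
MACAddress=52:54:00:12:34:56
[Link]
Name=lan0
Reboot and confirm with ip -br link that the interface came up under the pinned name.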
Three corporate mini-stories (how this goes wrong in real life)
Mini-story 1: An incident caused by a wrong assumption
The company had a standalone ESXi host running a handful of “temporary” VMs. You already know where this is going: the temporary VMs had been powering revenue for years, like a forgotten extension cord behind a cabinet.
Someone finally bought a Proxmox cluster. The mandate was simple: migrate the most critical VM over a weekend, no vCenter involved, and don’t spend money. The engineer exported an OVA from ESXi, imported it, and the VM booted. That was the moment everyone relaxed, which is historically the most dangerous moment in operations.
Monday morning, users reported intermittent failures. The application was “up,” but half the requests timed out. The root cause wasn’t CPU or memory. It was networking: the VM had two NICs on ESXi—one for frontend, one for backend—and the OVA metadata didn’t map device roles. On Proxmox, the NIC order swapped, the OS kept its static IP assignments, and the app tried to talk to backend services through the frontend VLAN.
It got worse: monitoring checks were green because the health endpoint lived on the frontend interface. The backend failures only showed up in business transactions. The assumption was “if it boots and pings, it’s fine.” That assumption paid for an outage.
The fix was boring: map NICs intentionally (bridge + VLAN tag), set MACs, verify routing tables, and validate the application’s real dependency graph—not just ICMP. The lesson stuck because it was painful and public.
Mini-story 2: An optimization that backfired
A different team treated migration as a performance opportunity. They converted every disk to qcow2 because “snapshots are useful” and enabled every caching knob that sounded fast. They also flipped Windows VMs straight to VirtIO for disk and NIC on first boot, because they had done it once in a lab.
The results were impressive in the same way a fireworks factory can be “impressive.” Some VMs didn’t boot (driver issue). Others booted but had odd latency spikes under load. The team chased ghosts in the guest OS for days—antivirus, Windows updates, “maybe SQL is doing something.”
The actual problem was a stack of small choices: qcow2 on top of a directory storage sitting on a RAID controller with volatile cache settings, plus aggressive writeback caching in QEMU, plus no UPS guarantees. It worked until it didn’t. Latency spiked during host flush events, and the risk profile was wrong for stateful workloads.
They rolled back to raw volumes on ZFS zvols for the databases, used conservative caching, and performance stabilized immediately. Snapshots moved to storage-native ZFS snapshots where they belonged. Optimization isn’t bad; unbounded optimization is bad.
Mini-story 3: A boring but correct practice that saved the day
A regulated enterprise had to migrate an ESXi VM that ran a line-of-business service with ancient dependencies and a vendor that had long since moved on. The environment had no vCenter. The ESXi host was a “pet,” not cattle, complete with local storage and a nervous facilities team.
The SRE on call insisted on a dull process: capture the VMX settings, record disk checksums after export, import into Proxmox with the same boot mode, and keep the old VM powered off but intact for a full business week. They also required a test boot on an isolated VLAN before production cutover.
During isolated testing, the VM booted but couldn’t reach its license server. The issue was not networking; it was time drift. The VM had been quietly relying on VMware Tools time sync, and Proxmox doesn’t provide that exact behavior. The vendor license handshake failed when the clock skew exceeded tolerance.
Because the team tested in isolation and had a rollback plan, they fixed NTP/chrony properly, documented the behavior, and then cut over cleanly. Nothing dramatic happened in production, which is the highest form of compliment an operations person can receive.
Common mistakes: symptom → root cause → fix
1) Symptom: “No bootable device” immediately on start
Root cause: BIOS/UEFI mismatch, missing EFI disk, or wrong boot order.
Fix: Match firmware to source (OVMF for EFI, SeaBIOS for BIOS). Add efidisk0 for OVMF. Set explicit boot order.
2) Symptom: Windows BSOD INACCESSIBLE_BOOT_DEVICE
Root cause: Storage controller changed to VirtIO without driver installed.
Fix: Attach disk as SATA, boot, install VirtIO storage drivers, then switch to VirtIO SCSI and reboot.
3) Symptom: Linux drops into initramfs or emergency shell
Root cause: Root filesystem not found due to device name changes, missing initramfs modules, or fstab pointing to /dev/sdX.
Fix: Use UUIDs in fstab, rebuild initramfs, verify GRUB config points to correct root.
4) Symptom: VM boots, but no network connectivity
Root cause: Wrong bridge/VLAN tag, Proxmox firewall rules, guest NIC naming changes, or Windows static IP tied to old adapter.
Fix: Verify bridge and VLAN tagging on host; temporarily disable Proxmox firewall to isolate; fix guest network config; preserve MAC if required.
5) Symptom: Disk is present in Proxmox but guest sees 0 bytes or corrupt filesystem
Root cause: Incomplete copy of VMDK extents, snapshot chain not consolidated, or conversion error from streamOptimized.
Fix: Re-export after consolidating snapshots; verify both descriptor and flat files copied; reconvert using raw intermediate; validate with checksums.
6) Symptom: VM is slow, especially I/O
Root cause: Using IDE/SATA for a workload that needs VirtIO, wrong cache mode, storage backend mismatch, or no iothread for busy disks.
Fix: Move disks to VirtIO SCSI with iothreads where appropriate, use storage-native volumes (ZFS/LVM-thin), review cache settings, measure latency on the host.
7) Symptom: Application works but licensing breaks
Root cause: MAC address changed, clock drift, or hardware fingerprint changed.
Fix: Preserve MAC where needed, fix NTP, document the new virtual hardware profile and engage vendor if unavoidable.
FAQ
1) Do I need vCenter to export an OVF/OVA?
No. vCenter makes it easier, but you can export by copying VM files from the ESXi datastore or using an OVF export tool against the ESXi host directly.
2) Is it better to export OVF/OVA or copy VMDKs?
If you can shut down cleanly, copying VMDK files is often simpler and more transparent. OVF/OVA is convenient for packaging, but it can hide disk format quirks (like streamOptimized).
3) Can Proxmox import OVF directly?
Not in a way you should bet production on. Treat OVF as a descriptor; extract disks and import them with qm importdisk after converting if needed.
4) Should I use qcow2 or raw on Proxmox?
Raw on ZFS zvols or LVM-thin is usually the right call for performance and simplicity. qcow2 is fine on directory storage when you want file-based snapshots.
5) My Linux VM boots on ESXi but fails on Proxmox with initramfs. Why?
Because the “hardware” changed: controller model, disk naming, or firmware. Fix fstab to use UUIDs, rebuild initramfs, and confirm BIOS/UEFI mode.
6) How do I handle Windows VirtIO drivers safely?
Boot Windows on SATA first, install VirtIO drivers from an attached ISO, then switch disk controller to VirtIO SCSI and NIC to VirtIO. One change at a time, with reboots in between.
7) Do I need to uninstall VMware Tools?
Usually yes, eventually. It can interfere with time sync and some device behaviors. Don’t make it your first action during migration day; stabilize boot and networking first.
8) What about guest agent support on Proxmox?
Install the QEMU guest agent in the VM after it’s stable. It improves shutdown behavior, IP reporting, and some backup workflows. It’s operational hygiene.
9) Can I keep the same IP address after migration?
Yes, but only after you’re sure the VM is on the correct VLAN/bridge and your cutover plan avoids duplicate IPs. If you can, test with a temporary IP first.
10) What’s the biggest “unknown unknown” in these migrations?
Dependency on VMware-specific behavior: time sync, NIC ordering, custom drivers, and snapshot assumptions. Audit those upfront, and the migration becomes routine.
Conclusion: next steps that actually reduce risk
If you take only three actions from this guide, take these:
- Match boot mode (UEFI vs BIOS) before first boot on Proxmox. It’s the cheapest win.
- Boot for compatibility first (especially Windows), then migrate to VirtIO once drivers are installed.
- Prove correctness with commands: storage presence, boot order, interface state, and logs. Your memory will lie under pressure.
Operationally, treat the migration as two deliverables: bootability and operability. Getting a login prompt is not the finish line. Backups, monitoring, time sync, and performance are what keep you from revisiting this VM at 3 a.m. with an audience.