Proxmox VM won’t start after changing CPU type: recovery steps that work

You changed the CPU type on a Proxmox VM because you wanted performance, consistency for migration, or you followed a well-meaning forum post. Now the VM won’t start, Proxmox throws “QEMU exited with code 1,” and you’re staring at a stopped workload that was perfectly fine five minutes ago.

This failure is common, usually fixable in minutes, and occasionally a sign you’ve been skating on thin ice for months. Let’s recover the VM safely, explain what actually broke, and set you up so you can change CPU types without rolling the dice next time.

Fast diagnosis playbook

If you only have ten minutes before someone asks why the dashboard is red, do this in order. The goal is to identify whether you’re dealing with a pure QEMU CPU model issue, a guest OS reaction, or a hardware/host mismatch.

First: read the actual QEMU error, not the GUI toast

  • Check the task log and journal for the VM start attempt.
  • Look specifically for phrases like unsupported CPUID, host doesn’t support requested feature, invalid CPU model, or kvm: failed to init.

Second: confirm the VM config now vs. what it used to be

  • Verify the cpu: line and any args: overrides.
  • If you have backups, compare the current config to the last-known-good (see the diff sketch just below).
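
If you saved a copy of the config before the change (the /root/105.conf.last-good path below is just an example of such a copy), a plain diff shows exactly what moved:

cr0x@server:~$ diff /root/105.conf.last-good /etc/pve/qemu-server/105.conf
3c3
< cpu: host
---
> cpu: Skylake-Server,flags=+pcid;+spec-ctrl;+md-clear

A single-line diff on the cpu: entry is strong evidence the CPU change is your culprit; a larger diff means more changed than anyone admitted to.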

Third: decide whether the fix is “revert CPU type” or “change to a safer CPU baseline”

  • If this VM must start now: revert to the previous CPU type (or host if it worked before).
  • If this VM must migrate across nodes: pick a baseline like x86-64-v2/x86-64-v3 (where supported) or a named model that exists across all your nodes.

Fourth: if it starts, validate the guest’s view of CPU and stability

  • For Windows: check activation/licensing implications and verify that critical services come up.
  • For Linux: check dmesg for CPU feature warnings, microcode messages, and clocksource changes.

One operational truth: a VM that won’t start is usually a configuration mismatch. A VM that starts but behaves weirdly is usually a CPU feature regression, timer/clocksource shift, or guest driver expectation problem.
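
For the Linux guest validation above, a minimal in-guest sketch (the guest prompt is illustrative, and the exact messages and flag list depend on your kernel and the CPU model you exposed):

cr0x@guest:~$ dmesg | egrep -i 'clocksource|microcode' | tail -n 1
[    2.104729] clocksource: Switched to clocksource tsc
cr0x@guest:~$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc
cr0x@guest:~$ egrep -wo 'avx2|aes|vmx' /proc/cpuinfo | sort -u
aes
avx2

If the clocksource quietly changed to something slower, or a flag your workload relies on disappeared, you have found the regression before your users do.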

What actually broke when you changed CPU type

In Proxmox, the “CPU type” setting is not cosmetic. It controls what virtual CPU model QEMU advertises to the guest and what KVM is asked to accelerate. That includes:

  • CPUID feature flags (SSE4.2, AVX, AVX2, AES-NI, BMI, FMA, etc.).
  • Instruction set level and quirks (some models imply groups of flags).
  • Topology exposure (sockets/cores/threads) and occasionally cache/timing behavior.
  • Migration compatibility across hosts: the CPU model can be a contract that must hold everywhere the VM may land.

When you pick host, QEMU tries to expose a CPU that matches the host closely, enabling whatever the physical CPU supports. Great for performance. Also great for painting yourself into a corner if you later move the VM to a host with fewer features or different microarchitecture.

When you pick a named CPU model (e.g., Skylake-Server, EPYC, Broadwell), QEMU exposes a curated set of flags. This can improve migration compatibility, but it can also fail hard if:

  • The host CPU doesn’t support one of the requested features.
  • The QEMU version on the node doesn’t recognize the model name.
  • The VM has an explicit args: line forcing flags that conflict with the chosen model.
  • You are crossing Intel ↔ AMD boundaries and the model assumes vendor-specific behavior.

And sometimes the VM does start, but the guest OS objects after the fact. Windows might decide it’s on “new hardware,” and Linux might shift clocksource behavior or disable certain kernel fast paths.
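
Before you re-attempt a named model, you can smoke-test whether a node can actually provide it without touching the real VM. A minimal sketch, assuming your QEMU build accepts the enforce CPU option (the model name and the 3-second timeout are arbitrary):

cr0x@server:~$ timeout 3 /usr/bin/qemu-system-x86_64 -accel kvm -cpu Skylake-Server,enforce -S -display none -monitor none; echo "exit: $?"
exit: 124

Exit code 124 means the timeout killed a QEMU that started fine, so the node can honor the model. An immediate "host doesn't support requested feature" message with exit code 1 means it can't, and neither will your VM.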

Dry-funny reality check (joke #1): CPU type changes are like “quick refactors.” The word “quick” is mostly for emotional support.

Interesting facts and historical context

These are not trivia for trivia’s sake. Each explains why CPU type changes can break a previously stable VM.

  1. CPUID has been a compatibility minefield since the 1990s. Guests don’t just “use what exists”; they make decisions based on reported flags and vendor IDs.
  2. KVM is not an emulator first; it’s an accelerator. If you request CPU features the host can’t provide, KVM refuses—fast and unapologetic.
  3. QEMU CPU model names are not eternal. Models and aliases evolve across QEMU versions; Proxmox upgrades can add models, deprecate others, or change defaults.
  4. Live migration was the forcing function for CPU baselines. The whole reason “generic” CPU types exist is to make “move this VM anywhere” practical.
  5. Intel and AMD virtualization differ in edge behavior. Even when both support the same instruction sets, vendor-specific features and microcode quirks matter for guests and hypervisors.
  6. The Spectre/Meltdown era reshaped virtualization. Microcode and kernel mitigations changed performance and sometimes altered exposed features; clusters saw “identical CPUs” behave differently after patching.
  7. Windows activation can treat CPU/board identity changes as hardware swaps. Changing CPU model can be enough to trip licensing checks depending on edition and activation method.
  8. Nested virtualization is extra sensitive. If the guest runs a hypervisor, it often needs specific VMX/SVM flags; CPU model changes can remove them.

Practical recovery tasks (commands, outputs, decisions)

Below are hands-on tasks you can run on the Proxmox node. Each includes: the command, what typical output means, and what decision you make from it. Do them in order until you’ve got a booting VM or a clear root cause.

Task 1: Identify the failing VM start event in the journal

cr0x@server:~$ journalctl -u pvedaemon -u pveproxy -u pvestatd --since "30 min ago" | tail -n 80
Dec 26 11:02:15 pve01 pvedaemon[2217]: starting task UPID:pve01:00004A3C:0001B2D1:676D1D97:qmstart:105:root@pam:
Dec 26 11:02:15 pve01 pvedaemon[2217]: VM 105 start failed: command '/usr/bin/kvm -id 105 ... -cpu Skylake-Server,...' failed: exit code 1
Dec 26 11:02:15 pve01 pvedaemon[2217]: end task UPID:pve01:00004A3C:0001B2D1:676D1D97:qmstart:105:root@pam: VM 105 start failed: exit code 1

What it means: You’ve confirmed this is a QEMU/KVM start failure, not a guest OS crash after boot.

Decision: Go look for the exact QEMU error (Task 2). Don’t guess.

Task 2: Pull the detailed QEMU error from the task log

cr0x@server:~$ cat /var/log/pve/tasks/active | head
UPID:pve01:00004A3C:0001B2D1:676D1D97:qmstart:105:root@pam:
cr0x@server:~$ cat /var/log/pve/tasks/*/UPID:pve01:00004A3C:0001B2D1:676D1D97:qmstart:105:root@pam: | tail -n 60
kvm: host doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
TASK ERROR: start failed: QEMU exited with code 1

What it means: The selected CPU model (or custom flags) requests VMX, but the host can’t provide it (or it’s disabled).

Decision: Check host CPU virtualization support and BIOS settings (Task 3 and Task 4). If this VM doesn’t need nested virtualization, remove that requirement in CPU flags (later tasks).

Task 3: Verify KVM is loaded and virtualization is available

cr0x@server:~$ lsmod | egrep 'kvm|vhost' | head
kvm_intel             385024  0
kvm                  1200128  1 kvm_intel
vhost_net              36864  0
vhost                  53248  1 vhost_net

What it means: KVM modules are loaded. That’s necessary but not sufficient.

Decision: Confirm CPU flags exposed by the host (Task 4).

Task 4: Confirm the host CPU flags include what the VM is requesting

cr0x@server:~$ lscpu | egrep 'Vendor ID|Model name|Flags|Virtualization' -A1
Vendor ID:           GenuineIntel
Model name:          Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
Virtualization:      VT-x
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr ... vmx ... avx avx2 ...

What it means: VT-x (vmx flag) exists. If QEMU said VMX is unsupported, either BIOS toggled it off, the kernel is masking it, or you’re not actually on the node you think you are.

Decision: If Virtualization is blank or the flag is missing, fix BIOS/firmware settings. If it’s present, your VM config likely forces something inconsistent (Task 6 and Task 7).

Task 5: Inspect the current VM config for CPU type and forced flags

cr0x@server:~$ qm config 105
boot: order=scsi0;net0
cores: 8
cpu: Skylake-Server,flags=+pcid;+spec-ctrl;+md-clear
memory: 16384
name: payroll-app01
net0: virtio=DE:AD:BE:EF:10:05,bridge=vmbr0
scsi0: local-lvm:vm-105-disk-0,discard=on,iothread=1
scsihw: virtio-scsi-pci

What it means: CPU model is explicitly set to Skylake-Server with extra flags. If the host is older or AMD, this can fail.

Decision: If you changed this recently and the VM used to start, revert to the prior CPU type (Task 8). If you need migration compatibility, pick a baseline later.

Task 6: Check for hidden overrides in args:

cr0x@server:~$ grep -nE '^(cpu:|args:|machine:|bios:|ostype:)' /etc/pve/qemu-server/105.conf
3:cpu: Skylake-Server,flags=+pcid;+spec-ctrl;+md-clear

What it means: No args: surprises here. Good. If you do see an args: line, treat it as guilty until proven innocent.

Decision: If args: is present and includes -cpu or -machine, remove/adjust it; Proxmox GUI settings won’t save you from your past self.

Task 7: Validate that the CPU model name is recognized by your QEMU build

cr0x@server:~$ /usr/bin/qemu-system-x86_64 -cpu help | egrep -i 'skylake|broadwell|epyc|x86-64' | head -n 20
x86-64-v2
x86-64-v3
Broadwell
Skylake-Server
EPYC
EPYC-Rome

What it means: The model exists on this node. If it didn’t, QEMU would fail immediately with “invalid CPU model.”

Decision: If the model is missing on some nodes in the cluster, you have a version skew problem. Align Proxmox/QEMU versions across nodes or standardize on a model available everywhere.

Task 8: Revert CPU type to the last known good (fastest restore)

cr0x@server:~$ cp -a /etc/pve/qemu-server/105.conf /root/105.conf.before-revert
cr0x@server:~$ qm set 105 --cpu host
update VM 105: -cpu host
cr0x@server:~$ qm start 105

What it means: You’re reverting to host, which usually starts if the VM previously ran on this node.

Decision: If this boots, stop the bleeding. Then decide whether you actually need a different CPU model (later section). If it still fails, the problem isn’t just “model name”; it may be flags, machine type, or hardware virtualization availability.

Task 9: If host fails, try a safer generic baseline

cr0x@server:~$ qm stop 105
stopping VM 105 (timeout = 60 seconds)
cr0x@server:~$ qm set 105 --cpu x86-64-v2
update VM 105: -cpu x86-64-v2
cr0x@server:~$ qm start 105

What it means: You’re asking for a conservative set of CPU features that tends to work across many modern hosts.

Decision: If this starts, you likely had an unsupported feature in the old model or flags. Keep x86-64-v2 (or v3 if the cluster supports it) as a cluster-wide baseline.

Task 10: Confirm the VM process exists and is using KVM acceleration

cr0x@server:~$ pgrep -a -f 'kvm.*-id 105' | head -n 1
24788 /usr/bin/kvm -id 105 -name payroll-app01 ...
cr0x@server:~$ ps -o pid,comm,%cpu,%mem,args -p 24788
  PID COMMAND         %CPU %MEM COMMAND
24788 kvm              8.2  4.1 /usr/bin/kvm -id 105 -name payroll-app01 ...

What it means: The VM actually started and is running as a QEMU/KVM process.

Decision: Move from “hypervisor can start it” to “guest is healthy.” If the VM is still not reachable, troubleshoot inside the guest (console, network, services).

Task 11: Check the QEMU command line for the effective CPU definition

cr0x@server:~$ qm showcmd 105 --pretty | sed -n '1,120p'
/usr/bin/kvm \
  -id 105 \
  -name payroll-app01,debug-threads=on \
  -no-shutdown \
  -chardev socket,id=qmp,path=/var/run/qemu-server/105.qmp,server=on,wait=off \
  -mon chardev=qmp,mode=control \
  -machine type=pc-q35-8.1+pve0 \
  -cpu x86-64-v2 \
  -smp 8,sockets=1,cores=8,maxcpus=8 \
  -m 16384 \
  ...

What it means: This is the single source of truth for what QEMU is actually executing (after Proxmox transforms config).

Decision: If you see a different -cpu than expected, you have conflicting config entries or an args: override.
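
If you just want the -cpu token without reading the whole command, a minimal sketch (the grep simply extracts the first -cpu argument; Proxmox may append additional flags after the model name):

cr0x@server:~$ qm showcmd 105 | grep -o '\-cpu [^ ]*'
-cpu x86-64-v2
cr0x@server:~$ grep '^cpu:' /etc/pve/qemu-server/105.conf
cpu: x86-64-v2

If those two disagree, something between the config and the generated command line is rewriting your CPU definition; an args: line is the usual suspect.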

Task 12: Validate cluster CPU feature compatibility (migration reality)

cr0x@server:~$ pvesh get /nodes --output-format json-pretty
[
  {
    "node": "pve01",
    "status": "online"
  },
  {
    "node": "pve02",
    "status": "online"
  }
]
cr0x@server:~$ for n in pve01 pve02; do echo "== $n =="; ssh $n "lscpu | egrep 'Vendor ID|Model name|Flags' -A1 | head -n 6"; done
== pve01 ==
Vendor ID:           GenuineIntel
Model name:          Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
Flags:               ... avx avx2 vmx ...
== pve02 ==
Vendor ID:           GenuineIntel
Model name:          Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Flags:               ... avx avx2 vmx ...

What it means: The nodes look similar, but not identical. Small differences can matter: one CPU stepping, one microcode level, one BIOS setting, and suddenly a flag disappears.

Decision: If you expect migration, choose a CPU baseline supported by all nodes. “Works on my node” is not a strategy.
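
To see the exact flags that differ instead of eyeballing lscpu, a minimal sketch (node names match the examples above; run it from any host that can ssh to both):

cr0x@server:~$ for n in pve01 pve02; do ssh $n "grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort" > /tmp/flags.$n; done
cr0x@server:~$ diff /tmp/flags.pve01 /tmp/flags.pve02

An empty diff means the advertised flag sets match; any < or > lines are exactly the features that will bite a host-based CPU model during migration.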

Task 13: Check microcode and kernel messages if features “should exist” but don’t

cr0x@server:~$ dmesg | egrep -i 'microcode|kvm|vmx|svm' | tail -n 30
[    0.812345] microcode: microcode updated early to revision 0x2006e04, date = 2023-10-12
[    6.112233] kvm: VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL does not work properly. Using workaround
[    6.112455] kvm_intel: enabling virtualization on CPU0

What it means: Microcode and KVM initialization are visible. If KVM complains about disabled virtualization, that’s your smoking gun.

Decision: If virtualization is disabled at the platform level, fix BIOS and reboot the node. Don’t keep “working around” missing VMX/SVM.
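
Two quick host-side checks that tell you whether KVM is actually usable, as a minimal sketch (device details and the thread count will differ on your hardware):

cr0x@server:~$ ls -l /dev/kvm
crw-rw---- 1 root kvm 10, 232 Dec 26 10:58 /dev/kvm
cr0x@server:~$ egrep -c '(vmx|svm)' /proc/cpuinfo
20

If /dev/kvm is missing or the flag count is zero, no amount of VM config editing will help; fix the firmware settings and reboot the node first.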

Task 14: If Windows won’t boot after CPU change, check for boot-loop vs. hypervisor failure

cr0x@server:~$ qm terminal 105
starting serial terminal on interface serial0 (press Ctrl+O to exit)

What it means: A serial terminal is a fast way to see early boot problems on Linux; for Windows it’s less helpful unless you configured serial debugging. If you have VGA console in the GUI, use that.

Decision: If the VM starts but immediately reboots, you’re past the CPU model start failure and into guest behavior. Keep the hypervisor stable and troubleshoot the guest OS separately.

Task 15: Restore a known-good VM config file if you need an instant rollback

cr0x@server:~$ ls -l /root/105.conf.*
-rw-r--r-- 1 root root 512 Dec 26 11:05 /root/105.conf.before-revert
cr0x@server:~$ cp -a /root/105.conf.before-revert /etc/pve/qemu-server/105.conf
cr0x@server:~$ qm start 105

What it means: You’re overriding whatever the UI did and returning to a saved config snapshot.

Decision: If this fixes it, treat the prior CPU change as the root cause and proceed to a controlled re-change with testing, not vibes.

Task 16: Verify you’re not fighting a machine-type mismatch introduced by “helpful” edits

cr0x@server:~$ qm config 105 | egrep '^(machine:|bios:|efidisk0:|ostype:)'
machine: pc-q35-8.1+pve0
ostype: l26

What it means: Machine type is Q35. Some CPU-model changes coincide with a machine-type change (manual edits, older guides). That combination can alter device exposure and guest boot behavior.

Decision: If you changed machine type around the same time, revert it too. Change one variable at a time unless you enjoy unforced errors.
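
If you suspect a machine-type problem, confirm that the node's QEMU build actually provides the configured type. A minimal sketch (output trimmed and illustrative; the exact list and +pveN suffixes depend on your pve-qemu version):

cr0x@server:~$ /usr/bin/qemu-system-x86_64 -machine help | grep -i 'q35-8.1'
pc-q35-8.1            Standard PC (Q35 + ICH9, 2009)
pc-q35-8.1+pve0       Standard PC (Q35 + ICH9, 2009)

If the machine type named in the config is not in that list (common after copying a config from a newer node to an older one), pick one the build actually provides before you chase CPU ghosts.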

Dry-funny reality check (joke #2): If you changed CPU type, machine type, and BIOS mode in one edit, you didn’t “optimize.” You created a mystery novel.

Common mistakes: symptom → root cause → fix

This section is deliberately blunt. These are the patterns that keep showing up in production.

1) Symptom: “QEMU exited with code 1” + “host doesn’t support requested feature”

  • Root cause: Selected CPU model or flags include features missing on the host (or disabled in BIOS).
  • Fix: Revert to host or a lower baseline like x86-64-v2. If the missing feature is VMX/SVM, fix BIOS virtualization settings and reboot the host.

2) Symptom: “invalid CPU model” after selecting a named CPU

  • Root cause: QEMU build on that node doesn’t know the CPU model name (version skew, different package set, or old Proxmox node).
  • Fix: Use /usr/bin/qemu-system-x86_64 -cpu help to choose an available model. Standardize Proxmox/QEMU versions across cluster nodes.

3) Symptom: VM starts on node A, refuses to start after migrating to node B

  • Root cause: CPU type set to host or a too-modern model; node B lacks the required flags.
  • Fix: Choose a cluster baseline CPU model supported across all nodes. Apply it to the VM before migration (planned maintenance window if the guest is sensitive).

4) Symptom: Linux boots, but performance tanks after CPU type change

  • Root cause: You removed AVX/AVX2/AES or other accelerations the workload used (crypto, compression, JVM, database checksums). Or the guest changed clocksource behavior.
  • Fix: Compare CPU flags before/after. Select a baseline that retains needed features and is still migratable. Validate clocksource and kernel messages.

5) Symptom: Windows boots, but activation/licensing complains

  • Root cause: Windows sees a hardware identity shift (CPU model/topology changes can contribute).
  • Fix: Prefer stable CPU models long-term. If you must change, do it once, document it, and expect activation workflows depending on your licensing method.

6) Symptom: Nested virtualization stopped working

  • Root cause: CPU model change removed vmx (Intel) or svm (AMD) exposure to the guest.
  • Fix: Pick a CPU model that includes virtualization extensions and confirm host BIOS supports them. Avoid random flag surgery; test nested workloads explicitly.

7) Symptom: “KVM: entry failed” or KVM-related initialization errors

  • Root cause: Host kernel/KVM restrictions, microcode quirks, or incompatible combination of CPU flags and machine type.
  • Fix: Remove custom flags and revert to a known-good CPU model. If the host recently changed kernel/microcode, consider aligning nodes and rebooting to a consistent baseline.

8) Symptom: VM only starts after you remove a single “security mitigation” flag

  • Root cause: You copied a CPU flag set from a different generation of CPU. Some mitigation flags map to specific capabilities and MSRs that may not exist everywhere.
  • Fix: Stop hand-enabling mitigation flags unless you know exactly what they do. Use supported CPU models and let QEMU/KVM handle sane defaults for your versions.
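
For mistake 6, a minimal check of whether nested virtualization is even on, first on the host, then inside the guest (use kvm_amd and the svm flag on AMD hosts; the guest prompt is illustrative):

cr0x@server:~$ cat /sys/module/kvm_intel/parameters/nested
Y
cr0x@guest:~$ grep -c -w vmx /proc/cpuinfo
0

Nested enabled on the host but a zero count in the guest means the chosen CPU model is filtering out vmx; fix the model or flags instead of blaming the guest hypervisor.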

Three corporate mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption

A mid-sized company ran a two-node Proxmox cluster that “obviously” had the same CPUs. Purchasing had ordered two servers from the same vendor in the same quarter. The spec sheets matched. The team set most production VMs to cpu: host for maximum performance.

During a routine host reboot, several VMs failed to start on the surviving node. The error looked like a generic QEMU failure at first glance. Under pressure, the team chased storage and network ghosts—because that’s what usually breaks in their world.

The root cause: one node had a slightly newer CPU stepping and different microcode, exposing a couple of flags the other node didn’t. When the VMs landed on the older node, KVM refused the requested CPUID feature set. The assumption wasn’t “these servers are identical.” It was “identical is close enough.” It isn’t.

They recovered by switching the affected VMs to a conservative baseline CPU model that both nodes supported, then scheduled a proper hardware inventory and standardization pass. The outage wasn’t dramatic. It was worse: it was avoidable and boring.

The lasting fix was cultural: any change that affects migration compatibility now requires verifying the CPU model exists and is compatible across nodes before the maintenance window.

Mini-story 2: The optimization that backfired

An internal platform team hosted build workers and CI runners in Proxmox VMs. They were chasing faster compilation times, so they changed CPU type from a baseline model to host and added extra flags for AES and AVX2 “just in case.” Performance did improve on the primary node.

Then a node went into maintenance, and the scheduler moved VMs around. Some guests started, some didn’t. The ones that started produced weird, non-reproducible build failures. That’s the kind of bug that makes engineers question their career choices.

The issue wasn’t “AVX2 is bad.” The issue was heterogeneity: one node supported a set of features and microarchitectural behavior the other didn’t, and the runners weren’t pinned. The build system ended up with a fleet where binaries were sometimes built under different CPU capabilities, and tests behaved differently under different timing and crypto acceleration paths.

They rolled back to a baseline CPU type for runners, and only enabled “host” CPU for pinned, non-migrating performance workloads with explicit placement rules. The team also stopped sprinkling CPU flags like salt. If you don’t have a measurement and a rollback plan, you’re not optimizing—you’re gambling.

Mini-story 3: The boring but correct practice that saved the day

A finance org ran critical Windows VMs on Proxmox. They weren’t flashy about it; they just had discipline. Before any hardware-related VM change (CPU, firmware/BIOS mode, machine type), they took two actions: a config snapshot of the VM configuration file and a tested rollback procedure.

One Friday, someone changed a VM’s CPU type to “match the other servers” so they could live-migrate during patching. The VM wouldn’t start. The error message mentioned an unsupported CPUID feature. Stress level rose; it was payroll week, of course.

The on-call SRE did not debate with the universe. They restored the prior /etc/pve/qemu-server/<vmid>.conf from their saved copy, started the VM, and got the business back. Then they opened a change record with two follow-ups: pick a supported baseline CPU model across nodes and schedule the change during a window where re-activation and validation could happen.

That’s the thing about boring practices: they look slow until the day you need them. Then they’re the fastest thing you’ve got.

Checklists / step-by-step plan

Step-by-step recovery plan (minimum downtime mindset)

  1. Capture evidence. Save the current VM config and the task log output. You want something to learn from, not just a restored service.
  2. Read the QEMU/KVM error. Don’t touch settings until you know whether it’s “invalid model” vs “unsupported feature” vs “KVM init failure.”
  3. Roll back the CPU type to last-known-good. If the VM was running earlier today, revert. Start the VM. Restore service.
  4. If rollback fails, remove customization. Delete custom flags and any args: lines affecting -cpu. Keep the config simple until it boots.
  5. If it still fails, verify host virtualization. Check KVM modules, CPU flags, and BIOS settings. Reboot host if needed.
  6. Once booted, validate guest health. Console access, disks, network, services. Watch for licensing and driver issues.
  7. Only then re-attempt a CPU change. Pick a baseline model that exists on all nodes. Test-start on each node if migration matters.

Checklist: choosing a CPU type that won’t betray you later

  • If the VM never migrates and you want performance: cpu: host is fine, but document that it’s pinned to compatible hardware.
  • If the VM might migrate within a homogeneous cluster: pick a named model that exists and is supported across all nodes.
  • If the cluster is mixed or you expect future hardware changes: choose a conservative baseline (x86-64-v2 or similar) and accept the small performance trade-off.
  • Avoid hand-tuned CPU flags unless you can explain each one, test it, and roll it back.

Checklist: safe change procedure (what I’d do in production)

  1. Record current config: copy /etc/pve/qemu-server/<vmid>.conf.
  2. Confirm QEMU supports the target CPU model on every node (see the sketch after this list).
  3. Confirm the physical CPUs support the needed flags on every node.
  4. Plan for guest side-effects (Windows activation, nested virtualization, kernel clocksource changes).
  5. Change only CPU type first. Boot and validate.
  6. Only after success: consider other tuning (NUMA, hugepages, pinning), one change at a time.
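
For steps 2 and 3, a minimal sketch that loops over nodes and checks both the model name and one representative flag (node names, the Skylake-Server model, and the avx2 flag are placeholders for your own; output trimmed):

cr0x@server:~$ for n in pve01 pve02; do echo "== $n =="; ssh $n "/usr/bin/qemu-system-x86_64 -cpu help | grep -w 'Skylake-Server' | head -n 1; grep -c -w avx2 /proc/cpuinfo"; done
== pve01 ==
Skylake-Server
20
== pve02 ==
Skylake-Server
20

A missing model line or a zero flag count on any node means that node is not a safe landing zone for the change; fix the skew before the maintenance window, not during it.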

FAQ

1) Why does Proxmox let me select a CPU type that the host can’t run?

Because Proxmox is not your mom. It passes your request to QEMU/KVM; KVM enforces hardware reality at start time. Some incompatibilities are only known when QEMU assembles the final CPU definition.

2) Is cpu: host always the best choice?

Best for single-node performance, often. Worst for migration across mixed hardware. If you need mobility, pick a baseline CPU model and accept that “maximum flags everywhere” is not the same as “reliable operations.”

3) What’s the difference between named models (Skylake, EPYC) and x86-64-v2/v3?

Named models map to specific microarchitectures with specific flag bundles. The x86-64-v* levels aim for a standardized minimum ISA baseline. Baselines are usually better for portability; named models can be better for predictable performance within a known fleet.

4) The VM starts after changing CPU type, but performance got worse. Should I revert?

If the workload uses crypto/compression/vector instructions, you probably removed a feature it relied on. Confirm by comparing CPU flags in-guest before/after (or by checking the effective QEMU CPU line). If performance is business-critical, revert and choose a baseline that preserves the needed features across nodes.

5) Can a CPU type change corrupt disks or data?

Not directly. CPU model changes affect instruction exposure, not storage writes. The real risk is indirect: crashes, guest filesystem not shutting down cleanly, or application-level issues after unstable boots. Treat it like any hardware identity change: validate the guest and apps.

6) Why did live migration suddenly fail after I changed CPU type?

Live migration needs CPU compatibility between source and target. If you changed to host on a newer CPU, the VM may now require features absent on another node. Use a cluster baseline CPU type to make migration predictable.

7) Do I need to shut down the VM to change CPU type?

Yes, in the practical sense. A CPU model is established at VM start. You can change the config live, but it won’t apply until the next boot. Also: changing CPU type on a running production VM and “we’ll reboot later” is how you set traps for future-you.

8) How do I know what CPU model the guest is actually seeing?

On the host, use qm showcmd <vmid> to see the effective -cpu argument. Inside Linux guests, lscpu and /proc/cpuinfo show what the guest believes. On Windows, check Task Manager CPU name and use system info tools.

9) What’s the safest “generic” CPU type for a mixed Intel cluster?

Often x86-64-v2 is a safe starting point if your nodes are reasonably modern. If your nodes are newer and you’ve verified support everywhere, x86-64-v3 can be a good compromise. Don’t guess—validate on every node.

10) We have Intel and AMD nodes. Can we migrate the same VM across both?

Sometimes, but it’s tricky. Vendor differences, exposed CPUID vendor strings, and feature sets can break assumptions. If you must do it, use conservative baselines and test migrations. In many environments, the correct answer is “don’t mix vendors in a migration domain.”

Conclusion: next steps that prevent repeats

When a Proxmox VM won’t start after a CPU type change, the winning move is almost always: read the QEMU error, revert to last-known-good, boot the VM, then pick a CPU baseline that matches your operational reality.

Practical next steps:

  • Standardize CPU type policy per VM class: pinned performance VMs can use host; migratable VMs must use a baseline supported across all nodes.
  • Eliminate stealth overrides: remove risky args: lines unless you can justify them and test them.
  • Inventory and align the cluster: CPU models, microcode, BIOS virtualization settings, and Proxmox/QEMU versions should not drift casually.
  • Make config backups boring and routine: saving a VM config file before changes is the kind of dull habit that keeps outages short.

One paraphrased reliability idea from Werner Vogels: everything fails, so build systems that expect failure and make recovery fast. That applies here. CPU type changes are survivable, as long as you treat them like real production changes, not a dropdown adventure.
