“No Boot Device.” Three words that instantly turn a calm morning into an incident bridge. The files are probably still there. The computer just can’t find the tiny chain of instructions that gets you from firmware to an operating system.
This guide is for repairing boot on GPT/MBR systems without wrecking data. We’ll treat the disk like evidence, not a scratchpad. You’ll learn how to tell “wrong boot mode” from “broken bootloader” from “disk is lying,” and what to do about each—using commands you can run and outputs you can interpret.
What “No Boot Device” really means (and what it doesn’t)
Firmware (BIOS or UEFI) is trying to locate a bootable target. It fails early, before your OS has a chance to speak up. This is not an operating system problem yet; it’s a “path to the OS” problem.
That path is different depending on boot style:
- Legacy BIOS + MBR: BIOS reads the disk’s first sector (MBR). MBR points to boot code (often a “stage 1.5”) and then to the OS loader.
- Legacy BIOS + GPT (yes, it exists): BIOS can’t natively boot GPT; you typically need a BIOS boot partition for GRUB’s core image. Some systems fake it poorly.
- UEFI + GPT: UEFI reads an EFI System Partition (ESP), a FAT32 partition containing .efi binaries. It uses NVRAM boot entries to pick which .efi to run.
When you see “No Boot Device,” you’re usually in one of these buckets:
- Firmware is looking in the wrong place: wrong boot order, wrong disk, wrong UEFI/Legacy mode.
- The partition table is fine, but boot metadata is broken: missing ESP files, missing Windows BCD, broken GRUB install, deleted NVRAM entry.
- The disk is present but not readable: SATA mode changed, NVMe not detected, RAID mode toggled, controller weirdness, real disk failure.
- Someone “fixed” it already: and their fix wrote to the wrong disk.
One more thing: boot repair is a write operation. Even “harmless” tools can scribble on the first megabytes. Treat this like changing production firewall rules: snapshot first, change second, verify third.
Facts & history that actually help you debug
Boot problems are easier when you know why the world is this way. Here are some concrete facts that show up in real incidents:
- MBR is small by design: the classic MBR boot code lives in the first 446 bytes of sector 0, leaving 64 bytes for four partition entries and 2 bytes for the 0x55AA signature.
- GPT exists because MBR ran out of runway: MBR’s 32-bit sector addressing caps you around 2 TiB with 512-byte sectors; GPT uses 64-bit LBAs and doesn’t have that limit.
- GPT still keeps a “protective MBR”: sector 0 often contains an MBR that says “one big partition” of type 0xEE so old tools don’t trash the disk.
- UEFI doesn’t “scan for an OS” the way people assume: it uses NVRAM boot entries pointing to specific EFI binaries on a specific disk/partition path.
- The EFI System Partition is boring FAT on purpose: FAT32 was chosen for broad firmware compatibility, not because anyone loves it.
- Secure Boot changes the failure modes: the disk can be perfect and the bootloader intact, but the firmware refuses to run an unsigned (or differently signed) loader.
- Windows and Linux can coexist peacefully, but they disagree on who’s “in charge”: Windows updates sometimes restore Windows Boot Manager as the default, while Linux installs often put GRUB first.
- BIOS booting from GPT is a special case: GRUB needs a tiny unformatted “BIOS boot partition” (often 1–2 MiB) to stash core pieces it can’t fit in the MBR gap.
- Cloning disks can silently duplicate disk identifiers: this breaks boot entries and sometimes confuses OS-level mounting because the system sees “two of the same disk” identities.
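The MBR layout above is easy to verify by hand. A minimal sketch, shown against a scratch file so nothing is written to real hardware; once you trust the read half of it, you can point the final `dd` at `/dev/sdX` (read-only) to inspect a real sector 0:

```shell
# Build a 512-byte scratch "sector 0" and stamp the 0x55AA signature at bytes 510-511,
# then read it back the same way you would from a real disk.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 status=none
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc status=none  # \125\252 is octal for 0x55 0xAA
sig=$(dd if="$img" bs=1 skip=510 count=2 status=none | od -An -tx1 | tr -d ' ')
echo "signature: $sig"   # a bootable MBR (or GPT's protective MBR) ends in 55aa
rm -f "$img"
```

If a real disk's sector 0 does not end in 55aa, firmware in Legacy mode will treat it as non-bootable no matter what else is on it.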
Fast diagnosis playbook (check first/second/third)
First: firmware reality check (60 seconds)
- Is the disk even detected? If the BIOS/UEFI doesn’t see it, stop. This is cabling, controller mode, or hardware.
- Are you in UEFI mode or Legacy/CSM? The wrong mode is the #1 “nothing is wrong with the disk” incident.
- Boot order: if you just plugged in a USB drive, some firmware politely decides it’s the new king.
Second: identify the disk and partition style (5 minutes)
- From a live environment, determine: GPT or MBR? Is there an ESP? Is there a BIOS boot partition? Is the OS partition present?
- If Windows: locate the EFI partition and the Windows partition. If Linux: locate /boot, /boot/efi, and root.
Third: decide what you’re repairing (10 minutes)
- UEFI/GPT: repair NVRAM boot entry and/or restore EFI files; possibly rebuild BCD or reinstall GRUB EFI.
- BIOS/MBR: repair MBR boot code, active partition flag (for some setups), and OS bootloader chain.
- BIOS/GPT: ensure BIOS boot partition exists; reinstall GRUB to the disk (not a partition).
Stop conditions (when you should not “just try stuff”)
- SMART shows obvious failure (reallocated sectors climbing, read errors). Clone first.
- You’re not sure which disk is which. Label them. Disconnect extras if you can.
- The system is encrypted and you don’t know the recovery process. Repairing boot is easy; repairing “oops, I broke the only unlock path” is not.
Before you touch anything: safety rules that prevent data loss
If you want “without data loss,” you need to act like you mean it.
Rule 1: Don’t write until you can explain what you’re writing
Commands like bootrec /FixMbr, grub-install, or efibootmgr -c are not “diagnostics.” They modify disk or NVRAM state. Use them only after you’ve identified the correct disk and boot mode.
Rule 2: Clone or image if the disk looks sick
If the disk is failing, every reboot and every filesystem scan is a coin flip. Make an image to another disk and work on the copy.
Rule 3: Prefer reversible changes
Changing UEFI boot order, creating a new NVRAM entry, or reinstalling a bootloader to the ESP is usually reversible. Converting GPT↔MBR in-place is not a “repair”; it’s a migration with sharp edges.
Rule 4: One disk connected is the best kind of disk management
If it’s a laptop, you may not have the luxury. On desktops and servers, unplug all non-target disks. It prevents the classic mistake: you fixed the wrong drive perfectly.
Short joke #1: Boot repair is like surgery: it’s easy to be brave when it’s not your data on the table.
Practical tasks: commands, outputs, decisions (12+)
These tasks assume you booted a Linux live USB (because it’s a good neutral toolbox) unless the task is Windows-specific. Commands are realistic. Outputs are examples; yours will differ. The point is how to interpret them and decide.
Task 1: Confirm whether the live environment booted in UEFI or BIOS mode
cr0x@server:~$ ls -ld /sys/firmware/efi
drwxr-xr-x 5 root root 0 Feb 4 10:12 /sys/firmware/efi
What it means: If that directory exists, you’re running in UEFI mode. If it doesn’t, you’re in Legacy BIOS mode.
Decision: Match the repair mode to the installed OS mode. If Windows was installed in UEFI mode but you booted the live USB in Legacy mode, you’ll chase ghosts.
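The same check as a script-friendly one-liner; /sys/firmware/efi is the kernel's standard marker for a UEFI boot on Linux:

```shell
# If the kernel exposes /sys/firmware/efi, the live environment booted via UEFI.
[ -d /sys/firmware/efi ] && mode=UEFI || mode="BIOS/Legacy"
echo "boot mode: $mode"
```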
Task 2: Identify disks and partition tables quickly
cr0x@server:~$ lsblk -o NAME,SIZE,TYPE,FSTYPE,PARTTYPE,PARTLABEL,MOUNTPOINTS
NAME SIZE TYPE FSTYPE PARTTYPE PARTLABEL MOUNTPOINTS
nvme0n1 476.9G disk
├─nvme0n1p1 260M part vfat c12a7328-f81f-11d2-ba4b-00a0c93ec93b EFI System
├─nvme0n1p2 16M part e3c9e316-0b5c-4db8-817d-f92df00215ae Microsoft reserved
├─nvme0n1p3 200G part ntfs ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 Basic data
└─nvme0n1p4 276.6G part ext4 0fc63daf-8483-4772-8e79-3d69d8477de4 Linux filesystem
What it means: GPT partition type GUIDs appear in PARTTYPE. The ESP is vfat and has the well-known EFI GUID.
Decision: This is a UEFI/GPT system. Boot repair will focus on the ESP and UEFI boot entries, not MBR “active” flags.
Task 3: Verify GPT vs MBR with parted (more explicit)
cr0x@server:~$ sudo parted -s /dev/nvme0n1 print
Model: Samsung SSD 970 (nvme)
Disk /dev/nvme0n1: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 274MB 273MB fat32 EFI System Partition boot, esp
2 274MB 291MB 16.8MB Microsoft reserved partition msftres
3 291MB 215GB 215GB ntfs Basic data partition msftdata
4 215GB 512GB 297GB ext4
What it means: Partition Table: gpt and the boot, esp flags indicate the EFI System Partition.
Decision: Do not waste time with MBR repair commands. You may need to mount the ESP and restore bootloader files.
Task 4: Check whether the ESP contains plausible boot files
cr0x@server:~$ sudo mkdir -p /mnt/esp
cr0x@server:~$ sudo mount /dev/nvme0n1p1 /mnt/esp
cr0x@server:~$ ls -R /mnt/esp/EFI | head
/mnt/esp/EFI:
Boot
Microsoft
ubuntu
/mnt/esp/EFI/Boot:
BOOTX64.EFI
What it means: You have standard directories. EFI/Boot/BOOTX64.EFI is the “fallback path” many firmwares will try if NVRAM entries are missing.
Decision: If /mnt/esp/EFI is empty or missing Microsoft/ubuntu, you likely have an ESP corruption or deletion problem. Repair will be “restore files,” not “repartition disk.”
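If the fallback path itself is missing, restoring it is two commands. A hedged sketch, demonstrated against a scratch directory so it runs anywhere; point ESP at your real mounted ESP (e.g. /mnt/esp, with sudo). The shimx64.efi source path is an assumption from a typical Ubuntu layout, not a universal truth:

```shell
# Copy an existing loader to the fallback path EFI/Boot/BOOTX64.EFI so firmware
# with no NVRAM entries can still find something to run.
ESP="${ESP:-$(mktemp -d)}"
mkdir -p "$ESP/EFI/ubuntu" "$ESP/EFI/Boot"
[ -f "$ESP/EFI/ubuntu/shimx64.efi" ] || : > "$ESP/EFI/ubuntu/shimx64.efi"  # demo placeholder only
cp "$ESP/EFI/ubuntu/shimx64.efi" "$ESP/EFI/Boot/BOOTX64.EFI"
echo "fallback loader: $ESP/EFI/Boot/BOOTX64.EFI"
```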
Task 5: Inspect UEFI NVRAM boot entries (UEFI systems)
cr0x@server:~$ sudo efibootmgr -v
BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0002,0000,0001
Boot0000* Windows Boot Manager HD(1,GPT,3c4b2c2e-...,0x800,0x82000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)
Boot0001* ubuntu HD(1,GPT,3c4b2c2e-...,0x800,0x82000)/File(\EFI\ubuntu\shimx64.efi)
Boot0002* UEFI: USB Flash Disk PciRoot(0x0)/Pci(0x14,0x0)/USB(0x3,0x0)/HD(1,MBR,0x1234,0x800,0x3a9800)
What it means: Entries exist for Windows and Ubuntu. That’s good. If your system still says “No Boot Device,” it might be booting in Legacy mode, or the disk isn’t detected at boot, or Secure Boot blocks the loader.
Decision: If Windows Boot Manager entry is missing, recreate it. If entries exist but point to the wrong disk GUID (common after cloning), fix them or use fallback boot.
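Eyeballing GUIDs in efibootmgr output is error-prone, so a small filter helps. A sketch that screens entries for a partition GUID you expect; the sample lines and GUID fragment below are hypothetical, and in practice you would pipe in real `efibootmgr -v` output instead of $sample:

```shell
# Flag NVRAM entries that do not reference the expected partition GUID
# (a common post-cloning failure mode).
expected=3c4b2c2e
sample='Boot0000* Windows Boot Manager HD(1,GPT,3c4b2c2e-...)/File(\EFI\Microsoft\Boot\bootmgfw.efi)
Boot0001* old-clone HD(1,GPT,9f1d00aa-...)/File(\EFI\ubuntu\shimx64.efi)'
stale=$(printf '%s\n' "$sample" | awk -v g="$expected" \
  '/^Boot[0-9]/ { if (index($0, g) == 0) print "stale entry: " $1 }')
echo "$stale"
```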
Task 6: Recreate a missing Windows Boot Manager entry (UEFI)
cr0x@server:~$ sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Windows Boot Manager" -l '\EFI\Microsoft\Boot\bootmgfw.efi'
BootCurrent: 0002
BootOrder: 0003,0002,0000,0001
Boot0003* Windows Boot Manager HD(1,GPT,3c4b2c2e-...,0x800,0x82000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)
What it means: You created a new NVRAM entry pointing at the Microsoft EFI binary on partition 1.
Decision: Reboot and select it in firmware. If it boots, you’re done. If firmware forgets the entry after reboot, suspect CMOS battery/firmware bug or “Fast Boot” weirdness; rely on fallback path or firmware updates.
Task 7: Check for BIOS/MBR boot flags and “active” partition (legacy systems)
cr0x@server:~$ sudo fdisk -l /dev/sda
Disk /dev/sda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disklabel type: dos
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 1026047 1024000 500M 7 HPFS/NTFS/exFAT
/dev/sda2 1026048 1048575999 1047549952 499.5G 7 HPFS/NTFS/exFAT
What it means: Disklabel type dos indicates MBR. The * marks the active (bootable) partition—relevant for some BIOS boot chains.
Decision: If there’s no active partition in a BIOS/Windows setup, set it (carefully) on the system-reserved/boot partition, not the data partition.
Task 8: Set the active flag on the correct partition (MBR)
cr0x@server:~$ sudo fdisk /dev/sda
Command (m for help): a
Partition number (1-2): 1
Command (m for help): p
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 1026047 1024000 500M 7 HPFS/NTFS/exFAT
/dev/sda2 1026048 1048575999 1047549952 499.5G 7 HPFS/NTFS/exFAT
Command (m for help): w
The partition table has been altered!
What it means: You toggled the bootable flag.
Decision: Reboot and test. If the boot still fails, you likely need to repair boot code (MBR/VBR/BCD) rather than flags.
Task 9: Assess disk health before writing boot structures
cr0x@server:~$ sudo smartctl -a /dev/sda | egrep -i "Reallocated|Pending|Uncorrect|SMART overall"
SMART overall-health self-assessment test result: PASSED
Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
What it means: No obvious signs of media failure. “PASSED” is not a guarantee, but it’s not screaming either.
Decision: Proceed with boot repair. If pending sectors were non-zero, you’d clone first and avoid write-heavy tools.
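That go/no-go decision can be scripted as a screening filter over `smartctl -A` output. Attribute names and raw-value semantics vary by vendor, so treat this as a prompt to clone, not a diagnosis; the sample line is made up to show the failure branch:

```shell
# Flag any non-zero raw value on the attributes that usually mean "clone first".
sample='Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 12'
verdict=$(printf '%s\n' "$sample" | awk \
  '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {
     if ($NF + 0 > 0) { print "clone first: " $1 " raw=" $NF; bad = 1 }
   } END { if (!bad) print "no screening hits" }')
echo "$verdict"
```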
Task 10: Clone a disk before repairs when you smell trouble
cr0x@server:~$ sudo ddrescue -f -n /dev/sda /dev/sdb /mnt/recovery/ddrescue.log
GNU ddrescue 1.27
Press Ctrl-C to interrupt
Initial status (read from logfile)
rescued: 0 B, errsize: 0 B, errors: 0
Current status
rescued: 500000 MB, errsize: 0 B, current rate: 120 MB/s
ipos: 500000 MB, errors: 0, average rate: 115 MB/s
Finished
What it means: You copied the source disk to a destination disk while logging recoverability. ddrescue is built for imperfect media.
Decision: Do all further boot repair against /dev/sdb (the clone). Keep /dev/sda untouched as the original evidence.
Task 11: Mount Linux system partitions correctly for chroot repair
cr0x@server:~$ sudo mount /dev/nvme0n1p4 /mnt
cr0x@server:~$ sudo mount /dev/nvme0n1p1 /mnt/boot/efi
cr0x@server:~$ for i in /dev /dev/pts /proc /sys /run; do sudo mount --bind $i /mnt$i; done
cr0x@server:~$ sudo chroot /mnt /bin/bash
root@server:/# ls /boot/efi/EFI
Boot Microsoft ubuntu
What it means: You’re operating inside the installed OS environment, with the ESP mounted where the OS expects it.
Decision: This is the correct context to reinstall GRUB or regenerate initramfs without mangling the live USB’s environment.
Task 12: Reinstall GRUB for UEFI systems (Linux)
cr0x@server:~$ sudo chroot /mnt /bin/bash -c "grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu && update-grub"
Installing for x86_64-efi platform.
Installation finished. No error reported.
Generating grub configuration file ...
Found Windows Boot Manager on /dev/nvme0n1p1@/EFI/Microsoft/Boot/bootmgfw.efi
done
What it means: GRUB’s EFI binary and config were installed, and it detected Windows too.
Decision: Reboot and pick “ubuntu” or set boot order. If Secure Boot is enabled and you installed non-signed GRUB, you may need shim packages or to disable Secure Boot temporarily.
Task 13: Install GRUB for BIOS/MBR systems (Linux)
cr0x@server:~$ sudo chroot /mnt /bin/bash -c "grub-install /dev/sda && update-grub"
Installing for i386-pc platform.
Installation finished. No error reported.
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.8.0-31-generic
done
What it means: GRUB is installed to the disk’s MBR boot area (not to a partition), and the config was rebuilt.
Decision: Reboot. If it still fails, verify BIOS is actually set to boot from /dev/sda and that the disk is first in order.
Task 14: Detect GPT/BIOS boot partition requirement (Linux on GPT in BIOS mode)
cr0x@server:~$ sudo parted -s /dev/sda print | sed -n '1,20p'
Model: ATA ST1000DM010 (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB ext4
2 538MB 1000GB 999GB ext4
What it means: GPT disk, but no small partition with bios_grub flag. BIOS booting GRUB from GPT typically needs that.
Decision: If the machine boots in Legacy mode, create a 1–2 MiB BIOS boot partition (unformatted) and set the bios_grub flag—only if you can safely shrink/move partitions. If you can switch the firmware to UEFI mode, do that instead. It’s cleaner.
Task 15: Windows UEFI: rebuild BCD from recovery environment
From Windows Recovery Command Prompt (not Linux), you typically assign a drive letter to the ESP and rebuild boot files. Example:
X:\Sources> diskpart
DISKPART> list vol
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 C NTFS Partition 200 GB Healthy Boot
Volume 1 SYSTEM FAT32 Partition 260 MB Healthy System
DISKPART> sel vol 1
DISKPART> assign letter=S
DISKPART> exit
X:\Sources> bcdboot C:\Windows /s S: /f UEFI
Boot files successfully created.
What it means: bcdboot copied Windows boot files to the ESP and built a BCD store suitable for UEFI.
Decision: Reboot and set Windows Boot Manager first in firmware if needed. If it fails, the ESP may be mounted wrong, or the Windows install isn’t actually at C:\Windows in recovery mode—verify drive letters.
Task 16: Windows BIOS/MBR: repair boot chain (carefully)
X:\Sources> bootrec /FixMbr
The operation completed successfully.
X:\Sources> bootrec /FixBoot
Access is denied.
X:\Sources> bootrec /RebuildBcd
Successfully scanned Windows installations.
Total identified Windows installations: 1
[1] D:\Windows
Add installation to boot list? Yes(Y)/No(N)/All(A): Y
The operation completed successfully.
What it means: /FixMbr wrote MBR boot code. /FixBoot failing with “Access is denied” is common on some Windows builds and can indicate partition permissions/ESP confusion (especially when actually on UEFI/GPT) or recovery environment mismatch.
Decision: If you’re truly on BIOS/MBR, ensure the correct system partition is active and has a boot sector; sometimes bootsect /nt60 SYS is used. If the disk is GPT, stop using BIOS tools and switch to UEFI repair (bcdboot).
Task 17: Verify filesystem integrity on the ESP (FAT can get weird)
cr0x@server:~$ sudo umount /mnt/esp
cr0x@server:~$ sudo fsck.vfat -a /dev/nvme0n1p1
fsck.fat 4.2 (2021-01-31)
FATs differ but appear to be intact. Using first FAT.
Reclaimed 12 unused clusters (49152 bytes) in 3 chains.
What it means: Minor FAT inconsistencies repaired. If it reports severe corruption, you may need to recreate the ESP and restore boot files.
Decision: After repair, remount and verify the EFI directory contents again.
Repair paths by platform and boot style
UEFI + GPT: the “modern” default
This is most laptops and desktops from the last decade. Your recovery revolves around three things: disk partitions, files on the ESP, and NVRAM entries.
- If the disk is not detected in firmware: not a bootloader problem. Check controller mode (AHCI vs RAID), NVMe settings, cabling, firmware updates.
- If the ESP exists but is empty/damaged: restore EFI/Microsoft via bcdboot (Windows) or reinstall GRUB (Linux). Use fsck.vfat to repair minor corruption.
- If the ESP is fine but boot entries are missing: recreate entries with efibootmgr (Linux) or let Windows repair recreate them (Windows Recovery).
- If entries exist but point to the wrong disk ID (after cloning): create new entries referencing the correct disk/partition, then delete the old ones.
- If Secure Boot blocks: use signed shim/GRUB packages, enroll keys appropriately, or disable Secure Boot temporarily to confirm diagnosis.
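The post-clone entry rebuild from the list above can be staged safely. A hedged sketch with a DRY_RUN guard that prints each command instead of executing it, so you can read the plan before letting it write NVRAM; the disk, partition number, and entry 0004 are hypothetical placeholders:

```shell
# Wrapper: print the command when DRY_RUN=1 (the default), execute it otherwise.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then printf '+ %s\n' "$*"; else sudo "$@"; fi; }

run efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Windows Boot Manager" \
    -l '\EFI\Microsoft\Boot\bootmgfw.efi'    # register the loader found on the clone
run efibootmgr -b 0004 -B                    # delete the stale entry pointing at the old disk
run efibootmgr -o 0003,0000,0001             # put the new entry first
```

Run it once with the default, compare the printed plan against `efibootmgr -v`, then rerun with DRY_RUN=0.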
Legacy BIOS + MBR: still alive in appliances and older systems
This path is less “files” and more “tiny boot code segments.” Your targets:
- MBR boot code (first sector)
- Partition boot sector (VBR) for the active partition (Windows) or GRUB stage setup (Linux)
- BCD (Windows) or GRUB config (Linux)
The riskiest mistake here is installing boot code to the wrong disk. On multi-disk systems, it’s common to “fix” the data drive while the OS drive sits untouched and offended.
Legacy BIOS + GPT: the “don’t do this unless you must” setup
It can work, but it’s fragile and often breaks after disk moves or firmware toggles. If you inherit it, your best outcome is usually to switch to UEFI boot if the hardware supports it. Otherwise you’ll need a BIOS boot partition (GRUB’s stash space) and a properly installed GRUB to the disk.
What not to do: in-place conversions during an outage
Converting MBR↔GPT is possible. It’s also a different project than “boot repair,” especially if you’re trying to preserve data. Do it only with backups and change control. If your goal is to boot today, don’t attempt a format-flavored adventure.
Short joke #2: UEFI NVRAM entries are like printer settings: everyone’s sure they’re simple until they’re the only thing between you and a deadline.
Three corporate mini-stories from real life
Incident #1: The wrong assumption (“It’s a dead disk”)
We had a field laptop come back from a trip with “No Boot Device.” The user swore it happened after a battery drain. The helpdesk’s first instinct was “SSD died,” because that’s a clean narrative and everyone loves swapping parts.
Someone replaced the SSD. Still no boot. Now the narrative shifted to “motherboard failure,” because if the new SSD doesn’t boot, the machine must be cursed. A ticket bounced for a day while the user’s deadline approached.
When it landed with an SRE who’d been burned before, the first step was humiliatingly simple: confirm whether the firmware was in UEFI mode. It wasn’t. A BIOS reset had flipped it into Legacy/CSM. The original Windows install was UEFI/GPT. In Legacy mode, the firmware looked for an MBR boot signature, didn’t find one, and gave up.
Switch firmware back to UEFI, Windows Boot Manager reappears, machine boots. The “dead SSD” was fine. The replaced SSD was now a new problem: it had to be wiped and returned to stock inventory correctly.
Lesson: Don’t assume “No Boot Device” means storage failure. It often means the firmware is simply looking through the wrong door.
Incident #2: Optimization that backfired (the “golden image” ESP)
A corporate fleet team wanted faster provisioning. They built a “golden image” disk clone: deploy the same GPT layout, same OS, same apps. It was quick. It worked in the lab. Then it hit production laptops in batches.
A week later: random machines booting to “No Boot Device” after a firmware update. Not all. Just enough to make it interesting. The pattern was messy: mostly machines that had been reimaged recently.
Root cause: the cloning process duplicated disk identifiers and left UEFI NVRAM entries pointing at a specific disk/partition identity that wasn’t stable across the batch. Some firmware tolerated it and “rescanned.” Other firmware didn’t. After update, NVRAM got cleaned and the fallback path wasn’t populated consistently, so the machines had no valid pointer to an EFI binary.
The fix wasn’t magical. It was unglamorous: after imaging, always run a post-step that ensures the ESP has a correct fallback loader and registers boot entries cleanly for that device. And stop relying on firmware behaving the same across vendors.
Lesson: Boot is stateful. Cloning works until it doesn’t—and when it doesn’t, it fails at the worst possible time: first boot after a change.
Incident #3: The boring, correct practice that saved the day (imaging first)
A database engineer reported a workstation that stopped booting after a power outage. It held local analysis datasets that weren’t backed up well. The immediate impulse in the room was to “just run bootrec” because it’s quick and it feels like doing something.
The storage person asked one question: “What’s SMART say?” It showed pending sectors. Not a ton, but enough. The disk wasn’t dead; it was unreliable. Every additional read might be fine—or might be the one that tips it into unrecoverable failure.
They imaged the disk with ddrescue to a known-good SSD, using a logfile so they could retry bad areas without re-reading good ones. Only then did they do boot repairs on the clone. The clone booted after an EFI repair and BCD rebuild. The original disk later degraded further, as predicted.
Nothing about the process was heroic. It was just disciplined: preserve evidence, work on a copy, verify results. That discipline saved the data and avoided the awkward “we fixed boot but lost your files” postmortem.
Lesson: If there’s any hint of disk trouble, copy first. Boot repair is cheap; data recovery is expensive.
One operations maxim worth keeping, paraphrased from Richard Cook: complex systems succeed because people continually adapt and correct for hidden problems.
Common mistakes: symptom → root cause → fix
1) Symptom: “No Boot Device” right after BIOS reset or firmware update
Root cause: Boot mode flipped (UEFI ↔ Legacy/CSM), or boot order reset.
Fix: Set the mode back to what the OS was installed with. If GPT + ESP exists, use UEFI mode. Restore boot order to Windows Boot Manager or your GRUB entry.
2) Symptom: Disk appears in Linux live USB but not in firmware boot menu
Root cause: UEFI boot entry missing; firmware only boots registered entries and ignores fallback, or ESP path is wrong.
Fix: Use efibootmgr -v to inspect entries; create a new entry pointing to \EFI\Microsoft\Boot\bootmgfw.efi or \EFI\ubuntu\shimx64.efi. Ensure fallback \EFI\Boot\BOOTX64.EFI exists.
3) Symptom: Windows repair tools complain “Access is denied” on FixBoot
Root cause: Often you’re actually on UEFI/GPT and using the wrong repair path, or the system partition is not the one you think.
Fix: Prefer bcdboot C:\Windows /s S: /f UEFI after assigning the ESP a drive letter. Verify which volume is the ESP (FAT32, “System”).
4) Symptom: GRUB rescue prompt, “unknown filesystem”
Root cause: GRUB can’t find its modules/config due to moved partitions, overwritten core image, or missing /boot.
Fix: Boot live Linux, mount root and boot, chroot, reinstall GRUB (UEFI: --target=x86_64-efi; BIOS: install to disk), then update-grub.
5) Symptom: Dual-boot system now boots straight into Windows
Root cause: Windows update made Windows Boot Manager first in boot order, or rewrote the default entry.
Fix: Reorder with firmware UI or efibootmgr -o. If GRUB entry is missing, reinstall GRUB to the ESP and recreate the entry.
6) Symptom: After disk cloning, system boots only when the old disk is connected
Root cause: Boot entry points to the original disk’s GUID/location. The clone has different identity or partition GUIDs.
Fix: Create new UEFI entries pointing to the clone. Consider removing duplicates and using fallback EFI/Boot path.
7) Symptom: “No bootable device” only when USB drives are inserted
Root cause: Firmware boot order prioritizes removable media or treats an inserted USB as first boot option, but it isn’t bootable.
Fix: Adjust boot order; disable “boot from USB” if policy allows; or ensure the USB is actually bootable.
8) Symptom: Linux boots, Windows missing from GRUB
Root cause: os-prober disabled by distro policy, or Windows partition not accessible (BitLocker locked, hibernated, or fast startup).
Fix: Disable Windows fast startup; unlock BitLocker if needed; enable and run os-prober where appropriate, then regenerate GRUB config.
Checklists / step-by-step plan
Checklist A: Minimal-risk flow for “No Boot Device” (generic)
- Enter firmware setup: confirm disk is detected.
- Confirm boot mode: UEFI vs Legacy. Don’t guess.
- Set boot order to the correct disk/entry (Windows Boot Manager, or your Linux entry).
- Boot a live USB in the same mode (UEFI or BIOS) as the installed OS.
- Identify disks and partitions (lsblk, parted).
- If SMART looks bad, clone with ddrescue first.
- Mount the ESP and inspect EFI directories.
- Inspect and fix UEFI entries (efibootmgr) or MBR flags (fdisk).
- Repair bootloader:
  - Windows UEFI: bcdboot
  - Windows BIOS: bootrec (and active partition checks)
  - Linux UEFI: grub-install --target=x86_64-efi
  - Linux BIOS: grub-install /dev/sdX
- Reboot once. Verify. Don’t do ten repair loops; that’s how you lose track of state.
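The first fork in this checklist, boot mode versus partition table, can be written down as a tiny function: inputs are what you observed in firmware and from parted, output is the repair family to pursue.

```shell
# Map (firmware mode, partition table) to the matching repair path from this guide.
repair_family() {  # usage: repair_family <uefi|bios> <gpt|dos>
  case "$1/$2" in
    uefi/gpt) echo "ESP files + NVRAM entries" ;;
    bios/dos) echo "MBR boot code + active flag + loader chain" ;;
    bios/gpt) echo "bios_grub partition + grub-install to the disk" ;;
    uefi/dos) echo "unusual: likely wrong firmware mode, recheck" ;;
    *)        echo "unknown combination: gather more data" ;;
  esac
}
repair_family uefi gpt
```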
Checklist B: UEFI/GPT repair when ESP exists but boot entry is missing
- Mount the ESP and verify the target .efi file exists.
- Create a new entry with efibootmgr -c pointing to it.
- Optionally create/verify the fallback: \EFI\Boot\BOOTX64.EFI.
- Set boot order so the new entry is first.
- Reboot and confirm.
Checklist C: “I don’t trust this disk” flow (data-first)
- Boot live USB, do not run filesystem repairs yet.
- Check SMART; if any pending/uncorrectable sectors appear, stop.
- Clone with ddrescue to a new disk.
- Disconnect original disk.
- Repair boot on the clone.
- After boot is restored, back up data properly. Do not keep living on the clone as if nothing happened.
FAQ
1) Can I fix “No Boot Device” without reinstalling the OS?
Usually, yes. Most of the time the OS partitions are intact and only boot metadata is broken: wrong firmware mode, missing UEFI entry, damaged ESP, broken BCD, or broken GRUB.
2) How do I know if the disk is GPT or MBR?
From Linux: parted -s /dev/sdX print shows “Partition Table: gpt” or “msdos.” From Windows: diskpart → list disk shows a * under GPT for GPT disks.
3) What’s the fastest “tell” that I’m in the wrong boot mode?
If the disk is GPT and has an ESP, but the firmware is set to Legacy/CSM, you often get “No Boot Device.” Conversely, an MBR/BIOS install won’t magically boot in pure UEFI mode without a UEFI loader.
4) Is it safe to run bootrec or grub-install?
It’s safe when you know which disk you’re targeting and you’ve decided that’s the correct repair. It’s unsafe as a guessing game, especially on multi-disk systems. If the disk is failing, clone first.
5) Why does UEFI sometimes “forget” boot entries?
NVRAM can be reset by firmware updates, BIOS resets, or buggy firmware. Some systems also behave badly when disk identifiers change after cloning. Having a valid fallback loader at \EFI\Boot\BOOTX64.EFI helps.
6) Do I need to recreate the EFI System Partition if it’s corrupted?
Not always. Minor FAT corruption can be repaired with fsck.vfat. If the ESP is missing, badly damaged, or the partition table was altered, you may need to recreate it and restore boot files (Windows: bcdboot; Linux: grub-install).
7) What about BitLocker or full-disk encryption?
Encryption changes your priorities. You can often repair the EFI/bootloader without touching encrypted volumes, but make sure you have recovery keys and understand the boot chain. Don’t “repartition to fix it.”
8) My system is dual-boot and now only one OS shows up. Did I lose the other OS?
Not necessarily. Boot menus are just pointers. The other OS is usually still present; you need to repair GRUB config or restore the correct UEFI boot order/entries.
9) Should I convert MBR to GPT (or vice versa) to fix boot?
No, not as a first-line fix. Mode mismatch and missing boot files are far more common. Conversion is a controlled change that requires backups and careful planning.
10) When do I stop trying repairs and call it hardware?
If the disk isn’t detected in firmware, if SMART shows growing read errors/pending sectors, or if you can’t read partitions reliably from a live environment, treat it as hardware or media failure. Image first, then recover.
Conclusion: practical next steps
Boot failures feel dramatic because the system is dead silent, but the fix is often boring: the firmware is in the wrong mode, the ESP is missing files, or the boot entry points to nowhere.
Do this next, in order:
- Confirm disk detection and boot mode in firmware.
- Boot a live environment in the same mode as the installed OS.
- Identify GPT vs MBR, then locate ESP/system partitions.
- If the disk looks unhealthy, clone with ddrescue and repair the clone.
- Repair the correct layer: UEFI entries + ESP files for UEFI/GPT; MBR/active partition/BCD or GRUB for BIOS/MBR.
- Reboot once, verify, then make a real backup. A repaired bootloader is not a backup strategy.