You don’t install Gentoo because you enjoy waiting. You install it because you’re tired of guessing what your system is doing, why it’s slow, and which “helpful defaults” just burned your weekend.
In 2026, Gentoo is still the rare distro that lets you build a system like an operator: measurable, repeatable, and fast for the workload you actually run. The trick is to treat the install as the first production change. Make decisions, record them, and instrument everything.
Install principles: performance comes from repeatability
The Gentoo install isn’t hard because it’s complex. It’s hard because it’s honest. Other systems hide decisions in defaults and post-install scripts; Gentoo makes you pick. That’s a gift if you treat it like a production build pipeline:
- Record decisions in version control. Your /etc/portage, kernel config, boot config, and disk layout should be reproducible from scratch. If it’s not reproducible, it’s not reliable.
- Prefer known-good defaults over “benchmarked on a blog” tweaks. Performance work is mostly removing bottlenecks, not adding flags.
- Measure every change. You can’t improve what you don’t observe, and you can definitely ruin what you don’t measure.
- Optimize for rebuild speed, not just runtime speed. If you can’t rebuild quickly, you won’t patch quickly. That’s how “performance tuning” becomes a security incident.
One idea I’ve seen proven more times than I’ve seen it printed, paraphrasing Gene Kim: “You don’t get reliability by heroics; you get it by designing for fast, safe changes.”
Interesting facts and history that still matter
Gentoo’s culture has always been shaped by constraints: bandwidth, CPU time, and the desire to understand what’s running. A few concrete bits of history help explain why today’s best practices look the way they do:
- Gentoo’s name comes from the Gentoo penguin, often described as the fastest swimmer among penguins—yes, the branding has always been on-the-nose.
- Portage’s DNA was inspired by the BSD ports system: recipes (ebuilds) build software from source with options (USE flags).
- USE flags were an early, practical solution to the “everything depends on everything” problem: compile in features you need, skip those you don’t.
- Stage tarballs originally existed because bootstrapping a full toolchain on slow hardware could be punishing; stage3 became the sane default for most installs.
- Gentoo’s performance myth (that compiling everything with -O3 makes your system magically faster) has been wrong since before multicore CPUs were common. It persists anyway, like a bad office espresso machine.
- Binary packages have existed for a long time in Gentoo-land, but the modern “hybrid” workflow—compile what you must, install binaries when you can—has become mainstream because patch velocity matters.
- OpenRC became Gentoo’s default init system for years because it’s simple, transparent, and friendly to incremental changes; Gentoo also supports systemd if you want the ecosystem benefits.
- Gentoo Hardened popularized security-focused toolchain and kernel settings in a way that influenced “secure by default” thinking across distros.
- Cross-compiling and distcc were “cloud build” before cloud build was fashionable—teams built packages on beefy boxes to deploy on smaller systems.
Big decisions up front (and what I recommend)
1) Firmware and boot: UEFI, always (unless you have a museum)
UEFI isn’t perfect, but the tooling and expectations in 2026 assume it. Use GPT, keep a clean ESP, and pick a bootloader that your future self can debug at 3 a.m.
Recommendation: UEFI + GPT + a dedicated EFI System Partition (ESP) + GRUB or systemd-boot. If you want ZFS/Btrfs snapshots and kernel rollbacks, GRUB tends to be more flexible.
2) Init system: OpenRC vs systemd
This is not a morality play. It’s a dependency graph question.
- Pick OpenRC if you want minimalism, transparency, and fewer moving parts. It’s excellent for servers and for people who actually read logs.
- Pick systemd if you need first-class support from upstream projects, want standardized unit semantics, or you’re integrating with tooling that assumes it.
Recommendation: For a personal workstation, either is fine. For a fleet, pick the one your team can support consistently. Mixed-init fleets are how you get “works on host A” as a permanent lifestyle.
3) Filesystem: pick based on failure modes, not vibes
Gentoo won’t save you from storage physics. Your filesystem choice should reflect your needs for snapshots, bitrot protection, recovery, and predictable performance.
- ext4: boring, fast enough, reliable. If you don’t need snapshots, this is still a great default.
- XFS: strong for large files and parallel IO, great on servers; less friendly to tiny-root “tinker” installs.
- Btrfs: snapshots, compression, subvolumes. Great if you actually test restores and understand scrub and balance.
- ZFS: the storage engineer’s comfort blanket—checksums, snapshots, send/receive. Also heavier integration and an out-of-tree kernel module reality.
Recommendation: For most single-disk or simple NVMe systems: ext4 for root, plus optional Btrfs for data if you want snapshots. For serious data integrity and snapshot workflows: ZFS, but treat it like a platform decision.
4) Compile strategy: local builds, distcc, or binaries
Compiling is a means, not a personality trait. Your job is to produce a system that stays patched and stable.
- Local builds are simplest and surprisingly fine on modern CPUs.
- distcc helps when you have multiple machines and big builds, but introduces network and toolchain consistency risks.
- Binary packages are the best speed win for rebuild time and patch throughput, if you manage them intentionally.
Recommendation: Enable binary packages early. Keep local source builds for the parts you need to tune or audit.
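A sketch of what that looks like in /etc/portage/make.conf (the PKGDIR path shown is the Portage default; point it anywhere durable):

```shell
# /etc/portage/make.conf -- binary-package workflow sketch.
# Every successful compile also produces a reusable binary package.
FEATURES="buildpkg"

# Where built packages land; a NAS or artifact-server mount works
# just as well as the local default.
PKGDIR="/var/cache/binpkgs"
```

Install from the cache later with `emerge --usepkg`, or `--usepkgonly` when you want to forbid surprise compiles.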
Joke #1: Compiling Chromium on a laptop is a great way to test your thermal paste and your patience in a single run.
Practical tasks with commands: measure, then decide
Below are real tasks you can run during install (from the live environment or after chroot). Each includes: the command, example output, what it means, and what decision you make.
Task 1: Confirm UEFI mode (don’t guess)
cr0x@server:~$ ls /sys/firmware/efi
efivars fw_platform_size runtime systab
What it means: If that directory exists, you booted in UEFI mode. If it doesn’t, you’re in legacy/CSM mode.
Decision: If you intended UEFI and it’s missing, reboot and fix firmware boot settings now. Don’t continue and “convert later.” Later is how you get a split-brain boot setup.
Task 2: Identify disks and topology (NVMe vs SATA, model, size)
cr0x@server:~$ lsblk -o NAME,TYPE,SIZE,MODEL,ROTA,TRAN
NAME TYPE SIZE MODEL ROTA TRAN
nvme0n1 disk 953.9G Samsung SSD 990 0 nvme
nvme0n1p1 part 512M 0
nvme0n1p2 part 128G 0
nvme0n1p3 part 825.4G 0
What it means: ROTA 0 indicates SSD/NVMe. TRAN tells you transport. This matters for scheduler and expectations.
Decision: Confirm you’re installing to the right disk. If you see multiple disks, label them physically or via serial before partitioning.
Task 3: Check drive health quickly (SMART/NVMe)
cr0x@server:~$ smartctl -a /dev/nvme0n1 | sed -n '1,25p'
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.0] (local build)
=== START OF INFORMATION SECTION ===
Model Number: Samsung SSD 990
Serial Number: S7X3NX0W123456
Firmware Version: 1B2QJXD7
NVMe Version: 1.4
Total NVM Capacity: 1,000,204,886,016 [1.00 TB]
...
What it means: You’re validating you’re not installing onto a dying device or a counterfeit drive with weird firmware.
Decision: If SMART shows media errors, critical warnings, or a high percentage used, replace the drive before building anything “optimized.”
Task 4: Partition with GPT (ESP + root + optional swap)
cr0x@server:~$ parted -s /dev/nvme0n1 mklabel gpt
cr0x@server:~$ parted -s /dev/nvme0n1 mkpart ESP fat32 1MiB 513MiB
cr0x@server:~$ parted -s /dev/nvme0n1 set 1 esp on
cr0x@server:~$ parted -s /dev/nvme0n1 mkpart root ext4 513MiB 100%
cr0x@server:~$ parted -s /dev/nvme0n1 print
Model: Samsung SSD 990 (nvme)
Disk /dev/nvme0n1: 954GB
Partition Table: gpt
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB ESP boot, esp
2 538MB 954GB 953GB root
What it means: Clean GPT with a proper ESP. Sizes are aligned. Flags are set.
Decision: If you want full-disk encryption, you’d typically put LUKS on partition 2 and build your filesystem inside it. Decide now; retrofitting encryption later is surgery.
Task 5: Format filesystems with the right knobs
cr0x@server:~$ mkfs.vfat -F 32 -n EFI /dev/nvme0n1p1
mkfs.fat 4.2 (2021-01-31)
cr0x@server:~$ mkfs.ext4 -L gentoo-root /dev/nvme0n1p2
mke2fs 1.47.0 (5-Feb-2023)
Creating filesystem with 244190208 4k blocks and 61054976 inodes
Filesystem UUID: 8d3c1b5c-0e7d-4c6d-9a61-9a0d7b2e3c91
...
What it means: Labels and UUIDs are created. You’ll use UUIDs in fstab to survive device renames.
Decision: If you’re using SSDs, consider enabling periodic fstrim later. Don’t mount with weird legacy flags unless you can explain them.
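One way to act on that later, sketched as a weekly cron drop-in for OpenRC (systemd hosts can simply enable the stock fstrim.timer):

```shell
#!/bin/sh
# /etc/cron.weekly/fstrim -- periodic TRIM sketch for OpenRC hosts.
# -v reports how much was discarded; `|| true` keeps cron quiet on
# filesystems that don't support TRIM.
fstrim -v / || true
```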
Task 6: Mount layout (simple beats clever)
cr0x@server:~$ mount /dev/nvme0n1p2 /mnt/gentoo
cr0x@server:~$ mkdir -p /mnt/gentoo/boot
cr0x@server:~$ mount /dev/nvme0n1p1 /mnt/gentoo/boot
cr0x@server:~$ findmnt -R /mnt/gentoo
TARGET SOURCE FSTYPE OPTIONS
/mnt/gentoo /dev/nvme0n1p2 ext4 rw,relatime
/mnt/gentoo/boot /dev/nvme0n1p1 vfat rw,relatime,fmask=0022,dmask=0022
What it means: You can see exactly what’s mounted where. No surprises.
Decision: If you plan separate /var or /home, do it for a reason (IO isolation, snapshots, quotas), not because you read a 2009 forum post.
Task 7: Validate network and DNS (builds fail in dumb ways)
cr0x@server:~$ ip -brief addr
lo UNKNOWN 127.0.0.1/8 ::1/128
enp4s0 UP 192.168.10.24/24 fe80::3a2f:.../64
cr0x@server:~$ getent hosts distfiles.gentoo.org
192.250.249.52 distfiles.gentoo.org
What it means: You have an IP and DNS resolution works.
Decision: If DNS fails, fix it before you blame Portage. A surprising amount of “Gentoo is broken” is actually “my resolver is haunted.”
Task 8: Time sync (TLS and mirrors hate wrong clocks)
cr0x@server:~$ date
Wed Feb 5 12:44:02 UTC 2026
cr0x@server:~$ timedatectl status | sed -n '1,8p'
Local time: Wed 2026-02-05 12:44:10 UTC
Universal time: Wed 2026-02-05 12:44:10 UTC
RTC time: Wed 2026-02-05 12:44:09
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
What it means: Your clock won’t break TLS downloads or signed metadata verification.
Decision: If clock sync is off, fix NTP now. Broken time yields bizarre, intermittent failures—exactly the kind you’ll misdiagnose as “compiler issues.”
Task 9: Extract stage3 and verify you didn’t mount wrong
cr0x@server:~$ tar xpf stage3-amd64-openrc-*.tar.xz --xattrs-include='*.*' --numeric-owner -C /mnt/gentoo
cr0x@server:~$ ls /mnt/gentoo | head
bin
boot
dev
etc
home
lib
lib64
What it means: The filesystem is populated. If boot is empty but mounted, good. If it’s not mounted, you just unpacked into the wrong place and you’ll feel it later.
Decision: If you forgot to mount the ESP at /mnt/gentoo/boot before extraction, fix mounts and continue—don’t reinstall. But do verify what landed where.
Task 10: Chroot prep (bind mounts and resolv.conf)
cr0x@server:~$ mount --types proc /proc /mnt/gentoo/proc
cr0x@server:~$ mount --rbind /sys /mnt/gentoo/sys
cr0x@server:~$ mount --make-rslave /mnt/gentoo/sys
cr0x@server:~$ mount --rbind /dev /mnt/gentoo/dev
cr0x@server:~$ mount --make-rslave /mnt/gentoo/dev
cr0x@server:~$ cp -L /etc/resolv.conf /mnt/gentoo/etc/
cr0x@server:~$ chroot /mnt/gentoo /bin/bash
cr0x@server:/# source /etc/profile
cr0x@server:/# env | grep -E 'CHOST|CFLAGS|MAKEOPTS' || true
What it means: Your chroot behaves like a system, with devices and proc/sys mounted.
Decision: If DNS fails inside the chroot, /etc/resolv.conf is wrong. Fix it before emerging anything large.
Task 11: CPU and memory reality check (set MAKEOPTS accordingly)
cr0x@server:/# nproc
16
cr0x@server:/# free -h
total used free shared buff/cache available
Mem: 62Gi 2.1Gi 54Gi 175Mi 6.0Gi 59Gi
Swap: 0B 0B 0B
What it means: You have 16 threads and plenty of RAM. Builds will be CPU-bound, not memory-bound.
Decision: Set MAKEOPTS="-j16" (or -j15 if you want responsiveness). If you had 8GB RAM, you’d be more conservative, especially with LLVM-heavy builds.
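That reasoning can be sketched as a tiny helper that derives a conservative job count; the one-job-per-2-GiB cap is a rule of thumb, not official guidance:

```shell
#!/bin/sh
# Derive a conservative MAKEOPTS job count: one job per CPU thread,
# capped at roughly one job per 2 GiB of RAM so parallel link steps
# don't invite the OOM killer.
threads=$(nproc)
mem_gib=$(awk '/MemTotal/ { printf "%d", $2 / 1048576 }' /proc/meminfo)
cap=$(( mem_gib / 2 ))
[ "$cap" -lt 1 ] && cap=1
if [ "$threads" -lt "$cap" ]; then jobs=$threads; else jobs=$cap; fi
echo "MAKEOPTS=\"-j${jobs}\""
```

On the 16-thread, 62 GiB box above this prints MAKEOPTS="-j16"; on an 8 GiB machine it backs off to around -j3.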
Task 12: Measure compile bottleneck (CPU vs IO)
cr0x@server:/# emerge --info | sed -n '1,40p'
Portage 3.0.67 (python 3.12.7-final-0, default/linux/amd64/23.0, gcc-14.1.0, glibc-2.40, 6.6.0-gentoo x86_64)
System uname: Linux-6.6.0-gentoo-x86_64-Intel-R-_Core-TM-_i9-12900K-with-glibc2.40
KiB Mem: 65923304 total, 57213448 free
KiB Swap: 0 total, 0 free
...
What it means: You’re capturing baseline toolchain and profile info. This output becomes your “build receipt.”
Decision: Save this in your build notes. If performance changes later, compare. Gentoo makes drift easy; your job is to make drift obvious.
Task 13: Check Portage repository sync health
cr0x@server:/# emaint sync -a
=== Syncing repositories...
gentoo
Syncing with rsync mirror...
Successfully synced repository gentoo
What it means: Your tree is up to date. If sync fails, you’ll chase dependency ghosts.
Decision: If rsync is flaky in your environment, switch to a method that works reliably for you, and standardize it across machines.
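For example, if rsync is blocked or slow where you are, switching the main repo to git syncing is one small config file (a sketch; the location shown is the handbook default):

```ini
# /etc/portage/repos.conf/gentoo.conf -- sync the tree over git/HTTPS
# instead of rsync.
[gentoo]
location = /var/db/repos/gentoo
sync-type = git
sync-uri = https://anongit.gentoo.org/git/repo/gentoo.git
auto-sync = yes
```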
Task 14: Verify you’re using the right profile (don’t wing this)
cr0x@server:/# eselect profile list | sed -n '1,14p'
Available profile symlink targets:
[1] default/linux/amd64/23.0 (stable)
[2] default/linux/amd64/23.0/systemd (stable)
[3] default/linux/amd64/23.0/desktop (stable)
[4] default/linux/amd64/23.0/desktop/systemd (stable)
cr0x@server:/# eselect profile set 1
What it means: Profiles drive defaults: USE flags, masks, system packages. Picking one is choosing the operating model.
Decision: If you want systemd, choose it now. Changing later is doable, but it’s a migration project, not a toggle.
Task 15: Install kernel sources and pick a kernel approach
cr0x@server:/# emerge --ask sys-kernel/gentoo-sources
[ebuild N ] sys-kernel/gentoo-sources-6.6.74 USE="-experimental" 0 KiB
Would you like to merge these packages? [Yes/No] yes
What it means: You’re installing the kernel sources used by Gentoo’s patchset.
Decision: If you need minimal fuss, consider a distribution kernel package (where available) or a known-good config. If you need custom, do it—but keep the config tracked and reproducible.
Task 16: Confirm firmware needs before reboot (Wi-Fi is the classic trap)
cr0x@server:/# lspci -nn | grep -Ei 'network|ethernet|wifi'
02:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3]
03:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6E AX211 [8086:7a70]
What it means: You can predict whether you’ll need linux-firmware packages for networking after reboot.
Decision: If you see Wi‑Fi hardware, plan to install firmware before you cut over. Otherwise, your first boot becomes a networking scavenger hunt.
Storage engineering choices: ext4, XFS, Btrfs, ZFS
Storage is where performance tuning goes to die if you don’t understand the failure mode. For Gentoo, storage choices also impact rebuild speed, because builds are write-heavy: unpacking tarballs, compiling objects, linking, and writing packages.
ext4: the default that doesn’t make you interesting
ext4 is still the best answer for “I need this to boot, update, and survive.” It performs well on NVMe, it recovers predictably, and it doesn’t demand a PhD to scrub metadata.
Operationally, ext4 shines because when something goes wrong, you can usually fix it with standard tools and a clear mental model. That matters more than theoretical peak throughput.
Btrfs: snapshots and compression, but you must operate it
Btrfs is attractive for Gentoo because snapshots are a safety net: you can snapshot / before a risky @world upgrade and roll back if the kernel or libc gets spicy. Compression can reduce IO during builds, especially on slower SSDs.
The catch: Btrfs needs routine hygiene. If you use it, you should actually run scrub, check device errors, and understand subvolume layouts. A filesystem with features is a filesystem with responsibilities.
ZFS: integrity and rollback as a lifestyle
ZFS is for people who want strong checksumming, snapshots, and replication workflows. It’s excellent in production when operated properly. But it’s not “set and forget.” You’re choosing an ecosystem: kernel module compatibility, initramfs integration, pool layout decisions, and monitoring.
If you’re building a workstation and you don’t need ZFS’s integrity model, don’t use it because you saw a screenshot of a cool snapshot list. Use it because you want consistent, testable rollback and data protection.
One practical storage performance rule for Gentoo builds
If your builds feel slow, don’t start by tuning compiler flags. Start by asking: are you IO-bound by unpacking and writing small files? If yes, filesystem and storage latency dominate. NVMe helps. So does keeping PORTAGE_TMPDIR on fast local storage and avoiding network-mounted build directories.
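In make.conf terms, that advice is a couple of lines; the tmpfs sizing below is an assumption to adjust for your RAM:

```shell
# /etc/portage/make.conf -- keep build scratch space local and fast.
# Portage builds under ${PORTAGE_TMPDIR}/portage; this must never be
# a network mount.
PORTAGE_TMPDIR="/var/tmp"

# Optional fstab line (assumes RAM to spare): build in memory instead.
#   tmpfs  /var/tmp/portage  tmpfs  size=24G,mode=775  0 0
```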
Kernel strategy: generic, custom, and the “don’t be clever” middle
The kernel is where Gentoo newcomers either learn discipline or learn pain. You have three viable strategies:
Strategy A: Use a generic/distribution kernel
This is the best approach when you care about speed-to-working-system. You get broad hardware support, fewer surprises, and easier upgrades. If you’re deploying a fleet, standardize here.
Strategy B: Minimal custom kernel
This is the “don’t be clever” middle: start from a known-good config (often defconfig or a distro config), enable only what you need (filesystems, NVMe, network), and keep it reproducible. The goal is not a tiny kernel; the goal is a kernel you can rebuild and debug.
Strategy C: Aggressive custom kernel
Do this when you know why you’re doing it: latency-sensitive workloads, embedded constraints, or hard security requirements. If your rationale is “it will be faster,” stop. Most performance comes from the scheduler, IO path, and memory behavior—not from removing random drivers.
Joke #2: A custom kernel is like a tattoo: it can be meaningful, but “I was bored” isn’t the best justification.
Portage setup for speed without lies
Your Portage configuration is an interface between your hardware and your time. The goal isn’t “max optimization.” The goal is predictable builds, quick updates, and enough speed that you don’t postpone patches.
Make.conf: sane defaults
In 2026, the biggest mistakes are still classics: over-aggressive CFLAGS, over-parallelizing builds, and enabling every USE flag because “features are good.” Features are dependencies. Dependencies are attack surface and rebuild time.
What I recommend:
- Keep CFLAGS conservative. -O2 -pipe and your correct -march are usually fine. Avoid -Ofast unless you can validate correctness for your workload.
- Use MAKEOPTS based on memory. Threads are cheap until they aren’t. When linking, memory spikes happen.
- Use FEATURES="buildpkg" early. Binary packages are a time machine. You’ll thank yourself when you need to reinstall or roll back.
- Pin your global USE flags to reality. Enable what you use. If you’re not running Bluetooth, don’t compile Bluetooth across the world set.
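Those recommendations might look like this in /etc/portage/make.conf; treat it as a starting sketch, with -march=native assuming binaries never leave this host:

```shell
# /etc/portage/make.conf -- conservative starting point (a sketch).
# -march=native is fine for a single machine; pin an explicit target
# (e.g. -march=x86-64-v3) if binaries will ever be shared.
COMMON_FLAGS="-O2 -pipe -march=native"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"

# 16 threads with a load cap so the box stays responsive.
MAKEOPTS="-j16 -l16"
```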
Compiler choice: GCC is fine, LLVM is fine—just be consistent
Toolchain variance causes weirdness. Pick a compiler toolchain and standardize it across your systems if you share binaries. If you build binaries on host A and install on host B, align CPU targets and toolchain versions, or you’ll collect “illegal instruction” crashes like souvenirs.
ccache: good servant, bad master
ccache can cut rebuild time dramatically, especially for iterative kernel and large C/C++ builds. But it’s not magic. It uses disk. It can mask problems if you don’t invalidate when you should. Use it with monitoring and limits.
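A minimal setup sketch, assuming dev-util/ccache is installed:

```shell
# /etc/portage/make.conf -- route Portage compiles through ccache.
FEATURES="${FEATURES} ccache"
CCACHE_DIR="/var/cache/ccache"
```

Cap it (`ccache --max-size=20G`) and check `ccache --show-stats` occasionally; a persistently low hit rate means you’re spending disk for nothing.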
Binary packages: the grown-up compromise
Here’s the uncomfortable truth: the best “Gentoo performance optimization” is not a compiler flag. It’s a workflow. Binary packages let you rebuild once and deploy many times, or at least reinstall without paying the full compile tax again.
In practice, binary packages help with:
- Fast recovery. If a disk dies, you reinstall and pull binaries from your cache or repository.
- Safe upgrades. If an upgrade breaks userland, rolling back is easier when you can reinstall known-good binaries.
- Fleet consistency. If you run more than one Gentoo machine, consistent binaries reduce drift.
The tradeoff is operational: you need to store packages somewhere, manage signatures if you care, and keep ABI compatibility in mind. Worth it.
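On the consumer side, pointing a fleet at a shared package host is one small file; the URI below is a placeholder for your own server:

```ini
# /etc/portage/binrepos.conf/fleet.conf -- client side of a binhost
# (a sketch; sync-uri is a placeholder, and you should add package
# signing verification if the transport isn't fully trusted).
[fleet-binhost]
priority = 10
sync-uri = https://binhost.example.internal/packages
```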
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
At a mid-sized company with a mixed Linux fleet, a team decided to introduce Gentoo for a latency-sensitive service. They built a beautiful custom kernel, tuned the CPU flags, and pushed it to production. Everything looked fine—until the first routine update cycle.
The wrong assumption was subtle: they assumed “same CPU family” meant “same instruction set.” The build host had newer CPUs supporting additional instructions. The production hosts were similar brand and generation-ish, but not identical. Binary packages built with an aggressive -march ran fine in staging (same hardware as the build box) and crashed in production with Illegal instruction.
On-call initially suspected memory corruption or a bad compiler. The crash signatures were inconsistent because different processes hit different code paths. Logs were noisy. The service was flapping under load.
The fix was boring: rebuild with a conservative target aligned to the oldest CPU in the fleet, and introduce a CI gate that runs a small instruction-set validation test on representative hardware before rollout.
The lesson stuck: portability isn’t free, and “it boots” isn’t evidence. If you’re going to ship binaries across machines, you need a hardware compatibility contract.
Mini-story 2: The optimization that backfired
A different org tried to “speed up builds” by placing Portage’s temp directory on a network filesystem shared across builders. The idea was to centralize IO and reuse intermediate artifacts. On paper, it sounded efficient: shared storage, big cache, fewer duplicates.
In practice, it was a latency disaster. Builds create and delete mountains of small files. Metadata operations dominated. The network filesystem did what network filesystems do under that pattern: it became a distributed lock simulator.
Worse, when the storage team performed routine maintenance, the shared filesystem experienced brief stalls. Local builds hung. CI jobs piled up. Developers assumed Gentoo “couldn’t scale,” which was unfair, because the bottleneck wasn’t Gentoo. It was their IO architecture.
The rollback was simple: keep PORTAGE_TMPDIR local on NVMe for each builder, store final binary packages on shared storage, and rely on ccache plus distfiles mirroring rather than NFS’ing the build scratch space.
The lesson: optimization without a clear bottleneck is just an expensive way to move problems around.
Mini-story 3: The boring practice that saved the day
A regulated enterprise ran Gentoo on a handful of internal tooling servers—nothing glamorous, mostly build orchestration and artifact management. Their approach wasn’t fancy. It was disciplined.
Every config lived in version control: /etc/portage, kernel configs, bootloader configs, fstab, even a small “post-install” script that applied sysctl settings and service enables. When a system was installed, it wasn’t “hand configured.” It was “applied.”
One day, an overconfident change to global USE flags caused a rebuild that subtly altered dependency selection. A service started failing to start after reboot because a plugin was no longer built. This could have become a long blame session.
Instead, they did a clean rollback by checking out the previous known-good config commit, rebuilding binaries in CI, and redeploying. They were back in service before the business side had time to call it an outage.
The lesson: operational excellence is mostly paperwork and habits. It’s unsexy, and it works.
Fast diagnosis playbook: find the bottleneck in minutes
When your Gentoo install or build feels slow, don’t start “tuning.” Start diagnosing. Here’s the order that saves time.
First: Are you waiting on the network?
- Symptoms: emerge stalls during fetch, sync takes ages, intermittent timeouts.
- Check: DNS resolution, mirror reachability, packet loss.
cr0x@server:~$ ping -c 3 distfiles.gentoo.org
PING distfiles.gentoo.org (192.250.249.52) 56(84) bytes of data.
64 bytes from 192.250.249.52: icmp_seq=1 ttl=49 time=23.4 ms
64 bytes from 192.250.249.52: icmp_seq=2 ttl=49 time=24.1 ms
64 bytes from 192.250.249.52: icmp_seq=3 ttl=49 time=22.9 ms
--- distfiles.gentoo.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
Decision: If packet loss is non-zero or latency is unstable, fix network first or switch mirrors. Don’t “just retry” and call it a day.
Second: Are you IO-bound?
- Symptoms: CPU usage is low during builds, lots of waiting, system feels “stuttery.”
- Check: disk utilization, IO wait, filesystem health.
cr0x@server:~$ iostat -xz 1 3
Linux 6.6.0-gentoo (server) 02/05/26 _x86_64_ (16 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
12.30 0.00 4.10 28.70 0.00 54.90
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz aqu-sz %util
nvme0n1 45.0 2100.0 0.0 0.0 1.2 46.7 980.0 22000.0 0.0 0.0 16.5 22.4 16.3 98.0
Decision: High %iowait and near-100% disk %util means your storage is the limiter. Move build temp to faster storage, enable compression if appropriate, or reduce parallelism.
Third: Are you CPU or memory bound?
- Symptoms: CPU pegged at 100%, fans scream, link steps OOM, builds crash mysteriously.
- Check: load average vs CPU count, memory pressure, swapping.
cr0x@server:~$ uptime
12:58:02 up 1:14, 1 user, load average: 28.12, 23.55, 18.42
cr0x@server:~$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
18 2 0 1520000 110000 4200000 0 0 120 2200 4200 9000 65 8 10 17 0
Decision: If load far exceeds CPU threads and wa is high, it’s IO. If si/so swap activity appears, you’re memory constrained—reduce -j or add swap.
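A minimal triage sketch of that decision, comparing the 1-minute load average to the thread count (the threshold is a heuristic, not a rule):

```shell
#!/bin/sh
# Compare 1-minute load average to CPU thread count.
# Load far above threads plus high iowait points at IO; load near
# threads with low iowait points at CPU; swap-ins point at memory.
threads=$(nproc)
load=$(awk '{ printf "%d", $1 }' /proc/loadavg)
if [ "$load" -gt "$threads" ]; then
    verdict="load ${load} > ${threads} threads: check iowait and swap before touching -j"
else
    verdict="load ${load} within ${threads}-thread capacity: likely CPU-bound if busy"
fi
echo "$verdict"
```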
Common mistakes: symptom → root cause → fix
1) “It won’t boot after install”
Symptom: Boot drops to firmware menu or says “no bootable device.”
Root cause: Installed in legacy mode but expected UEFI, ESP not mounted at install time, or bootloader not installed to the ESP.
Fix: Boot the live media in UEFI mode, mount the ESP at /boot, reinstall bootloader, verify NVRAM entries.
cr0x@server:~$ efibootmgr -v
BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0003
Boot0001* gentoo HD(1,GPT,...)File(\EFI\gentoo\grubx64.efi)
Decision: If there’s no entry for Gentoo, create/reinstall it. If the entry exists but points to the wrong path, fix the bootloader install path.
2) “emerge is slow and CPU is idle”
Symptom: Build crawls, CPU usage low, disk light solid.
Root cause: IO bottleneck, often due to slow disk, bad filesystem choice for workload, or build dir on network storage.
Fix: Put PORTAGE_TMPDIR on fast local SSD/NVMe, ensure no weird mount options, reduce MAKEOPTS if you’re saturating IO with parallel writes.
3) “Random build failures that go away on retry”
Symptom: Compilation fails nondeterministically.
Root cause: Unstable RAM, overheating CPU, flaky storage, or aggressive overclocking; sometimes a misbehaving ccache.
Fix: Run memory tests, check thermals, check SMART, disable overclock, clear ccache.
cr0x@server:~$ dmesg -T | tail -n 12
[Wed Feb 5 13:10:22 2026] mce: [Hardware Error]: CPU 7: Machine Check: 0 Bank 5: bea0000000000108
[Wed Feb 5 13:10:22 2026] mce: [Hardware Error]: TSC 0 ADDR fef1a140 MISC d012000100000000
Decision: If you see MCEs, stop “debugging Gentoo” and start debugging hardware.
4) “World update wants to rebuild half the planet”
Symptom: Huge rebuild after a small change.
Root cause: Global USE flag changes, profile change, ABI break (compiler/glibc), or switching Python slots.
Fix: Make changes intentionally, review the depgraph, and prefer per-package USE flags for niche features. Use binary packages to make rebuilds less painful.
cr0x@server:~$ emerge -pvuDN @world | head -n 18
These are the packages that would be merged, in order:
Calculating dependencies... done!
[ebuild U ] sys-libs/zlib-1.3.1 [1.3] USE="minizip -static-libs"
[ebuild R ] media-libs/libpng-1.6.43 USE="apng -static-libs"
...
Decision: If the list is enormous, stop and ask what changed. Don’t approve a rebuild you can’t explain.
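Per-package USE flags live under /etc/portage/package.use; a sketch (the package/flag pairs below are illustrative examples, not a recommendation):

```shell
# /etc/portage/package.use/tuning -- scope niche features to the
# packages that need them instead of flipping global USE.
media-video/ffmpeg  vaapi
net-misc/curl       -ldap
```

Global USE stays small this way, and when a world update wants a surprise rebuild, a one-file diff here is much easier to explain than a make.conf change.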
5) “Wi‑Fi worked in live media but not after reboot”
Symptom: No wireless interface, or firmware load errors in dmesg.
Root cause: Missing firmware packages in the installed system.
Fix: Install firmware and ensure kernel config includes the right drivers.
cr0x@server:~$ dmesg -T | grep -i firmware | tail
[Wed Feb 5 14:02:11 2026] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-83.ucode failed with error -2
Decision: If you see failed with error -2, it’s missing firmware. Install it before touching network configs.
Checklists / step-by-step plan
Phase 0: Preflight (do this before touching disks)
- Boot live media in UEFI mode (/sys/firmware/efi exists).
- Confirm correct disk with lsblk and model/size.
- Check drive health (SMART/NVMe summary).
- Confirm network and DNS resolution.
- Sync clock (NTP active).
Phase 1: Disk layout (keep it simple)
- Create GPT.
- Create ESP (512MiB is fine).
- Create root partition (or LUKS container).
- Format ESP as FAT32; format root as ext4/Btrfs/ZFS as chosen.
- Mount root to /mnt/gentoo, ESP to /mnt/gentoo/boot.
Phase 2: Base system (stage3 + chroot)
- Extract stage3 with xattrs.
- Mount /proc, /sys, and /dev bind mounts.
- Copy resolv.conf.
- Chroot and load environment.
- Sync repositories and select profile.
Phase 3: Build system choices (make it fast to rebuild)
- Set conservative CFLAGS and sane MAKEOPTS.
- Enable binary packages (FEATURES="buildpkg") and decide where you store them.
- Optionally enable ccache and cap its size.
- Install kernel sources and pick kernel strategy.
- Install firmware packages relevant to your hardware.
Phase 4: Boot and first reboot (where installs go to die)
- Create /etc/fstab using UUIDs.
- Install and configure bootloader for UEFI.
- Generate initramfs if needed (encryption, ZFS, complex storage).
- Set hostname, networking, time zone, and users.
- Reboot and validate: boot logs, network, storage mounts.
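The fstab from this phase can reuse identifiers created earlier; in this sketch the root UUID matches the mkfs.ext4 output above and the ESP uses the FAT label EFI:

```shell
# /etc/fstab -- identifiers survive device renames; raw /dev paths don't.
UUID=8d3c1b5c-0e7d-4c6d-9a61-9a0d7b2e3c91  /      ext4  defaults,noatime                0 1
LABEL=EFI                                  /boot  vfat  defaults,fmask=0022,dmask=0022  0 2
```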
FAQ
1) Is Gentoo still worth installing in 2026?
Yes, if you care about control, reproducibility, and long-term performance tuning. No, if you want a system that hides complexity and makes decisions for you.
2) Should I use -march=native?
On a single machine that will never share binaries, it’s fine. On a fleet or any workflow that installs binaries across hosts, it’s a trap unless hardware is identical.
3) OpenRC or systemd: what’s the operational difference?
OpenRC is transparent and lightweight; systemd integrates tightly with modern Linux userland expectations. Pick one and standardize. The worst init system is “both, depending on the host.”
4) Do I need swap on a modern machine?
If you compile large C++ projects, having some swap can prevent catastrophic OOM kills during linking. Even a small swapfile is cheap insurance, especially on systems with 16GB RAM or less.
5) What filesystem is best for Gentoo performance?
On NVMe, ext4 is hard to beat for low-friction speed. Btrfs can be great with compression and snapshots if you operate it. ZFS is excellent for integrity and rollback workflows but comes with integration overhead.
6) How do I make @world updates less painful?
Use binary packages, avoid casual global USE changes, keep your profile stable, and review the planned merge list before you hit enter.
7) What’s the fastest safe way to speed up builds?
Start with binaries + ccache + sane parallelism. Hardware helps too: more RAM and fast NVMe often beat any compiler-flag gymnastics.
8) Why do my builds fail only on one machine?
Usually hardware instability, thermal throttling, instruction set mismatch, or a divergent toolchain. Check MCE logs, CPU flags, and toolchain versions before blaming the ebuild.
9) Can I do Gentoo on laptops without suffering?
Yes: use binary packages aggressively, avoid compiling on battery, and don’t treat your laptop as a build farm unless you like heat and fan noise.
10) What should I keep in version control?
/etc/portage, kernel configs, bootloader configs, and any post-install scripts or sysctl settings. If it changes behavior, it belongs in a repo.
Conclusion: next steps you can do today
A good Gentoo install isn’t a flex. It’s an operational asset: a system you can rebuild, patch, and tune without superstition. The main move is to stop treating the install like a one-time ritual and start treating it like a build product.
Next steps that pay off immediately:
- Commit your /etc/portage and kernel config to version control.
- Enable binary packages and decide where they live (local cache, NAS, or artifact server).
- Run the fast diagnosis playbook once on a big build to learn whether you’re CPU-, IO-, or network-bound.
- Pick one optimization you can measure (ccache hit rate, build time, IO wait) and improve it without breaking reproducibility.
If you do that, you’ll get the real Gentoo benefit: not “compiled,” but controlled. And faster forever, because you can prove what changed.