Smoothness Isn’t FPS: Frame Time Explained in 2 Minutes

You’re getting 144 FPS. The counter is smug. The game still feels like it’s stepping on LEGO every few seconds.
You lower settings. You update drivers. You sacrifice a goat to the patch notes. Still stutter.

Here’s the fix for your mental model: smoothness isn’t “high FPS.” Smoothness is “consistent frame time.”
If you remember one metric after this, make it frame time.

Frame time in two minutes (the only definition that matters)

FPS is a rate: frames per second. Frame time is a duration: how long one frame took to produce.
Your brain experiences duration more directly than it experiences rate. That’s the whole story.
Everything else is details and blame assignment.

Convert FPS to frame time

Frame time (milliseconds) ≈ 1000 / FPS.

  • 60 FPS → ~16.67 ms per frame
  • 120 FPS → ~8.33 ms per frame
  • 144 FPS → ~6.94 ms per frame
  • 240 FPS → ~4.17 ms per frame
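
If you want the conversion handy, here is a minimal Python sketch of the same arithmetic; it assumes nothing about any particular game or capture tool.

def fps_to_frame_time_ms(fps: float) -> float:
    # Frame time is just the reciprocal of the rate, expressed in milliseconds.
    return 1000.0 / fps

def frame_time_ms_to_fps(ms: float) -> float:
    return 1000.0 / ms

for fps in (60, 120, 144, 240):
    print(f"{fps:>3} FPS -> {fps_to_frame_time_ms(fps):.2f} ms per frame")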

If every frame arrives exactly every 6.94 ms at 144 FPS, motion looks butter-smooth.
If most frames arrive every 6–7 ms but every few seconds one frame takes 40 ms, you’ll feel a hitch.
The average FPS might still be “high.” Your experience won’t be.

What the frame time graph really means

A frame time graph is just a timeline of “how late was this frame.”
Flat line good. Spikes bad. Sawtooth patterns are usually pacing issues, queueing, or synchronization.
Random peppered spikes are often background work (compilation, IO, GC, antivirus scans, telemetry, overlays).

“But my FPS counter says 140!” Sure. If you take 139 frames at 7 ms and 1 frame at 40 ms, the average still looks good.
Your eyes will remember the 40 ms betrayal.
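
Here is that averaging effect as a tiny Python sketch; the 139-frames-at-7-ms scenario is the illustrative one from above, not a real capture.

# One second of frames: 139 quick ones and a single 40 ms hitch.
frame_times_ms = [7.0] * 139 + [40.0]

total_seconds = sum(frame_times_ms) / 1000.0
average_fps = len(frame_times_ms) / total_seconds    # about 138 FPS
worst_frame_ms = max(frame_times_ms)                 # 40 ms: the frame you feel

print(f"average: {average_fps:.0f} FPS, worst frame: {worst_frame_ms:.0f} ms")

The average barely moves; the worst frame is the entire story.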

Joke #1: FPS counters are like quarterly dashboards—great at averaging away the pain until someone screams in the meeting.

Why FPS lies (and why your eyes don’t)

FPS is typically sampled and averaged over a window (sometimes a full second).
Frame time is per-frame. Per-frame is where stutter lives.
If you’re troubleshooting “it feels off,” you don’t want the average. You want the worst moments.

Three numbers that matter more than average FPS

  • 1% low FPS: the frame rate computed from the slowest 1% of frames. A better stutter indicator than the average.
  • 0.1% low FPS: the slowest 0.1%. This catches the “once every minute I hitch” problem.
  • Frame time percentile: e.g., 99th percentile frame time. That’s “how bad is bad.”
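
One way to compute all three from a capture, sketched in Python. The frame times are made up, and different tools define "1% lows" slightly differently (mean of the slowest 1% versus a strict percentile), so treat this as the idea, not a reference implementation.

import statistics

def low_fps(frame_times_ms, fraction=0.01):
    # Average frame time of the slowest `fraction` of frames, expressed as FPS.
    slowest = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(slowest) * fraction))
    return 1000.0 / statistics.mean(slowest[:n])

def percentile_frame_time_ms(frame_times_ms, pct=99):
    # "How bad is bad": the frame time that pct% of frames stay under.
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

capture = [7.0] * 990 + [40.0] * 10   # illustrative: mostly smooth, ten hitches
print(f"1% low:   {low_fps(capture, 0.01):.0f} FPS")
print(f"0.1% low: {low_fps(capture, 0.001):.0f} FPS")
print(f"p99 frame time: {percentile_frame_time_ms(capture):.1f} ms")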

What you’re really noticing: variance

Humans detect irregularity. A steady 60 FPS can feel smoother than a wildly swinging 90–180 FPS.
The “feel” is largely about frame pacing consistency: time between frames being uniform.
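
A rough way to put a number on pacing consistency is the average frame-to-frame change in frame time. A small, purely synthetic Python comparison:

import statistics

def pacing_jitter_ms(frame_times_ms):
    # Mean absolute change between consecutive frame times; lower is smoother.
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return statistics.mean(deltas)

steady_60 = [16.7] * 120        # flat ~60 FPS
swinging  = [5.6, 11.1] * 60    # alternating ~180 FPS and ~90 FPS

print(f"steady 60 FPS:   {pacing_jitter_ms(steady_60):.2f} ms of jitter")
print(f"swinging 90-180: {pacing_jitter_ms(swinging):.2f} ms of jitter")

The swinging sequence has roughly double the average FPS and far worse pacing; your eyes vote for the flat line.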

Why “more FPS” can make things feel worse

Uncapped FPS can slam the CPU render thread, create larger queues, and increase latency. Then you get:

  • More heat → throttling → periodic frame time spikes
  • More contention → OS scheduling jitter
  • More power draw → GPU boost oscillations (especially on laptops)
  • More swapchain and sync weirdness → uneven pacing

The counter climbs, the game feels worse, and you start gaslighting yourself. Don’t.
Cap your FPS to a sustainable value and watch frame times flatten.
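
A cap works because the render loop waits out its leftover budget instead of racing ahead. A toy Python sketch of the idea (not any engine's or driver's actual limiter):

import time

TARGET_MS = 1000.0 / 141.0   # cap a few frames below a 144 Hz refresh

def simulate_and_render():
    pass  # stand-in for the real per-frame work

for _ in range(1000):
    start = time.perf_counter()
    simulate_and_render()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    # Spend the spare budget waiting, so frames leave at a steady cadence
    # instead of piling up in queues and heating the GPU for no visible gain.
    if elapsed_ms < TARGET_MS:
        time.sleep((TARGET_MS - elapsed_ms) / 1000.0)

Real limiters are fussier (they spin-wait the last fraction of a millisecond because OS sleep granularity is coarse), but the principle is the same: a predictable cadence beats a higher peak.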

Interesting facts and history you can actually use

  1. VSync was originally about tearing, not “smoothness.” It syncs presentation to the display refresh,
    but can introduce hitching if frames miss the deadline and wait a whole refresh cycle.
  2. “1% lows” became popular because averages hid stutter. Benchmarkers started reporting lows once high-FPS hardware made
    “average” meaningless for perceived quality.
  3. Early 3D games were often CPU-limited, not GPU-limited. Transform and lighting used to be done on the CPU
    before GPUs took over, so stutter patterns looked different: heavy simulation spikes rather than shader spikes.
  4. Shader compilation stutter is a modern classic. Engines that compile shaders on demand can hitch when you first see a new effect.
    Caching helps, but only if the pipeline is implemented and persisted correctly.
  5. VR made frame pacing non-negotiable. VR discomfort forced the industry to treat missed frame deadlines as a first-class failure,
    not an aesthetic nuisance.
  6. Frame pacing bugs existed even when FPS was “fine.” Some older drivers and game engines produced uneven presentation
    (multi-GPU AFR was notorious), leading to “microstutter” despite high average FPS.
  7. Modern OS schedulers can create jitter under load. Background tasks, power management, and interrupt storms can steal
    time slices at the wrong moment and show up as frame time spikes.
  8. Storage got fast, but IO stalls didn’t vanish. NVMe reduced average load times, but a single blocked IO on the wrong thread
    can still cause a visible hitch if the engine isn’t designed to stream correctly.

Where frame time goes to die: common bottleneck patterns

Stutter is rarely “one thing.” It’s usually one critical path that occasionally blocks.
Think like an SRE: we don’t fix “latency.” We fix the tail latency, and we find the dependency that owns it.

CPU-bound: the main thread is late

Symptoms: GPU usage is low or fluctuating, one CPU core is pegged, and frame time spikes during AI, physics, world streaming,
garbage collection, or heavy draw-call scenes.

Practical approach: reduce simulation cost, reduce draw calls, cap FPS, or move work off the main thread.
If you’re a player, you can’t rewrite the engine, but you can change settings that hit the CPU: view distance, crowd density,
physics/particles, and ray tracing (BVH builds hurt the CPU, not just the GPU).

GPU-bound: rendering is late

Symptoms: GPU usage high and stable, frame time rises with resolution and heavy effects, spikes correlate with explosions,
volumetrics, RT, or post-processing. The graph may still spike if clocks throttle.

Practical approach: drop resolution, disable the expensive effects first (RT, volumetrics, shadows), use DLSS/FSR/XeSS if available,
and avoid uncapped FPS if it causes thermal throttling.

Frame pacing / sync problems: the pipeline is uneven

Symptoms: average FPS is high, but the frame time graph shows periodic spikes at a regular interval (e.g., every second, or at a multiple of the refresh interval).
Often tied to VSync, triple buffering, borderless window mode, overlays, capture software, or bad frame limiters.
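
You can check for a regular cadence directly: timestamp the spikes and look at the gaps between them. A hedged Python sketch, assuming you already have per-frame times in milliseconds from a capture; the 20 ms threshold and the sample data are arbitrary:

import statistics

def spike_gaps_seconds(frame_times_ms, threshold_ms=20.0):
    # Timestamp every frame, keep the spikes, and measure the gaps between them.
    spikes, t = [], 0.0
    for ft in frame_times_ms:
        t += ft / 1000.0
        if ft >= threshold_ms:
            spikes.append(t)
    return [b - a for a, b in zip(spikes, spikes[1:])]

capture = ([7.0] * 142 + [33.0]) * 5   # illustrative: one 33 ms hitch per ~second
gaps = spike_gaps_seconds(capture)
print(f"spike gaps: {[round(g, 2) for g in gaps]}, mean {statistics.mean(gaps):.2f} s")

A suspiciously round gap (exactly one second, or a clean multiple of the refresh interval) points at a scheduled task or a sync mismatch rather than random background noise.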

IO and storage: the hitch you can’t “lower graphics” away

Symptoms: spikes when entering new areas, turning quickly, opening menus, or after long play sessions.
Disk queue rises, page faults spike, and the system feels “sticky” across apps.

Causes: asset streaming stalls, shader cache writes, insufficient RAM leading to paging, antivirus scanning game files,
or a drive doing background maintenance. Fast storage helps, but bad thread architecture makes fast storage irrelevant.
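
The thread-architecture point is worth seeing in miniature. A toy Python sketch contrasting a blocking read on the frame loop with handing the same read to a streaming worker; the queue and function names are hypothetical, not any engine's API:

import threading, queue

requests = queue.Queue()
loaded = {}

def streaming_worker():
    # All blocking reads happen here, never on the frame loop.
    while True:
        path = requests.get()
        if path is None:
            break
        with open(path, "rb") as f:   # the potentially slow part
            loaded[path] = f.read()

def load_on_frame_thread(path):
    # Anti-pattern: a slow read here stalls the frame for its full duration,
    # no matter how fast the drive usually is.
    with open(path, "rb") as f:
        loaded[path] = f.read()

def request_streaming(path):
    # The frame only enqueues the request and moves on; no hitch.
    requests.put(path)

threading.Thread(target=streaming_worker, daemon=True).start()

The storage is identical in both paths; only the thread that absorbs the latency changes, which is why NVMe alone doesn't fix this class of hitch.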

Network and server tick: online games can stutter too

Not all “stutter” is frame time. Packet loss and jitter can look like hitching because player motion snaps or rubber-bands.
Distinguish render stutter (frame time spikes) from simulation/network stutter (frame time steady, but motion inconsistent).

Fast diagnosis playbook (first/second/third checks)

When you’re under pressure—tournament night, demo for leadership, “why does this workstation feel awful”—you need triage.
This is the shortest path to a useful answer.

First: confirm it’s frame time, not network or perception

  1. Enable a frame time graph (in-game, overlay, or capture tool).
  2. Reproduce the hitch three times in the same spot/action.
  3. If the graph spikes with the hitch: it’s render/simulation latency. If it doesn’t: suspect network or input issues.

Second: decide if you’re CPU-bound or GPU-bound

  1. Watch GPU utilization and clocks during the hitch.
  2. If GPU usage is low when frame time is high: CPU or sync/driver stall.
  3. If GPU usage is pegged and frame time rises with resolution: GPU-bound.

Third: eliminate “external sabotage”

  1. Disable overlays (capture, chat, performance, RGB software).
  2. Check for background tasks: antivirus scans, updates, indexing, telemetry.
  3. Check storage health and paging: low RAM and page faults are hitch factories.

Fourth: apply the fastest stabilizers

  • Cap FPS (in-game limiter preferred). Target: slightly below refresh (e.g., 141 on 144 Hz).
  • Enable VRR (G-Sync/FreeSync) if supported; pair with sane caps.
  • Pick a power plan that doesn’t downclock aggressively under transient load.

If you do those four steps and still have spikes, you’re not “missing a setting.” You’re dealing with a content/engine/driver issue,
or with hardware/OS instability. That’s when you gather evidence instead of toggling checkboxes blindly.

Practical tasks: 14 real commands, what they mean, what you decide

These are written like an SRE runbook because that’s what performance troubleshooting is: an incident with graphs,
hypotheses, and containment. Most commands are Linux-flavored because they’re reproducible and honest.
If you’re on Windows, the mental model still maps: you’re looking for CPU saturation, GPU stalls, IO queueing, and memory pressure.

Task 1: Confirm refresh rate and current mode

cr0x@server:~$ xrandr --current
Screen 0: minimum 8 x 8, current 2560 x 1440, maximum 32767 x 32767
DP-1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 596mm x 335mm
   2560x1440     143.91*+
   1920x1080     143.85
HDMI-1 disconnected (normal left inverted right x axis y axis)

What it means: You’re actually running 143.91 Hz on DP-1. Good.
If you thought you were at 144 Hz but you’re at 60 Hz, no amount of tuning fixes “feels laggy.”

Decision: If refresh is wrong, fix display settings/cable/port before touching anything else.

Task 2: Check compositor / vsync path (Wayland/X11 clue)

cr0x@server:~$ echo $XDG_SESSION_TYPE
wayland

What it means: You’re on Wayland. Some games and overlays behave differently; capture tools and compositors can add latency or pacing artifacts.

Decision: If stutter is new after a session type change, test the other session type as a control.

Task 3: Live CPU pressure and per-core saturation

cr0x@server:~$ mpstat -P ALL 1 3
Linux 6.8.0 (host) 	01/13/2026 	_x86_64_	(16 CPU)

12:10:01 AM  CPU   %usr %nice %sys %iowait %irq %soft %steal %idle
12:10:02 AM  all   28.10  0.00  6.20   0.15 0.00  0.35   0.00 65.20
12:10:02 AM    0   92.00  0.00  5.00   0.00 0.00  0.00   0.00  3.00
12:10:02 AM    1   18.00  0.00  4.00   0.00 0.00  0.00   0.00 78.00

What it means: CPU0 is nearly pegged while others are idle-ish. That screams “main thread bound.”

Decision: Reduce CPU-heavy settings, cap FPS, and check for background interrupts or a single-thread bottleneck.

Task 4: Confirm memory pressure (paging causes hitches)

cr0x@server:~$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi        27Gi       1.2Gi       1.1Gi       3.0Gi       2.8Gi
Swap:           16Gi       7.8Gi       8.2Gi

What it means: Swap is actively used. Not necessarily fatal, but active swap during gameplay is a classic stutter cause.

Decision: Close memory hogs, reduce texture quality, or add RAM. If swap grows during play, treat it as a real incident.

Task 5: Watch major page faults in real time

cr0x@server:~$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0 812345 1213456 102400 2800000  15   22   120   340 5200 8800 31  7 60  2  0
 3  1 813112 1189000 102400 2792000  80  110   900  1400 6400 9900 35  8 52  5  0

What it means: Non-zero si/so means swap in/out is happening. That’s IO on your memory path.

Decision: If swap activity correlates with hitches, stop treating graphics settings as the problem.

Task 6: Identify IO queueing and disk saturation

cr0x@server:~$ iostat -xz 1 3
Linux 6.8.0 (host) 	01/13/2026 	_x86_64_	(16 CPU)

Device            r/s     w/s   rkB/s   wkB/s  await  %util
nvme0n1          12.0    85.0   640.0  8200.0   6.50  78.00
sda               0.0     2.0     0.0    64.0  25.00   5.00

What it means: NVMe is busy (%util high). await ~6.5 ms is okay-ish, but spikes matter.
If you see await jumping into tens/hundreds of ms during hitches, IO is involved.

Decision: If IO is saturated, move the game to faster storage, exclude it from AV scanning, and check for background writes.

Task 7: Look for filesystem and block-layer latency spikes

cr0x@server:~$ sudo dmesg -T | tail -n 8
[Tue Jan 13 00:11:02 2026] nvme nvme0: I/O 123 QID 5 timeout, aborting
[Tue Jan 13 00:11:02 2026] nvme nvme0: Abort status: 0x371
[Tue Jan 13 00:11:03 2026] EXT4-fs warning (device nvme0n1p2): ext4_end_bio:343: I/O error 10 writing to inode 262145 starting block 12345678

What it means: That’s not “stutter,” that’s a storage incident. Timeouts and IO errors will manifest as nasty hitches and eventually data loss.

Decision: Stop benchmarking. Back up data. Check SMART, cables, thermals, firmware.

Task 8: Check NVMe SMART and thermal throttling indicators

cr0x@server:~$ sudo smartctl -a /dev/nvme0
SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning:                   0x00
Temperature:                       79 Celsius
Available Spare:                   100%
Percentage Used:                   6%
Data Units Read:                   12,345,678
Data Units Written:                9,876,543
Warning Comp. Temperature Time:    0
Critical Comp. Temperature Time:   12

What it means: 79°C and non-zero “critical temperature time” suggest past throttling events.
Throttling can create periodic frame time spikes due to IO latency ballooning.

Decision: Improve airflow, add a heatsink, reposition the drive, or update firmware if needed.

Task 9: Confirm GPU driver and basic GPU stats (NVIDIA example)

cr0x@server:~$ nvidia-smi --query-gpu=name,driver_version,utilization.gpu,clocks.sm,temperature.gpu,pstate --format=csv
name, driver_version, utilization.gpu [%], clocks.sm [MHz], temperature.gpu, pstate
NVIDIA GeForce RTX 4070, 550.54.14, 96 %, 2475 MHz, 73, P0

What it means: The GPU is nearly pegged and in P0 (the maximum performance state). Likely GPU-bound at this moment.

Decision: If frame time is high here, lower GPU-heavy settings or use an upscaler.

Task 10: Verify GPU throttling reasons (NVIDIA)

cr0x@server:~$ nvidia-smi -q -d PERFORMANCE | sed -n '1,80p'
==============NVSMI LOG==============
Performance State                          : P0
Clocks Throttle Reasons
    Idle                                   : Not Active
    Applications Clocks Setting             : Not Active
    SW Power Cap                            : Not Active
    HW Slowdown                             : Active
    HW Thermal Slowdown                     : Active
    Sync Boost                              : Not Active
    SW Thermal Slowdown                     : Not Active

What it means: Thermal slowdown is active. Your GPU is intermittently braking.
That produces a very specific “runs fine, then hitches” pattern.

Decision: Fix cooling, reduce power limit slightly, or cap FPS to reduce heat without killing responsiveness.

Task 11: Catch background CPU hogs when a hitch happens

cr0x@server:~$ top -b -n 1 | head -n 20
top - 00:12:44 up  2:31,  1 user,  load average: 4.21, 3.88, 3.51
Tasks: 329 total,   2 running, 327 sleeping,   0 stopped,   0 zombie
%Cpu(s): 36.3 us,  7.2 sy,  0.0 ni, 55.8 id,  0.5 wa,  0.0 hi,  0.2 si,  0.0 st
MiB Mem :  32154.0 total,   1204.0 free,  27680.0 used,   3270.0 buff/cache
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 8421 cr0x      20   0  9812m  5020m  132m R  168.0  15.6  12:31.22 game.bin
 2210 root      20   0  1240m  220m   34m S   45.0   0.7   0:40.11 tracker-miner-fs

What it means: The game is busy (expected), but so is a filesystem indexer. That’s a stutter accomplice.

Decision: Pause/disable indexing for the game library location. If corporate-managed, request an exclusion.

Task 12: Check interrupt storms (input lag and spikes)

cr0x@server:~$ cat /proc/interrupts | head
           CPU0       CPU1       CPU2       CPU3
  0:         45          0          0          0   IO-APIC   2-edge      timer
  1:          2          0          0          0   IO-APIC   1-edge      i8042
 24:    1423456    1322211    1209987    1187765   PCI-MSI 327680-edge      nvme0q0
 42:     983221     964112     955001     948887   PCI-MSI 524288-edge      nvidia

What it means: High interrupt rates are normal for NVMe/GPU, but if one CPU is drowning while others idle,
you can get scheduling jitter.

Decision: If you see pathological imbalance or sudden jumps, investigate driver issues, MSI/MSI-X settings, or kernel regressions.

Task 13: Check CPU frequency scaling (downclock spikes)

cr0x@server:~$ grep -H . /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor:powersave
/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq:800000

What it means: The governor is powersave and the CPU is sitting at 800 MHz right now.
That’s great for battery life and a recipe for frame time spikes when load arrives in bursts.

Decision: Switch to a performance-oriented governor while gaming or when running latency-sensitive workloads.

Task 14: Apply a temporary performance governor (test, don’t “set and forget”)

cr0x@server:~$ sudo cpupower frequency-set -g performance
Setting cpu: 0
Setting cpu: 1
Setting cpu: 2
Setting cpu: 3
Setting cpu: 4
Setting cpu: 5
Setting cpu: 6
Setting cpu: 7
Setting cpu: 8
Setting cpu: 9
Setting cpu: 10
Setting cpu: 11
Setting cpu: 12
Setting cpu: 13
Setting cpu: 14
Setting cpu: 15

What it means: You’ve removed one major source of latency variance: aggressive downclocking.

Decision: If this improves frame pacing, create a per-profile solution (gaming mode) rather than running hot 24/7.

Three corporate mini-stories from the trenches

1) The incident caused by a wrong assumption: “FPS is high, so the workstation is fine”

A design team complained that their 3D viewport “felt sticky” on brand-new high-end workstations.
IT responded with the obvious: “The GPU is top-tier; the FPS counter is over 120; it’s not the machine.”
The ticket got bounced twice. Morale improved exactly zero percent.

An SRE-minded engineer sat down with one user and did the boring thing: looked at frame time, not FPS.
The graph was flat until they opened a heavy asset browser panel, then it spiked every few seconds like a metronome.
Not a random hitch. A periodic stall.

They correlated the spikes with disk activity. It turned out the asset cache directory was on a network-synced folder
mandated by policy. The sync agent woke up on a timer, scanned a pile of small files, and contended with the application’s own IO.
Average FPS stayed high because rendering was fine between stalls. The user experience was not.

The fix wasn’t “more GPU.” It was moving the cache to local NVMe and excluding it from sync,
while keeping final project files synced. The frame time spikes vanished.
The postmortem headline was blunt: stop using FPS as a proxy for responsiveness.

2) The optimization that backfired: chasing higher peak FPS and buying stutter

A small internal team ran a visualization wall for a customer briefing center. Someone noticed the app sometimes dipped below the display refresh.
A well-meaning engineer “optimized” by uncapping FPS and disabling sync to “let the GPU run free.”
The FPS counter shot up. The room clapped internally. Then customers started asking why the motion looked jittery.

The problem wasn’t raw throughput; it was pacing. Uncapped rendering created a deep queue of frames.
Input-to-photon latency increased, and the display saw uneven delivery because the compositor and swapchain were now juggling a flood of frames.
Small periodic stalls (garbage collection, logging flushes, telemetry) became visible as hard hitches because there was no consistent cadence.

The most humiliating part: the “optimization” also increased thermals. After fifteen minutes, the GPU throttled.
Frame time spiked hard right when the briefing reached the “wow moment.”

The fix was counterintuitive to the “more FPS” crowd: cap FPS slightly below refresh, enable VRR where possible, and keep buffering predictable.
Peak FPS decreased. Smoothness improved. The customers stopped squinting.

3) The boring but correct practice that saved the day: evidence-first performance triage

A production team responsible for a training simulator had a release candidate that “felt worse” than the previous build.
No crashes, no obvious regressions in average FPS. Just a pervasive sense of hitching when moving quickly through a complex environment.
The easy path was to debate opinions. The correct path was to collect consistent traces.

They kept a standard performance capture checklist: same route, same camera sweeps, same graphics preset, same machine state.
They captured frame times, CPU/GPU utilization, and IO stats. Then they compared percentiles, not averages.
The regression showed up as a worse 99.9th percentile frame time, even though average FPS was unchanged.

The culprit was mundane: a logging change that flushed to disk more frequently, and on the main thread.
On fast machines it was “fine,” until the filesystem hit a periodic sync or the disk warmed up and latency spiked.
The logging wasn’t “heavy” on average; it was occasionally blocking.

They moved logging off the main thread and batch-flushed safely. The release shipped on time.
Nobody wrote a heroic Slack message, which is how you know it was done right.

Common mistakes: symptom → root cause → fix

1) “High FPS but it stutters when I turn fast”

Symptom: Spikes during camera pans, entering new areas, or first-time effects.

Root cause: Asset streaming stalls or shader compilation on demand.

Fix: Enable shader pre-compilation if available, warm the shader cache, move game to fast storage, avoid background IO, keep RAM headroom.

2) “It’s smooth for 10 minutes, then gets worse”

Symptom: Frame time spikes increase over time.

Root cause: Thermal throttling (GPU/CPU/NVMe) or memory leak leading to paging.

Fix: Monitor clocks/temps, improve cooling, cap FPS, check RAM and swap activity, restart as containment if needed.

3) “Periodic hitch every second (or every few seconds)”

Symptom: Frame time spikes at a regular cadence.

Root cause: Background tasks on a schedule (indexers, updaters), telemetry flush, or sync/buffering mismatch with the refresh cadence.

Fix: Disable scheduled tasks, test fullscreen exclusive, use a sane FPS cap, and avoid third-party frame limiters that cause pacing jitter.

4) “Turning on VSync makes it laggy; turning it off tears”

Symptom: Either tearing or sticky input.

Root cause: Classic VSync behavior. Missed deadlines force a wait for the next refresh, and buffering adds latency.

Fix: Use VRR if available; cap FPS slightly below refresh; if stuck on VSync, tune settings to avoid missing refresh deadlines.

5) “Lowering graphics doesn’t help stutter”

Symptom: Same spikes at low settings.

Root cause: CPU-bound main thread, IO stalls, or OS-level interruptions.

Fix: Reduce CPU-heavy settings, check background processes, verify memory pressure, and stop blaming the GPU by default.

6) “Microstutter only in borderless window”

Symptom: Fullscreen feels smoother than borderless.

Root cause: Compositor path adds scheduling and buffering complexity; overlays hook differently.

Fix: Try fullscreen exclusive, disable overlays, test different compositor settings or session type.

7) “Stutter after a driver update”

Symptom: New spikes or worse pacing after update.

Root cause: Driver regression, shader cache invalidation, or changed power management defaults.

Fix: Clean install driver, rebuild shader cache, verify power/clock behavior, and roll back if evidence supports it.

Joke #2: If your fix involves “try reinstalling Windows,” that’s not troubleshooting; that’s performance exorcism with a progress bar.

Checklists / step-by-step plan

Checklist A: stabilize frame pacing in 15 minutes

  1. Measure: enable a frame time graph and reproduce the hitch reliably.
  2. Cap FPS: use the in-game limiter first; set cap to refresh-3 (e.g., 141 for 144 Hz).
  3. VRR: enable G-Sync/FreeSync; keep the cap so you stay inside the VRR window.
  4. Overlays off: disable capture/overlay tools one by one; retest after each change.
  5. Thermals: watch GPU/CPU clocks and temps; fix throttling before chasing settings.
  6. Memory headroom: ensure you have available RAM; close browsers and launchers that balloon over time.
  7. IO sanity: ensure the game is on fast local storage; exclude from real-time scanning if policy allows.

Checklist B: decide “CPU-bound vs GPU-bound” without guessing

  1. Lower resolution by one step (or enable an upscaler).
  2. If frame times improve significantly: more GPU-bound.
  3. If frame times barely change: more CPU-bound or sync/IO bound.
  4. Lower CPU-heavy settings (view distance/crowds/physics).
  5. If frame times improve: CPU-bound confirmed.

Checklist C: evidence package for escalation (driver/vendor/engine team)

  • Frame time capture showing spikes (include percentile stats if available).
  • GPU clocks/temps/util at the time of spikes.
  • CPU per-core utilization and frequency scaling state.
  • IO stats (disk queue/await) and memory stats (swap/page faults).
  • Exact reproduction steps (scene, route, settings, time-to-failure).

A reliability note you should steal for your own systems

“Average latency is comforting. Tail latency is the customer experience.” That’s the same lesson as frame time.
In ops, we track p95/p99. In rendering, we track 1% and 0.1% lows and the frame time spikes.

One quote, because it’s relevant and worth remembering:
“Everything fails all the time.” — Werner Vogels

FAQ

1) If I have a 240 Hz monitor, do I need 240 FPS?

No. You need consistent delivery. 120 FPS with flat ~8.3 ms frame times can feel better than unstable 200–240.
Higher refresh helps reduce perceived latency, but only if your system can hold stable frame times.

2) What’s the difference between “stutter” and “input lag”?

Stutter is uneven frame times (visual hitching). Input lag is the time from your action to photons on the display.
They correlate but aren’t identical: you can have smooth pacing with high latency (deep queues),
or low latency with occasional spikes (background stalls).

3) Are 1% lows the same as frame time spikes?

They’re related. 1% lows summarize the slowest 1% of frames into a single number.
Frame time spikes show the exact shape and timing. Use lows for quick comparisons; use graphs for diagnosis.

4) Why does an FPS cap sometimes make the game feel smoother?

Because it reduces variance and prevents runaway queueing and thermal oscillation.
A cap forces the pipeline into a predictable cadence. Predictable cadence looks smooth.

5) Should I use in-game FPS limiter, driver limiter, or an external tool?

Prefer in-game first. It’s closest to the engine’s timing model and usually produces better pacing.
Driver-level caps can be fine. External tools vary widely; some are excellent, some introduce jitter. Measure, don’t assume.

6) Does faster RAM help frame time?

Sometimes, especially in CPU-bound scenarios where the main thread is memory-latency sensitive.
But it’s rarely a silver bullet. If you’re paging to disk, faster RAM won’t save you. Fix memory pressure first.

7) Why do I stutter only the first time I enter an area?

That’s often shader compilation or asset cache warm-up. Once compiled/cached, repeats are smoother.
Games that precompile shaders at startup tend to hitch less during gameplay, at the cost of longer load screens.

8) Can storage really cause in-game stutter even on NVMe?

Yes. NVMe improves averages, not architecture. A single synchronous IO on a critical thread can block the frame.
Also: thermal throttling or firmware issues can spike IO latency. Check SMART and temps if the pattern fits.

9) Is VRR (G-Sync/FreeSync) always better?

Usually, for variable frame rates. But VRR doesn’t fix huge spikes; it just adapts refresh timing to frame delivery.
You still need to eliminate stalls. Also, misconfigured VRR plus bad caps can cause flicker or odd pacing.

10) What frame time is “good”?

Match your target. For 60 Hz, you want most frames under 16.67 ms, with minimal spikes.
For 144 Hz, aim for ~6.94 ms typical and keep the tail tight—spikes over ~20 ms will be noticeable in fast motion.

Conclusion: next steps that actually move the needle

Stop arguing with an FPS counter. Measure frame time.
The goal isn’t a heroic peak number; it’s boring consistency.
Boring is smooth. Smooth is what you wanted.

Do this next, in order

  1. Turn on a frame time graph and reproduce the issue.
  2. Cap FPS slightly below refresh and test again.
  3. Classify the bottleneck: CPU, GPU, IO/memory, or sync/pacing.
  4. Remove background offenders (overlays, indexers, updaters) and check thermals.
  5. If spikes remain, collect an evidence package (percentiles, clocks, temps, IO, memory) and escalate with data.

That’s the entire frame time religion. Practice it for a week and you’ll start hearing stutter problems in meetings
the same way you hear tail latency in production: as a solvable dependency issue, not as “my machine hates me.”
