Everything is fine until you ship “just a small animation” and your page starts moving like it’s running through molasses. The worst part: it may look smooth on your laptop, then turn into a slideshow on a mid-range phone that your customers actually own.
Performance-friendly CSS animation isn’t mystical. It’s a pipeline problem. If you understand what triggers layout, paint, and compositing—and you verify it with the right tools—you can ship motion that feels expensive without actually being expensive.
The rendering pipeline you’re actually paying for
Browsers don’t “animate CSS.” They update a scene graph under tight deadlines: ~16.7ms per frame for 60Hz, ~8.3ms for 120Hz. Miss the deadline and the user sees stutter. And users are ruthless judges: they’ll blame your product, not their device.
For performance work, reduce everything to three costs:
- Layout (a.k.a. reflow): compute element sizes and positions. Expensive because changes can cascade.
- Paint: rasterize pixels (text, borders, shadows, images) into bitmaps. Expensive because pixels are work.
- Composite: assemble painted layers into the final frame, applying transforms, opacity, clipping, etc. Often cheaper and can run on the compositor thread.
When people say “use transform and opacity,” they’re pointing at a pragmatic truth: those properties can often be animated in the compositing step without re-running layout or paint every frame.
What “compositor-only” really buys you
If the browser can keep an element on its own layer (or can otherwise treat it as a separate composited surface), changing transform or opacity becomes a matrix multiply and an alpha blend. That’s not free, but it is predictable. Predictable is what keeps your frame budget intact when the rest of the page is busy doing… everything else.
But “compositor-only” is a conditional promise, not a law of physics. You can still trigger paints (or worse, layouts) if the element isn’t isolated, if it intersects with effects that require repainting, or if you ask the browser to do something that can’t be deferred to compositing.
The SRE framing: performance is an SLO
In production ops we don’t accept “usually fine” for latency. Motion performance deserves the same discipline. Treat jank like tail latency: a couple of bad frames in the 99th percentile can dominate how the UI feels. You need:
- budget awareness (16.7ms is your request timeout)
- profiling (your traces)
- regression detection (your alerting)
- guardrails (your rate limits)
And yes, you can absolutely regress animation performance without touching the animation. You add a shadow. You change a font. You ship a new sticky header. Congratulations, your composited animation is now stuck waiting behind paint work.
The transform/opacity rule (and what it really means)
The rule is simple: animate transform and opacity when you care about smoothness. The reason is less simple: those properties can be applied at composite time using previously painted textures, avoiding per-frame layout and paint.
Good animations: changing how something is drawn, not what it is
Use transform for movement and scaling, and opacity for fades. Instead of animating top or left, animate transform: translate(). Instead of animating width, animate transform: scaleX() on a pseudo-element or inner wrapper.
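A minimal sketch of the substitution, assuming a hypothetical card component and progress bar (the class names and the --progress custom property are illustrative, not from any real codebase):

/* Avoid: animating geometry forces layout on every frame */
.card--risky {
  transition: top 200ms ease, width 200ms ease;
}

/* Prefer: move with transforms, fade with opacity */
.card {
  transition: transform 200ms ease, opacity 200ms ease;
}
.card.is-raised {
  transform: translateY(-4px);
}

/* Progress fill: scale an inner element instead of animating width */
.progress__fill {
  transform-origin: left;
  transform: scaleX(var(--progress, 0)); /* 0..1, set from JS or an inline style */
  transition: transform 300ms ease;
}

The visual result is close enough for most UI; the difference is which pipeline stages run per frame.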
Bad animations: changing geometry, flow, or paint-heavy effects
Avoid animating properties that force layout:
- width, height
- top, left, right, bottom (when they affect layout)
- margin, padding
- font-size, line-height (especially painful for text)
Avoid animating properties that force expensive paints:
- box-shadow (large blur radius is a paint tax)
- filter (sometimes composited, sometimes brutal; depends on browser and context)
- background-position (can be paint-heavy)
- border-radius (often triggers repaints; can be surprisingly expensive at scale)
“But I need to animate height” — the grown-up alternatives
Sometimes you truly need an expanding panel. You still have options that don’t set your CPU on fire:
- Use transforms on an inner wrapper: keep layout stable and animate a clipped inner element with transform: scaleY(). Pair with transform-origin: top. (See the sketch after this list.)
- Use max-height only for small ranges: it still triggers layout, but the damage may be contained if the subtree is isolated and small.
- Use discrete states + reduced motion: sometimes the correct performance fix is fewer frames.
- Use Web Animations API for orchestration, but keep the same property choices. The API doesn’t magically make layout cheaper.
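A minimal sketch of the inner-wrapper pattern, assuming a hypothetical .panel / .panel__inner structure and an is-open state class (all names illustrative):

/* Outer element keeps a stable layout box and clips the reveal */
.panel {
  overflow: hidden;
  contain: layout paint; /* optional: limits the blast radius if layout does occur */
}

/* Inner wrapper is the only thing that animates */
.panel__inner {
  transform-origin: top;
  transform: scaleY(0);
  transition: transform 250ms ease;
}

.panel.is-open .panel__inner {
  transform: scaleY(1);
}

Caveat: scaleY() visually squashes the content mid-transition, so this works best for short reveals; some teams counter-scale an inner child to hide the distortion.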
Pick the right easing and duration (yes, it matters)
Too-short animations look like glitches; too-long animations feel like UI lag. A good default is 150–250ms for micro-interactions and 250–400ms for larger transitions. If you’re animating position over a long distance, add a little extra time or it will feel like teleporting.
Also: don’t stack three easings that fight each other. If your element is scaling, moving, and fading, keep the curve consistent unless you have a reason not to.
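One low-effort way to keep durations and curves consistent is to centralize them as custom properties; the token names below are illustrative:

:root {
  --motion-fast: 200ms;  /* micro-interactions: hover, press, small fades */
  --motion-slow: 350ms;  /* larger transitions: drawers, modals */
  --motion-ease: cubic-bezier(0.2, 0, 0, 1); /* one shared curve */
}

.toast {
  transition: transform var(--motion-slow) var(--motion-ease),
              opacity var(--motion-fast) var(--motion-ease);
}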
Joke #1: Animating height in a large DOM is like “just restarting the database” during peak traffic: sometimes it works, and you don’t deserve the win.
One quote, because it’s true
Paraphrased idea (Werner Vogels): “Everything fails, all the time.” Plan for it—especially performance regressions.
Facts and historical context worth knowing
These aren’t trivia for trivia’s sake. Each one explains why the “transform/opacity” advice exists, and why it sometimes fails.
- Early CSS animations were painted like any other style change. The push toward compositor-driven animation accelerated as browsers adopted multi-threaded rendering and more aggressive layerization.
- Mobile browsers forced the issue. Desktop CPUs could brute-force a lot of bad animation. Mobile thermal limits made jank unavoidable unless work moved off the main thread.
- High-refresh-rate displays changed the bar. 120Hz makes mediocre animation look worse because you have half the time per frame.
- “GPU acceleration” is not a single switch. Compositing may use GPU; rasterization might still be CPU; and some effects force software fallbacks depending on drivers and memory pressure.
- Layer creation has a cost. Promoting too many elements to layers can increase memory usage and compositing overhead. The “fix” becomes a new problem.
- Font rendering is a frequent hidden tax. Animations that cause text to repaint—especially with subpixel AA changes—can tank performance and look visually unstable.
- Sticky and fixed elements complicate scrolling pipelines. Many browsers optimize scrolling by keeping it off the main thread; certain effects (like heavy backdrops) can pull it back.
- Containment primitives exist because layout is contagious. CSS contain and content-visibility were introduced to reduce the blast radius of layout/paint work in complex pages. (See the sketch after this list.)
- “Reduce motion” became a real platform feature for a reason. prefers-reduced-motion isn’t just accessibility; it’s also a performance escape hatch on slow devices.
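A small sketch of both escape hatches, assuming a hypothetical card feed (selector and size values illustrative); treat the contain values as a starting point to verify with profiling, not a blanket fix:

.feed__item {
  /* Keep layout and paint work from spreading beyond the card */
  contain: layout paint;
  /* Let the browser skip rendering work for far-offscreen cards */
  content-visibility: auto;
  contain-intrinsic-size: auto 320px; /* reserve space so scrolling stays stable */
}

/* Respect the request for less motion; it doubles as a budget saver on slow devices */
@media (prefers-reduced-motion: reduce) {
  .feed__item {
    animation: none;
    transition: none;
  }
}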
Pitfalls: when “GPU-accelerated” becomes “GPU-annoyed”
1) Transform/opacity are fast… until you force paint anyway
You can animate transform on an element that still repaints because:
- the element isn’t on its own layer and the browser decides repainting is cheaper than isolating it
- it has paint-heavy descendants that change (e.g., animated gradients)
- you combine it with effects that require repaint (certain filters, blend modes, large shadows)
Rule of thumb: if the pixels inside the element are stable, compositing wins. If the pixels are changing, you’re paying paint costs and transform doesn’t save you.
2) will-change is a power tool, not a lifestyle
will-change: transform tells the browser, “I’m about to animate this; please prepare.” Preparation often means promoting to a layer and allocating memory. That’s helpful for a small number of elements that you know will animate soon.
It’s harmful when you sprinkle it everywhere “just in case.” The failure modes:
- increased GPU memory usage (more layers, bigger textures)
- more compositing work (more surfaces to blend)
- more cache pressure (textures get evicted and repainted later)
- worse performance on low-end devices (exactly where you needed help)
Use will-change like you use pre-warming a cache: close to the event, scoped tightly, removed when not needed.
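One CSS-only way to honor that, assuming a hypothetical drawer component (names illustrative): use hover/focus on the trigger as the “about to animate” signal, so the hint appears right before the open animation and disappears on its own afterwards:

/* Idle state: no hint, no extra layer */
.drawer__panel {
  transform: translateX(100%);
  transition: transform 250ms ease;
}

/* Hovering or focusing the drawer is a cheap "about to animate" signal;
   the promotion goes away by itself when the interaction ends */
.drawer:hover .drawer__panel,
.drawer:focus-within .drawer__panel {
  will-change: transform;
}

.drawer.is-open .drawer__panel {
  transform: translateX(0);
}

If the animation is driven from JS, the equivalent is setting will-change right before the animation starts and clearing it when it finishes.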
3) Subpixel wobble and the “why is it blurry?” complaint
Transforms happen in floating point space. Text and thin lines can end up on half-pixels, triggering anti-aliasing differences frame-to-frame. You’ll see it as shimmer or blur during movement.
Fixes include:
- translate in whole pixels when possible (round values in JS if driving transforms)
- avoid animating text layers directly; animate containers with stable rasterization
- consider translateZ(0) carefully (it can change rasterization behavior)
4) Big layers are expensive layers
If you promote a full-screen element (or a near full-screen list) to its own layer and animate it, you may allocate massive textures. That can:
- increase memory usage
- cause tiling behavior and partial repaints
- trigger GPU memory eviction, which causes stutter at the worst time
One of the most common “why did this get worse?” moments is promoting a huge scrolling container because it seemed like a good idea.
5) Animations that fight scrolling
Scroll is sacred. Browsers work very hard to keep scrolling smooth, sometimes by running it off the main thread. If your animation forces main-thread work during scroll—layout, heavy paint, or synchronous JS—scroll jank shows up immediately.
Be especially wary of:
- scroll-driven JS that reads layout and writes styles in the same tick
- sticky headers with heavy shadows/backdrops
- large backdrop-filter areas (often expensive; see the sketch below)
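One pattern that keeps the cost bounded is to scope the blur to a small fixed-height bar and treat it as a progressive enhancement; the selector and values below are illustrative:

/* Fallback first: a mostly-opaque header costs almost nothing during scroll */
.header {
  position: sticky;
  top: 0;
  height: 56px; /* keep the affected area small and fixed */
  background: rgba(20, 20, 24, 0.92);
}

/* Opt into the blur only where it's supported, still scoped to the bar */
@supports (backdrop-filter: blur(12px)) {
  .header {
    background: rgba(20, 20, 24, 0.6);
    backdrop-filter: blur(12px);
  }
}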
Joke #2: Your scroll handler doesn’t need to be “real-time.” It’s a UI, not a high-frequency trading desk.
Fast diagnosis playbook
This is the “page is janky, what do I do in the next 10 minutes?” checklist. It prioritizes the most common bottlenecks and the fastest disambiguation steps.
First: confirm what kind of jank it is
- Is it during scroll? If yes, suspect main-thread work (layout/paint/JS) blocking scroll, or expensive compositing.
- Is it during a specific animation? If yes, suspect layout thrash, paint-heavy effects, too many layers, or large textures.
- Is it only on some devices? If yes, suspect GPU memory limits, driver differences, high DPR devices, or thermal throttling.
Second: measure before you “optimize”
- Record a trace in DevTools Performance with the animation running.
- Check if frames are dropped due to Main (JS/layout) or Raster/Paint or Compositor.
- Turn on paint flashing / layer borders to see what’s repainting and what’s being composited.
Third: apply the minimal fix that removes the bottleneck
- If layout dominates: stop animating layout properties; add containment; remove forced synchronous layout in JS.
- If paint dominates: reduce paint area; remove expensive shadows/filters; isolate animated element; pre-render static assets.
- If compositing dominates: reduce number of layers; avoid huge promoted surfaces; remove unnecessary will-change.
Fourth: validate on a realistic device profile
Throttle CPU in DevTools, test at high DPR, and test with reduced motion. If the fix only works on your dev machine, it’s not a fix; it’s a demo.
Hands-on tasks: commands, outputs, decisions
Here are 12+ practical tasks with commands, outputs, and the decision you make from each. They map to real workflows: local debugging, CI checks, and “why does it only break in prod?” investigations.
Task 1: Audit which CSS properties are being animated in your codebase
cr0x@server:~$ rg -n "transition-property|transition:|@keyframes|animation:" ./src
src/components/Card.css:14:transition: box-shadow 200ms ease, transform 200ms ease;
src/components/Drawer.css:22:transition: height 250ms ease;
src/styles/animations.css:5:@keyframes pulse { from { opacity: 0.6 } to { opacity: 1 } }
What it means: You’ve got at least one height transition and a box-shadow transition—classic layout/paint risks.
Decision: Replace height animation with a transform-based technique or contain it to a small subtree. Consider dropping shadow animation or faking it with opacity on a pre-rendered shadow element.
Task 2: Find “will-change” usage and check if it’s being abused
cr0x@server:~$ rg -n "will-change" ./src
src/styles/global.css:88:.card { will-change: transform; }
src/components/Modal.css:12:.modal { will-change: transform, opacity; }
src/components/List.css:3:.row { will-change: transform; }
What it means: Lots of elements are pre-promoted, possibly including list rows (potentially hundreds).
Decision: Remove will-change from anything repeated in large lists. Apply it only shortly before interaction (hover/focus/open) and remove after.
Task 3: Verify if you’re shipping reduced-motion support
cr0x@server:~$ rg -n "prefers-reduced-motion" ./src
src/styles/animations.css:42:@media (prefers-reduced-motion: reduce) { .anim { animation: none !important; transition: none !important; } }
What it means: There is at least a baseline kill switch.
Decision: Ensure it covers the heavy interactions (drawers, carousels) and doesn’t just disable decorative animations while keeping expensive layout transitions.
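A sketch of what “covering the heavy interactions” can look like, assuming hypothetical drawer and carousel components (names illustrative): drop the expensive movement, keep a cheap cue so state changes stay perceivable:

@media (prefers-reduced-motion: reduce) {
  /* Heavy, continuously animating surfaces: stop them outright */
  .carousel__track {
    animation: none;
    transition: none;
  }

  /* The drawer still signals open/close, but with a short fade instead of a slide */
  .drawer__panel {
    transition: opacity 150ms ease;
  }
}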
Task 4: Catch layout-thrashing patterns in JS (read/write interleaving)
cr0x@server:~$ rg -n "offsetHeight|getBoundingClientRect\\(|scrollTop|clientWidth" ./src
src/pages/Home.ts:91:const h = panel.offsetHeight;
src/pages/Home.ts:92:panel.style.height = (h + 20) + "px";
src/hooks/useSticky.ts:44:const r = el.getBoundingClientRect();
What it means: There’s at least one read-after-write candidate that can force synchronous layout each frame if used in a loop or scroll handler.
Decision: Batch reads and writes (read all layout first, then write styles), or move to a transform-based approach that avoids reading layout in the hot path.
Task 5: Produce a Lighthouse JSON artifact in CI (baseline performance budget)
cr0x@server:~$ lighthouse http://localhost:4173 --output=json --output-path=./artifacts/lh.json --quiet
...Auditing: Performance...
...Report is done...
...Saved JSON report to ./artifacts/lh.json...
What it means: You have a reproducible artifact to diff across commits. Lighthouse won’t “grade your animation,” but it will catch main-thread bloat and large paint costs that correlate with jank.
Decision: Add thresholds (e.g., total blocking time, main-thread work) as guardrails; when they regress, animation smoothness tends to regress too.
Task 6: Extract long tasks from the Lighthouse artifact (spot main-thread starvation)
cr0x@server:~$ jq '.audits["long-tasks"].details.items[:5]' ./artifacts/lh.json
[
  {
    "startTime": 1234.56,
    "duration": 245.12,
    "url": "http://localhost:4173/assets/app.js",
    "attributableToMainThread": true
  }
]
What it means: Long tasks over ~50ms are frame killers; they block input and animations.
Decision: Split work (code splitting), defer non-critical JS, and avoid doing heavy computations during transitions/scroll.
Task 7: Capture a CPU profile while reproducing the jank (Node tooling for dev servers)
cr0x@server:~$ node --cpu-prof --cpu-prof-dir=./profiles ./node_modules/.bin/vite dev
VITE v5.0.0 ready in 312 ms
➜ Local: http://localhost:5173/
...CPU profile written to ./profiles/CPU.2025-12-29T10-22-11.123Z.cpuprofile...
What it means: If your dev server is pegging CPU (hot reload storms, heavy transforms in build steps), you may be mistaking tooling jank for app jank.
Decision: If the dev server is the bottleneck, test in a production build. Don’t optimize CSS based on a dev-mode artifact.
Task 8: Run a production build and serve it locally (remove dev-mode noise)
cr0x@server:~$ npm run build
> build
...dist/assets/index-abc123.js 312.45 kB...
cr0x@server:~$ npx serve -s dist -l 4173
Serving!
Local: http://localhost:4173
What it means: You now test what users get: minified JS, optimized CSS, real bundling behavior.
Decision: Re-check animation jank in production mode before making changes. If the problem vanishes, it was tooling or source maps, not CSS.
Task 9: Use Playwright to produce a deterministic trace during an animation
cr0x@server:~$ npx playwright test --trace on --project chromium
Running 1 test using 1 worker
✓ ui-animations.spec.ts:12:1 drawer open should be smooth (4.2s)
Trace file: test-results/ui-animations-drawer/trace.zip
What it means: You can replay what happened and correlate it with JS activity and timing. This is the closest thing to “perf regression tests” that won’t ruin your life.
Decision: If a commit changes traces significantly (more long tasks during animation), block the merge or fix the regression before it hits users.
Task 10: Check for giant images that turn fades into bandwidth-to-paint pipelines
cr0x@server:~$ find ./dist -type f \( -name "*.png" -o -name "*.jpg" -o -name "*.webp" \) | xargs -I{} sh -c 'printf "%8s %s\n" "$(stat -c%s "{}")" "{}"' | sort -nr | head
5242880 ./dist/assets/hero-background.jpg
1310720 ./dist/assets/product-shot.png
786432 ./dist/assets/logo.png
What it means: Large assets increase decode and raster costs. Fading a 5MB hero background can still stutter if the image decode/raster hits during the transition.
Decision: Resize/compress assets; preload critical images; avoid animating huge newly-decoded surfaces into view.
Task 11: Inspect layer count proxies by finding mass-applied transforms
cr0x@server:~$ rg -n "transform:" ./src | head -n 10
src/components/Row.css:7:transform: translateZ(0);
src/components/Card.css:22:transform: translateY(-2px);
src/components/Toast.css:18:transform: translateX(0);
src/components/Toast.css:22:transform: translateX(120%);
What it means: translateZ(0) is often used as a “force layer” hack. Used broadly, it can balloon layers and memory.
Decision: Delete blanket translateZ(0). Add layer promotion only where profiling shows a win.
Task 12: Spot expensive CSS effects that often repaint (shadows, filters, backdrops)
cr0x@server:~$ rg -n "box-shadow:|filter:|backdrop-filter:" ./src
src/styles/global.css:55:box-shadow: 0 20px 60px rgba(0,0,0,0.35);
src/components/Header.css:19:backdrop-filter: blur(12px);
src/components/Avatar.css:9:filter: drop-shadow(0 6px 10px rgba(0,0,0,0.25));
What it means: These are frequent paint offenders, especially when combined with animation or scroll.
Decision: Reduce blur radius, reduce affected area, or replace with pre-rendered assets. For backdrops: scope blur to small regions or provide a non-blurred fallback for low-end devices.
Task 13: Quickly detect if you’re animating layout via shorthand transitions
cr0x@server:~$ rg -n "transition:\s*all" ./src
src/components/Button.css:4:transition: all 200ms ease;
src/components/Panel.css:11:transition: all 300ms ease-in-out;
What it means: transition: all is a foot-gun. It can start animating layout-affecting properties accidentally when someone changes CSS later.
Decision: Replace with explicit properties (e.g., transition: transform 200ms ease, opacity 200ms ease) and enforce it via linting.
Task 14: Add a quick stylelint rule to prevent “transition: all” regressions
cr0x@server:~$ cat .stylelintrc.json
{
  "rules": {
    "declaration-property-value-disallowed-list": {
      "transition": ["/all\\s/"]
    }
  }
}
What it means: This blocks the most common accidental performance regression in CSS transitions.
Decision: Put it in CI, fail the build on violations, and force explicit transitions so performance characteristics are stable over time.
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
A product team shipped a redesigned dashboard with a fancy “cards float in” animation. The implementation was disciplined: only transform and opacity. Everyone patted themselves on the back, because they had memorized the rule.
Within days, customer support started getting tickets: “scroll freezes,” “buttons don’t respond,” “page lags.” It didn’t reproduce on the flagship devices. It did reproduce on older phones and some corporate Windows laptops with conservative GPU drivers.
The wrong assumption was simple: transform/opacity always means compositor-only. In this case, each card contained a subtle animated gradient shimmer (a “skeleton loading” effect) implemented as a background animation. Those backgrounds repainted. So the “cheap” transform on 30 cards just became “paint 30 cards plus composite them” every frame.
The fix wasn’t heroic. They removed the shimmer once data arrived, reduced the number of simultaneously animating cards, and constrained painting with a wrapper that used containment. They also added a reduced-motion fallback that disabled the shimmer entirely. The dashboard stopped stuttering, and the team quietly removed “GPU-accelerated” from their internal slide deck.
Mini-story 2: The optimization that backfired
Another org had a modal system used across the entire app. Someone noticed the modal open animation occasionally dropped frames, so they “optimized” it by adding will-change: transform, opacity to the modal and overlay, plus translateZ(0) to a handful of components “to make sure it stays on the GPU.”
It worked in the isolated case. It also introduced a slow leak of performance elsewhere. Pages with long lists became noticeably worse after a few interactions. The GPU process memory grew and never seemed to come down quickly. Users described it as “it gets laggier the longer I use it,” which is a hauntingly accurate symptom.
The root problem: layer promotion everywhere. List rows with will-change became their own surfaces. Overlay and modal stayed promoted even when not animating. The compositor had more work and more memory pressure. Under pressure, textures got evicted, then re-rasterized later—causing jank during completely unrelated interactions.
The rollback was straightforward: remove translateZ(0) hacks, scope will-change only to the short window right before animation, and explicitly clear it after the animation ends. The performance win returned, and the slow-degradation complaint vanished. The lesson stuck: will-change is not a performance seasoning you sprinkle on everything.
Mini-story 3: The boring but correct practice that saved the day
A platform team maintained a design system used by dozens of feature teams. They’d been burned by animation regressions before, so they did something profoundly unsexy: they codified animation primitives and banned risky transitions by default.
Buttons, cards, toasts, and drawers all used shared mixins: transforms for motion, opacity for fades. No transition: all. No animating layout properties in common components. If a component truly needed a height animation, it had to be isolated behind a wrapper pattern and documented.
Then a major rebrand landed: new typography, heavier shadows, more blur. The app should have become janky. Instead, the damage was contained because the core animations didn’t depend on paint-heavy properties. When some screens did regress, the traces were readable: you could point to paint time spikes from new visual effects, rather than chasing phantom “CSS animation bugs.”
They shipped on time, and the incident queue stayed quiet. Nobody wrote a celebratory post about it because the only visible change was that nothing caught fire. That’s the job.
Common mistakes: symptoms → root cause → fix
1) Symptom: animation stutters only when it starts
Root cause: layer promotion and rasterization happen late (first frame of animation), or an image/font decodes at the same time.
Fix: pre-promote briefly with will-change right before animation; preload critical images; avoid first-use font swaps during the animation.
2) Symptom: hover effects feel “heavy” and lag input
Root cause: hover triggers paint-heavy properties (shadow blur) across many elements in a grid; the repaint region is large.
Fix: animate transform/opacity instead; fake shadows via opacity on a pseudo-element; reduce blur radius; reduce number of simultaneously hovered/repainted items.
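A sketch of the pseudo-element trick, assuming a hypothetical .card: the blurred shadow is painted once up front, and only its opacity (plus a small transform) animates on hover:

.card {
  position: relative;
  transition: transform 200ms ease;
}

/* Pre-rendered "hover" shadow: painted once, then only faded in */
.card::after {
  content: "";
  position: absolute;
  inset: 0;
  border-radius: inherit;
  box-shadow: 0 12px 32px rgba(0, 0, 0, 0.3);
  opacity: 0;
  transition: opacity 200ms ease;
  pointer-events: none;
}

.card:hover {
  transform: translateY(-2px);
}

.card:hover::after {
  opacity: 1;
}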
3) Symptom: expanding/collapsing panels drop frames badly
Root cause: animating height/max-height causes layout recalculation for a large subtree each frame.
Fix: keep layout stable and animate an inner wrapper using transform: scaleY() with clipping; add contain: layout paint where safe; avoid reading layout each tick.
4) Symptom: scroll jank appears after adding a sticky header
Root cause: sticky element with expensive paint (shadow/backdrop blur) forces main-thread work during scroll; scroll can’t stay fully async.
Fix: simplify sticky visuals; reduce blur/backdrop area; consider a solid-color header on low-end devices; verify with paint flashing.
5) Symptom: text looks blurry during animation
Root cause: subpixel positioning during transforms; rasterization changes as the element moves.
Fix: animate container rather than text; round translate values; avoid scaling text; test across browsers (rasterization heuristics differ).
6) Symptom: performance degrades over time, not immediately
Root cause: too many promoted layers (often via will-change or translateZ(0)) causing memory pressure and texture churn.
Fix: remove persistent promotions; keep layer count low; scope will-change to active animations only.
7) Symptom: “transform animation is slow” on a big element
Root cause: the element is huge; compositing it every frame is expensive; texture uploads or tiling may occur.
Fix: reduce layer size (animate a smaller child); avoid full-screen promoted surfaces; simplify visuals; prefer opacity-only fades for large regions.
8) Symptom: animation breaks when content changes during transition
Root cause: mixing layout changes with transform animation; content loads trigger reflow mid-animation.
Fix: lock sizes during animation; avoid inserting DOM nodes mid-flight; animate placeholders, then swap content after.
Checklists / step-by-step plan
Checklist A: designing an animation that stays fast
- Define the job: is this decorative or functional? If functional, prioritize responsiveness over flourish.
- Pick properties: default to transform + opacity. Avoid layout properties unless the animated subtree is tiny.
- Keep paint stable: avoid animating blur-heavy shadows, filters, and large gradients.
- Limit scope: animate one container, not dozens of children, unless you’ve profiled it.
- Choose durations: 150–250ms for small interactions; 250–400ms for bigger transitions. Shorter isn’t always faster.
- Plan reduced motion: disable or simplify the effect using prefers-reduced-motion.
Checklist B: shipping without regressions
- Ban transition: all in shared components. Explicit transitions keep performance stable.
- Lint for risky patterns: mass will-change, blanket translateZ(0), layout reads in scroll handlers.
- Profile on a slow profile: throttle CPU and test on mid-tier hardware.
- Record before/after traces: keep them in the PR to avoid “works on my machine” debates.
- Measure long tasks: animation performance is often a JS scheduling problem in disguise.
Checklist C: fixing an existing janky animation
- Identify the expensive stage: main thread vs paint vs compositing.
- If main thread is hot: eliminate layout animations, batch DOM reads/writes, cut long tasks.
- If paint is hot: simplify visuals, reduce repaint area, remove blur-heavy effects, isolate with containment.
- If compositor is hot: reduce layer count and layer size, remove unnecessary promotions.
- Validate again: same reproduction, same device profile, same measurement approach.
FAQ
1) Are transform and opacity always “free” to animate?
No. They’re often cheaper because they can be applied at composite time, but you still pay compositing cost, and you might still trigger paint/layout depending on context.
2) Should I use will-change everywhere to force smooth animations?
No. Use it sparingly and temporarily. Overuse increases memory pressure and compositing overhead, which can make performance worse—especially on low-end devices.
3) Is translateZ(0) still a good trick?
As a targeted workaround: sometimes. As a default: no. It’s a blunt instrument that can create too many layers and cause long-term performance degradation.
4) Why does animating box-shadow feel so bad?
Because large blurred shadows are paint-heavy. If you animate them, you often repaint a big area every frame. Fake it with opacity on a pseudo-element, reduce blur, or avoid animating it.
5) What about animating filter or backdrop-filter?
Sometimes it’s composited, sometimes it’s expensive, and the cost varies by browser and device. Treat filters as “profile first.” For backdrops, reduce the affected area aggressively.
6) If I only animate transforms, why do I still see jank?
Common causes: main-thread JS long tasks (blocking the compositor’s ability to submit frames), late layer promotion, raster cache misses, huge layers, or other parts of the page repainting during the animation.
7) Is CSS animation always better than JavaScript animation?
No. CSS is great for simple, declarative motion. JS (or Web Animations API) is better for orchestration, interruption, and sequencing. But property choice matters more than API choice.
8) How do I animate an expanding drawer without animating height?
Use an inner wrapper that you scale in Y with transform: scaleY() and clip overflow. Keep the outer layout stable so the rest of the page doesn’t reflow each frame.
9) Why does it look smooth on desktop but not on mobile?
Mobile devices have tighter CPU and GPU budgets, higher device pixel ratios, and thermal throttling. Your “small” paint area can become huge in real pixels, and memory pressure arrives faster.
10) Should I disable animations for everyone if performance is bad?
Don’t punish all users for a subset of devices. Provide prefers-reduced-motion, simplify the heaviest effects, and fix the root cause. Turn off the truly decorative stuff if it’s not earning its keep.
Conclusion: next steps you can actually do
If you want smooth CSS animations in production, stop treating “transform and opacity” as a superstition and start treating it as a hypothesis you verify.
- Audit: remove transition: all, find layout-animated properties, and delete blanket layer hacks.
- Profile: record traces and identify whether layout, paint, or compositing is the bottleneck.
- Fix minimally: switch to transform/opacity, contain layout where safe, and reduce paint-heavy effects.
- Guardrail: add lint rules and CI artifacts so regressions don’t quietly ship on a Friday.
- Validate on reality: slow device profiles, production builds, and reduced motion support.
Your users don’t care that your animation is “technically GPU accelerated.” They care that the UI responds immediately and scrolls like it means it. Build for that.