You shipped a list. It worked in staging. Then real users arrived: someone lost their place after tapping “Back,” analytics cratered, support tickets mentioned “it keeps loading forever,” and SRE is asking why one endpoint now accounts for half the read IOPS.
Pagination and infinite scroll are not “UX choices.” They’re distributed systems choices with a UI hat on. Pick the wrong one and you’ll punish users, your database, your cache, and your on-call rotation—often all at once.
Make the decision like an operator, not a designer
Pagination and infinite scroll optimize different things. If you choose based on taste, you’ll accidentally optimize for the loudest person in the room. Choose based on user intent, error recovery, and operational cost.
User intent decides the default
- Users searching for something specific (products, tickets, accounts, docs): default to pagination. People want landmarks: page numbers, stable URLs, “Back” that behaves, and a sense of progress.
- Users browsing to be entertained (feeds, inspiration galleries, social timelines): infinite scroll can work because the “next” is less important than the “now.”
- Users comparing items (shopping, dashboards): pagination, or a hybrid with “Load more” and persistent state. Comparisons require returning to a previous spot without losing context.
Operational realities that should influence UX
Infinite scroll is not “no pages.” It’s many tiny pages fetched in sequence, which means:
- More network requests over a session.
- More client memory pressure unless you virtualize.
- Harder caching if your API isn’t cursor-based.
- Harder analytics attribution unless you plan for it.
- Harder debugging because a single “scroll” can touch multiple services.
Pagination is not “solved.” It can still be slow, inaccurate, and expensive when implemented as “OFFSET 90000 LIMIT 50” against a hot table. When you see that query pattern, you can practically hear the database sigh.
One quote you can actually run a service by
Hope is not a strategy.
— often attributed to U.S. Army General Gordon R. Sullivan, co-author of Hope Is Not a Method
If your infinite scroll depends on “hope the user won’t reach 10,000 items,” you built a time bomb with a scrollbar.
A few facts and history that still matter
- Early web lists were paginated mostly because of bandwidth: dial-up made “load everything” impossible, so page boundaries were a performance hack before they were a UX pattern.
- Infinite scroll was popularized in the late 2000s as feeds and social timelines optimized for engagement, not task completion.
- Search engines historically struggled with infinite scroll because content without stable URLs is hard to crawl and index; “pretty” can be invisible.
- Offset pagination has algorithmic baggage: deep offsets often require scanning/skipping rows, which turns “page 2000” into a slow database walk.
- Cursor-based pagination became common with large-scale APIs because it’s stable under concurrent inserts and deletes—your “next page” doesn’t reshuffle as much.
- Virtualized lists were an answer to DOM bloat: rendering thousands of nodes tanks FPS and battery; virtualization only renders what’s visible.
- “Back button” behavior is a historical UX contract: browsers trained people that back restores state; infinite scroll breaks that unless you manage history carefully.
- HTTP caching likes stable URLs: paginated URLs cache well; “give me more of the feed after cursor X” is cacheable too, but only if you design it to be.
Patterns that don’t annoy users
Pattern 1: Pagination for intent-driven lists
Use pagination when a user cares about position and returning: search results, admin tables, audit logs, reports, inventory. Give them:
- Stable URL parameters (query + page or cursor).
- Visible progress (page numbers or “1–50 of 12,430”).
- Controls that work with keyboard navigation and screen readers.
- A “jump to page” only if it’s truly needed (more on that later).
Pattern 2: Infinite scroll for passive browsing
Infinite scroll is fine when the user’s job is “keep looking.” But don’t ship the default TikTok clone mechanics into an enterprise audit log and call it modern.
Successful infinite scroll has a few traits:
- Strong “resume where I was” behavior after Back/forward/navigation.
- Explicit loading states (and a hard stop on errors).
- Virtualization, or your UI becomes a space heater.
- A way out: footer access, “Back to top,” “Jump to filters,” and an end-of-list state when applicable.
Joke #1: Infinite scroll is like a buffet: delightful until you realize you can’t find the exit and your phone is at 3%.
Pattern 3: “Load more” as a calmer infinite scroll
If you want the engagement of infinite scroll without the chaos, ship a Load more button. It’s explicit, debuggable, and friendlier to accessibility tools. It also stops accidental “scroll storms” when a trackpad decides to be enthusiastic.
Pattern 4: “Pagination with prefetch” for speed without losing structure
Pagination doesn’t have to feel slow. Prefetch the next page when the user is 70–80% down the current page, then swap instantly on click. Keep the page boundary for URLs and analytics, but remove the wait.
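As a sketch of that trigger logic (the function name and the 75% threshold are illustrative, not from any particular framework):

```python
# Sketch of a prefetch gate for "pagination with prefetch".
# All names and constants here are illustrative.

def should_prefetch_next(scroll_top: float, viewport_height: float,
                         content_height: float, threshold: float = 0.75,
                         already_prefetched: bool = False) -> bool:
    """Prefetch the next page once the user has scrolled past
    `threshold` (e.g. 75%) of the current page, at most once."""
    if already_prefetched or content_height <= 0:
        return False
    progress = (scroll_top + viewport_height) / content_height
    return progress >= threshold
```

The `already_prefetched` flag matters: without it, every scroll event past the threshold fires another fetch, which is exactly the burst pattern you're trying to avoid.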
Pattern 5: “Anchored infinite scroll” for history correctness
This is the adult version of infinite scroll: as the user scrolls, you update the URL to reflect the current anchor (page number or cursor) and store scroll position in history state. Back returns them to the exact place. It’s extra work. It’s also how you avoid support tickets that start with “I lost it.”
Pagination done right (UI + API)
UI rules that prevent rage-clicking
- Always show where the user is: page number and result range. “Page 7” beats “more stuff down there.”
- Keep page size predictable: changing the number of items per page mid-session breaks mental mapping.
- Make “Back” work: store state in URL and restore filters/sort. If your app requires three clicks to return to where they were, you’ve built a maze.
- Don’t overdo page links: show a window (e.g., 1 … 6 7 8 … 200). Users don’t want a calendar view of your dataset.
- Allow “jump to page” only with guardrails: jumps to deep pages can be expensive and inconsistent unless your backend supports it efficiently.
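The windowed page list above (1 … 6 7 8 … 200) is easy to get subtly wrong at the edges. A minimal sketch, with an illustrative helper name:

```python
def page_window(current: int, total: int, radius: int = 1):
    """Return a windowed page list like [1, None, 6, 7, 8, None, 200],
    where None marks an ellipsis. Illustrative helper, not a library API."""
    if total <= 0:
        return []
    # Always keep the first and last page, plus a window around current.
    pages = {1, total} | {p for p in range(current - radius, current + radius + 1)
                          if 1 <= p <= total}
    out, prev = [], None
    for p in sorted(pages):
        if prev is not None and p - prev > 1:
            out.append(None)  # gap -> render an ellipsis
        out.append(p)
        prev = p
    return out
```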
API rules: offset vs cursor, and when each hurts
Offset pagination (page=7, size=50) is simple and stable for small datasets. It breaks down when:
- You have deep paging (page 500+).
- Rows are inserted/deleted frequently; “page 7” drifts and duplicates appear.
- Your DB query becomes an O(n) skip operation.
Cursor-based pagination (after=cursor) is better for large and changing datasets. It requires:
- A stable sort key (timestamp + tiebreaker ID, or a monotonic primary key).
- A cursor token encoding “last seen” position.
- Careful thought around filters and sorting so cursors remain valid.
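A minimal sketch of an opaque cursor token, assuming base64-encoded JSON; the field names mirror the example tokens elsewhere in this article but are otherwise illustrative:

```python
import base64
import json

# Sketch: an opaque cursor token carrying the "last seen" position.

def encode_cursor(last_id: int, last_created_at: str) -> str:
    payload = {"last_id": last_id, "last_created_at": last_created_at}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def decode_cursor(token: str) -> dict:
    # In production you would also sign or validate the token so a client
    # cannot forge arbitrary positions or smuggle in unexpected fields.
    return json.loads(base64.urlsafe_b64decode(token.encode()))
```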
Design your sort order like you mean it
Cursor pagination is only as good as the order you paginate by. “ORDER BY updated_at DESC” sounds reasonable until you remember that updates happen. Then items jump around, and cursors become unreliable. Prefer immutable order keys:
- Created time for feeds (if “newness” is the concept).
- Primary key order for admin lists (if “stability” matters more than semantics).
- Composite keys for uniqueness (created_at, id) to avoid duplicates on equal timestamps.
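To see why the tiebreaker matters, here is the composite ordering rule applied in plain Python; the data and helper name are illustrative:

```python
# Sketch: composite (created_at, id) ordering with a tiebreaker, the same
# comparison the database applies for keyset pagination.

def after_cursor(items, last_created_at, last_id):
    """Return items strictly 'after' the cursor in (created_at DESC, id DESC)
    order, i.e. rows with (created_at, id) < (last_created_at, last_id)."""
    return [it for it in items
            if (it["created_at"], it["id"]) < (last_created_at, last_id)]
```

With equal timestamps, the id tiebreaker is the only thing preventing duplicates or gaps at the page boundary.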
Make page boundaries cacheable
Pagination shines when you can cache. If your endpoint is “/search?q=…&page=3,” that’s a clean cache key. With cursors, cache keys can still work, but only if cursors are stable and not user-specific secrets. If they are per-user, expect lower cache hit rates and plan capacity accordingly.
Infinite scroll done right (without chaos)
Rule 1: Virtualize the list or pay in battery and bugs
If you append DOM nodes forever, you’ll eventually crash mobile Safari, or at minimum degrade scrolling to a slideshow. Virtualization means you render only visible items plus a buffer, keeping DOM size bounded.
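For fixed-height rows, the core of virtualization is one small calculation. A sketch (real libraries also handle variable heights and overscan tuning):

```python
def visible_range(scroll_top: float, viewport_height: float,
                  item_height: float, total_items: int, buffer: int = 5):
    """Compute which item indexes to render: the visible window plus a
    small buffer on each side, so DOM size stays bounded."""
    first = max(0, int(scroll_top // item_height) - buffer)
    last = min(total_items,
               int((scroll_top + viewport_height) // item_height) + 1 + buffer)
    return first, last
```

Whether you have 500 items or 500,000, the render cost stays proportional to `viewport_height / item_height + 2 * buffer`.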
Rule 2: Control request concurrency
Infinite scroll triggers fetches based on scroll position. Without concurrency limits you’ll:
- Send multiple overlapping requests for the same cursor.
- Race responses and reorder items.
- Hammer your backend when the user flings to the bottom.
Use a single in-flight request per feed segment. Cancel stale requests. Deduplicate items by ID.
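Those three rules fit in a few lines of state. A sketch, with illustrative names:

```python
# Sketch of the rules above: one in-flight request per feed,
# stale responses discarded, items deduped by id.

class FeedLoader:
    def __init__(self):
        self.in_flight_cursor = None
        self.seen_ids = set()
        self.items = []

    def start_fetch(self, cursor):
        """Return True if a fetch may start; False if one is in flight."""
        if self.in_flight_cursor is not None:
            return False
        self.in_flight_cursor = cursor
        return True

    def complete_fetch(self, cursor, new_items):
        """Apply results only if they match the in-flight cursor; dedupe."""
        if cursor != self.in_flight_cursor:
            return  # stale or cancelled response: discard
        self.in_flight_cursor = None
        for it in new_items:
            if it["id"] not in self.seen_ids:
                self.seen_ids.add(it["id"])
                self.items.append(it)
```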
Rule 3: Make error states terminal and recoverable
When a request fails, don’t spin forever. Show a “Retry” affordance. Log the cursor, filter state, and correlation ID so you can replay the problem server-side. “Something went wrong” with no context is an engineer’s version of a shrug.
Rule 4: Fix history semantics explicitly
“Back” should return you to the same item. That requires:
- Saving scroll position in history state (not just in memory).
- Updating URL as the anchor changes (page number or cursor).
- Restoring items from cache (client-side) or re-fetching quickly.
Rule 5: Provide a footer escape hatch
Infinite scroll often removes the footer, which removes navigation, support links, and legal text. Users will still need those. Give them a way to reach the bottom or provide a sticky footer/utility panel.
Joke #2: Debugging infinite scroll without logs is like herding cats—except the cats are HTTP requests and they know where you live.
Hybrid patterns that win in the real world
Hybrid A: Infinite scroll within a page boundary
You show “Page 1” with 50 items, but load them progressively as the user scrolls, and keep the URL and state as “page=1.” When the user reaches the end, they click “Next page.” This reduces initial load time and still gives structure.
Hybrid B: “Load more” with numbered pages in the URL
Each “Load more” increments an internal page counter and updates the URL to reflect the latest page loaded. Back works, analytics can attribute engagement to pages, and the user still gets an uninterrupted scroll.
Hybrid C: Two-tier navigation for “browse then refine”
Start with infinite scroll for discovery, but when the user applies filters or sorts, switch to pagination. Filtering changes intent: people stop wandering and start hunting. Your UI should notice that shift.
Performance and storage: what breaks first
The hidden storage tax of “just one more page”
From a storage engineer’s chair, infinite scroll tends to create long sessions with many small requests. That changes your I/O profile:
- More read amplification at the database if deep paging is offset-based.
- More cache churn at the CDN or reverse proxy if cursor tokens are uncacheable.
- More pressure on object storage if each card references multiple images and you lazy-load aggressively without cache headers.
Backend query design: the quiet killer
If your API uses “OFFSET … LIMIT …” on a large table, you will see latency climb roughly with offset depth. In production, that turns into tail latency spikes. Tail latency is the part users remember.
Cursor queries usually look like “WHERE (created_at, id) < (last_created_at, last_id) ORDER BY created_at DESC, id DESC LIMIT 50.” That scales better, uses indexes effectively, and behaves under concurrent writes.
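A sketch of building that query with bind parameters instead of string interpolation; the `%(name)s` placeholder style is psycopg-like and varies by driver, and the table and column names follow the example above:

```python
# Sketch: keyset query construction with bind parameters.
# Never interpolate cursor values into SQL strings directly.

def keyset_query(last_created_at: str, last_id: int, limit: int = 50):
    sql = ("SELECT id, created_at FROM events "
           "WHERE (created_at, id) < (%(created_at)s, %(id)s) "
           "ORDER BY created_at DESC, id DESC "
           "LIMIT %(limit)s")
    params = {"created_at": last_created_at, "id": last_id, "limit": limit}
    return sql, params
```

A composite index on `(created_at DESC, id DESC)` lets the database satisfy this with an index range scan instead of the sort-and-skip work shown in the EXPLAIN output earlier.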
Client-side performance: memory and main thread time
Without virtualization, the browser holds onto every rendered node, images, event handlers, and layout state. Memory grows. Garbage collection gets expensive. Scrolling janks. The CPU wakes up more often, which destroys mobile battery.
Rate limiting: your last line of defense
Infinite scroll bugs can trigger accidental traffic floods. Rate limit per user and per IP. But do it thoughtfully: rate limits should degrade gracefully (“slow down, retry”) not hard-fail into blank screens.
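A token bucket is the usual shape for "slow down, retry" limiting: reject with a retry-after hint rather than a hard failure. A minimal per-user sketch (the injectable clock exists to make it testable; constants are illustrative):

```python
import time

# Sketch: per-user token bucket that degrades gracefully by
# returning a retry-after hint instead of just failing.

class TokenBucket:
    def __init__(self, rate: float, burst: int, clock=time.monotonic):
        self.rate = rate              # tokens refilled per second
        self.capacity = burst
        self.tokens = float(burst)
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return (allowed, retry_after_seconds)."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True, 0.0
        return False, (1 - self.tokens) / self.rate
```

The `retry_after_seconds` value maps naturally onto a 429 response with a Retry-After header, which well-behaved clients can honor instead of hammering.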
Observability: measure what users feel
If you can’t measure it, you’ll argue about it. Instrument both UI and backend with a shared request ID. Then measure:
- Time to first meaningful items: how fast does content appear?
- Scroll jank indicators: long tasks on main thread, frame drops.
- Requests per session: infinite scroll often increases calls; validate whether it’s worth it.
- Error rate per cursor/page: failures might cluster at deep pages due to query inefficiency.
- Abandonment after load: do people bounce because loading is slow, or because content isn’t relevant?
- Back navigation success: measure how often returning restores the prior position.
One practical trick: log the deepest item index reached and the last stable anchor (page or cursor). It turns “users hate it” into “70% of sessions never load beyond 40 items, so stop prefetching 200.”
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
We inherited an internal admin console that displayed audit events. A product manager asked for “infinite scroll like modern apps” because paging felt old. The team complied. It shipped with offset pagination under the hood: each scroll fetch called the same endpoint with offset and limit.
The assumption was: “No one scrolls that far.” That was true for casual browsing. It was not true for incident response. During a security investigation, analysts scrolled back hours, then days. Offset queries got deeper and deeper, and latency climbed. The UI reacted by firing more requests because the scroll threshold kept being hit while the previous request was still pending.
The database did what databases do when asked to skip a mountain of rows repeatedly: it got hot. CPU spiked. Replication lag appeared. Suddenly, the admin console wasn’t the only thing impacted—other services sharing the same cluster started timing out. On-call had to rate limit the endpoint and disable infinite scroll behind a feature flag.
The fix was not “add more DB.” The fix was cursor pagination with an index that matched the sort order, plus a client concurrency gate (only one in-flight request). We also added a “Jump to time” filter, because investigators don’t want to scroll through Tuesday to reach Monday; they want a timestamp.
Mini-story 2: The optimization that backfired
A different team tried to make a feed feel instant by prefetching aggressively: on page load, fetch page 1, 2, 3, and 4 in parallel. They were proud. Their dashboards showed great median latency for “first page render,” because the first page returned quickly and the rest loaded quietly in the background.
Then mobile users started complaining about battery drain and data usage. Meanwhile, the caching layer got worse: the prefetch requests were personalized, the cursor tokens were user-specific, and the cache hit rate fell. The backend saw a jump in requests per session, even for users who bounced after five seconds.
The real failure mode was coordination. Prefetch didn’t respect user intent. It assumed every session was a deep scroll session. It also created burst traffic patterns: every page view kicked off multiple calls, increasing load during peak hours in a very synchronized way.
The fix was boring: prefetch only one page ahead, only after the user shows intent (scrolls past a threshold), and never in parallel with the initial render. We also added server hints: return has_more and a recommended prefetch_after_ms in the response for dynamic tuning. The feed felt the same. The infrastructure calmed down.
Mini-story 3: The boring but correct practice that saved the day
An e-commerce search team wanted infinite scroll to increase engagement. SRE was skeptical. The compromise was a staged rollout with guardrails: feature flag, canary users, strict error budgets, and a kill switch that could be flipped without a deploy.
They also did the unglamorous work: synthetic tests that scrolled to a fixed depth, captured waterfall traces, and validated that “Back” restored the prior scroll position. They kept pagination URLs even when using “Load more,” updating history state as the user progressed.
Two weeks after launch, a dependency change in the image resizing service increased response times. The new infinite scroll UI amplified it because users loaded more images per session. But because the team had metrics for “requests per session” and “time to next batch,” they saw the regression quickly and used the kill switch to revert to paginated navigation while the image service was fixed.
No drama. No war room. A boring plan did boring things, which is exactly what you want in production.
Fast diagnosis playbook
When users report “scroll feels broken” or “pagination is slow,” don’t start by bikeshedding UI. Find the bottleneck in three passes.
First: is it client-side jank or network/backend latency?
- Check browser performance traces (long tasks, layout thrash, memory growth).
- Check request waterfall: are calls slow, or just too many?
- Check if images dominate transfer time.
Second: is the API pagination model fighting the data model?
- Offset deep paging? Expect DB scans and tail latency.
- Cursor pagination but unstable ordering? Expect duplicates/missing items and angry users.
- Filters not included in cursor? Expect “next page” wrongness.
Third: is caching/rate limiting doing something unintended?
- Cache hit rate dropped after infinite scroll rollout? Cursor tokens likely uncacheable or too granular.
- 429s spiking? Frontend may be overfetching or retrying aggressively.
- CDN bytes up? Image lazy-loading may be triggering too many unique variants.
Practical tasks: commands, outputs, and decisions
These are the kinds of tasks you can run today to stop guessing. Each includes a realistic command, what the output means, and what decision you make from it.
Task 1: Confirm whether deep offset queries are happening
cr0x@server:~$ sudo grep -E "OFFSET [1-9][0-9]{4,}" /var/log/postgresql/postgresql-15-main.log | tail -n 3
2025-12-29 09:10:02 UTC LOG: duration: 812.433 ms statement: SELECT id, created_at FROM events ORDER BY created_at DESC OFFSET 50000 LIMIT 50;
2025-12-29 09:10:03 UTC LOG: duration: 944.120 ms statement: SELECT id, created_at FROM events ORDER BY created_at DESC OFFSET 60000 LIMIT 50;
2025-12-29 09:10:04 UTC LOG: duration: 1102.009 ms statement: SELECT id, created_at FROM events ORDER BY created_at DESC OFFSET 70000 LIMIT 50;
Meaning: You’re doing deep offset pagination; latency is climbing with offset.
Decision: Move to cursor-based pagination or add a time filter/jump mechanism; do not “optimize” by adding more application retries.
Task 2: Check DB index usage for the paginated query
cr0x@server:~$ psql -d appdb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT id, created_at FROM events ORDER BY created_at DESC OFFSET 50000 LIMIT 50;"
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Limit (cost=28450.12..28450.25 rows=50 width=16) (actual time=801.122..801.140 rows=50 loops=1)
Buffers: shared hit=120 read=980
-> Gather Merge (cost=23650.00..28600.00 rows=120000 width=16) (actual time=620.440..796.300 rows=50050 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=120 read=980
-> Sort (cost=22650.00..22800.00 rows=60000 width=16) (actual time=580.110..590.230 rows=25025 loops=3)
Sort Key: created_at DESC
Sort Method: external merge Disk: 14560kB
-> Seq Scan on events (cost=0.00..12000.00 rows=60000 width=16) (actual time=0.220..220.300 rows=60000 loops=3)
Planning Time: 0.220 ms
Execution Time: 802.010 ms
Meaning: Sequential scan + sort + external merge: you’re paying for deep paging with disk work.
Decision: Add an index matching the sort and shift to cursor pagination; if you must keep offset for now, cap max page depth.
Task 3: Spot duplicate-item complaints by correlating cursor tokens
cr0x@server:~$ sudo grep "cursor=" /var/log/nginx/access.log | awk '{print $7}' | tail -n 5
/feed?limit=50&cursor=eyJsYXN0X2lkIjoxMjM0NTYsImxhc3RfY3JlYXRlZF9hdCI6IjIwMjUtMTItMjlUMDk6MDk6MDAuMDAwWiJ9
/feed?limit=50&cursor=eyJsYXN0X2lkIjoxMjM0NTYsImxhc3RfY3JlYXRlZF9hdCI6IjIwMjUtMTItMjlUMDk6MDk6MDAuMDAwWiJ9
/feed?limit=50&cursor=eyJsYXN0X2lkIjoxMjM0NTYsImxhc3RfY3JlYXRlZF9hdCI6IjIwMjUtMTItMjlUMDk6MDk6MDAuMDAwWiJ9
/feed?limit=50&cursor=eyJsYXN0X2lkIjoxMjM0NTYsImxhc3RfY3JlYXRlZF9hdCI6IjIwMjUtMTItMjlUMDk6MDk6MDAuMDAwWiJ9
/feed?limit=50&cursor=eyJsYXN0X2lkIjoxMjM0NTYsImxhc3RfY3JlYXRlZF9hdCI6IjIwMjUtMTItMjlUMDk6MDk6MDAuMDAwWiJ9
Meaning: Same cursor repeatedly: the client is refetching the same page (likely retry loop or concurrency bug).
Decision: Add client-side in-flight dedupe, backoff on retry, and server-side idempotency/deduping by request ID.
Task 4: Verify rate limiting and see if infinite scroll is tripping it
cr0x@server:~$ awk '$9==429 {count++} END{print "429s:", count}' /var/log/nginx/access.log
429s: 384
Meaning: Users are being throttled. This is often self-inflicted by overfetch + retry.
Decision: Tune frontend thresholds and retries; add server hints (retry-after) and ensure rate limiting is per-user not global.
Task 5: Check whether responses are cacheable
cr0x@server:~$ curl -sI "http://app.internal/search?q=router&page=2" | egrep -i "cache-control|etag|vary"
Cache-Control: public, max-age=60
ETag: "9a1d-17c2f2c"
Vary: Accept-Encoding
Meaning: Good: cacheable response with ETag; pagination URLs likely cache well.
Decision: Keep pagination URLs stable; for infinite scroll/cursors, consider cache-friendly cursor tokens and short TTLs.
Task 6: Detect personalized cursor tokens killing cache hit rates
cr0x@server:~$ curl -sI "http://app.internal/feed?limit=50&cursor=abc" | egrep -i "cache-control|vary|set-cookie"
Cache-Control: private, no-store
Vary: Authorization
Set-Cookie: session=...
Meaning: The response is explicitly uncacheable and varies by Authorization.
Decision: Accept the cost (capacity plan), or redesign: split personalized data from public card content; cache what you can.
Task 7: Find UI-induced request storms (requests per minute)
cr0x@server:~$ sudo awk '{print $4}' /var/log/nginx/access.log | cut -d: -f1-3 | sort | uniq -c | tail -n 5
812 [29/Dec/2025:09:09
945 [29/Dec/2025:09:10
990 [29/Dec/2025:09:11
1044 [29/Dec/2025:09:12
1202 [29/Dec/2025:09:13
Meaning: Traffic is ramping quickly. If this coincides with a frontend release, suspect infinite scroll triggers/prefetch.
Decision: Roll back or flip the feature flag; then fix thresholds and concurrency limits.
Task 8: Confirm client memory growth via node process RSS (SSR or BFF)
cr0x@server:~$ ps -o pid,rss,cmd -C node | head -n 5
PID RSS CMD
3221 485000 node server.js
3380 512300 node server.js
Meaning: RSS is large; repeat the snapshot over time to confirm growth. A likely cause is server-side rendering caching too much list state per session.
Decision: Stop retaining per-session list state server-side; cache templates or fragments, not user-specific scroll history.
Task 9: Measure API tail latency at the edge (p95/p99 proxy stats)
cr0x@server:~$ sudo awk '$7 ~ /^\/feed/ {print $NF}' /var/log/nginx/access.log | tail -n 5
rt=0.112
rt=0.984
rt=1.203
rt=0.221
rt=1.544
Meaning: Response times vary heavily; p99 will feel like “the app is broken” even if median is fine.
Decision: Fix backend query shape; add timeouts + fallback UI; reduce per-request payload.
Task 10: Confirm disk I/O pressure during deep pagination
cr0x@server:~$ iostat -xm 1 3
Linux 6.5.0 (db01) 12/29/2025 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
12.40 0.00 6.10 9.80 0.00 71.70
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s w_await aqu-sz %util
nvme0n1 520.0 41280.0 0.0 0.00 18.20 79.38 40.0 2048.0 3.10 9.55 88.00
Meaning: High read I/O and high utilization; deep paging may be forcing disk reads and sorts.
Decision: Fix indexes and query plan; add caching; consider read replicas only after query shape is sane.
Task 11: Verify CDN/object caching for images in scroll lists
cr0x@server:~$ curl -sI "http://cdn.internal/images/item123?w=640" | egrep -i "cache-control|age|etag"
Cache-Control: public, max-age=31536000, immutable
ETag: "img-7c21"
Age: 18422
Meaning: Good caching. Infinite scroll will still load many images, but repeat views won’t re-download as much.
Decision: Keep image URLs stable and immutable; avoid generating unique URLs per request.
Task 12: Detect “endless spinner” errors in client logs shipped to server
cr0x@server:~$ sudo grep -E "feed_load_failed|pagination_fetch_error" /var/log/app/client-events.log | tail -n 5
2025-12-29T09:11:22Z feed_load_failed cursor=eyJsYXN0X2lkIjoxMjM0NTYsImxhc3RfY3JlYXRlZF9hdCI6Ii4uLiJ9 status=504
2025-12-29T09:11:25Z feed_load_failed cursor=eyJsYXN0X2lkIjoxMjM0NTYsImxhc3RfY3JlYXRlZF9hdCI6Ii4uLiJ9 status=504
2025-12-29T09:11:28Z feed_load_failed cursor=eyJsYXN0X2lkIjoxMjM0NTYsImxhc3RfY3JlYXRlZF9hdCI6Ii4uLiJ9 status=504
Meaning: Repeated 504s for the same cursor: backend timeout plus client retry loop.
Decision: Add exponential backoff and a user-visible “Retry” button; fix backend timeout cause before increasing timeouts.
Task 13: Confirm the API returns stable ordering keys for cursors
cr0x@server:~$ curl -s "http://app.internal/feed?limit=3" | jq '.items[] | {id, created_at}'
{
"id": 981223,
"created_at": "2025-12-29T09:13:01.002Z"
}
{
"id": 981222,
"created_at": "2025-12-29T09:13:00.991Z"
}
{
"id": 981221,
"created_at": "2025-12-29T09:13:00.990Z"
}
Meaning: The list exposes stable keys; you can build a cursor on (created_at, id).
Decision: Use composite cursor; avoid ordering by mutable fields like updated_at for core pagination.
Task 14: Check whether HTML history anchors exist for SEO and back navigation
cr0x@server:~$ curl -s "http://app.internal/search?q=router&page=2" | grep -Eo 'rel="(next|prev)"' | sort | uniq -c
1 rel="next"
1 rel="prev"
Meaning: The page declares next/prev relationships. That helps crawlers and also clarifies navigation structure.
Decision: Keep this for paginated lists; for infinite scroll, expose equivalent paginated URLs under the hood.
Common mistakes: symptoms → root cause → fix
1) “Back button takes me to the top”
Symptom: Users click an item, return, and lose their place.
Root cause: Scroll position not persisted; URL not updated with an anchor; list state discarded.
Fix: Store scroll position in history state; update URL with current page/cursor anchor; restore from cached items or re-fetch quickly.
2) “I see duplicates / missing items while scrolling”
Symptom: Items repeat, or gaps appear after loading more.
Root cause: Unstable ordering (updated_at), cursor not tied to a unique ordering key, concurrent requests racing, or dedupe missing.
Fix: Use immutable ordering (created_at + id); enforce single in-flight request; dedupe by ID on the client and server.
3) “It loads forever” (endless spinner)
Symptom: Loader spins; nothing new appears; user keeps scrolling.
Root cause: Retry loop on 5xx/timeout; failure state not surfaced; the scroll trigger keeps firing.
Fix: Make failures terminal with “Retry”; add exponential backoff; add circuit breaker behavior; log cursor and request ID.
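The backoff part can be this small. A sketch with full jitter; the base and cap are illustrative defaults, not a standard:

```python
import random

# Sketch: capped exponential backoff with full jitter for retrying
# a failed page fetch. Jitter prevents synchronized retry storms.

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0,
                  jitter: bool = True) -> float:
    """Delay before retry `attempt` (0-based): base * 2^attempt, capped."""
    delay = min(cap, base * (2 ** attempt))
    return random.uniform(0, delay) if jitter else delay
```

Pair this with a retry cap (e.g. three attempts) and a terminal "Retry" button; backoff alone does not stop an endless spinner.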
4) “Page 1 is fast, page 200 is unusable”
Symptom: Deep navigation is slow; p99 blows up.
Root cause: Offset pagination scanning large ranges; missing composite indexes.
Fix: Cursor pagination; add matching index; cap deep page access; offer time-based jump/filter.
5) “Scrolling is janky on mobile”
Symptom: Low FPS, delayed touch response, device heats up.
Root cause: Too many DOM nodes, heavy images, layout thrash, main-thread work on scroll.
Fix: Virtualize; use image placeholders and correct sizes; avoid synchronous work on scroll events; throttle observers.
6) “Analytics are nonsense after we switched to infinite scroll”
Symptom: Conversions drop (or spike) mysteriously; attribution breaks.
Root cause: Pageview-based tracking doesn’t map to infinite scroll; missing events for “items viewed” and “depth.”
Fix: Track exposure events (items rendered/visible), depth reached, and anchor changes; keep URLs updated to maintain semantics.
7) “Our cache hit rate fell off a cliff”
Symptom: CDN/reverse proxy hit ratio drops after rollout.
Root cause: Personalized cursors, Authorization variation, private/no-store headers.
Fix: Split public from private data; cache card payloads separately; keep cursor tokens deterministic; use short TTL where safe.
8) “Users can’t reach the footer”
Symptom: Support says “I can’t find contact/legal/settings.”
Root cause: Infinite scroll removed natural end-of-page.
Fix: Provide a sticky utility bar, a reachable footer via “Pause loading,” or a “Go to footer” affordance.
Checklists / step-by-step plan
Step-by-step: choosing the pattern
- Classify intent: searching/comparing (pagination) vs browsing (infinite or load-more).
- Define “return to where I was”: is this a requirement? If yes, design history anchors up front.
- Pick an API model: cursor-based for large/dynamic datasets; offset only for small, stable lists.
- Decide on stable ordering keys: immutable sort keys with a tiebreaker.
- Set a performance budget: max time to next batch, max requests per session, max DOM nodes.
- Plan caching: what can be public, what must be private, and where TTLs make sense.
- Instrument analytics semantics: exposure, depth, anchor changes, retry behavior.
- Roll out with guardrails: feature flag, canary, and a kill switch.
Checklist: pagination UI that doesn’t irritate people
- URL reflects state (filters/sort/page/cursor).
- Shows total count or a meaningful approximation (and labels it honestly).
- Keyboard-accessible controls, focus management, and ARIA labels.
- Next/prev plus page window; no 200-link monstrosity.
- Prefetch next page only when it won’t create burst traffic.
- Deep page access either supported efficiently or deliberately constrained.
Checklist: infinite scroll that doesn’t melt devices
- Virtualization enabled; DOM node count bounded.
- Single in-flight request; cancels stale calls; dedupes items.
- Clear error states with Retry; no infinite spinner.
- History state + URL anchor updates; Back returns to same place.
- Footer/navigation accessible via a persistent UI element.
- Request budget enforced (max depth, max prefetch, max concurrent media loads).
Checklist: backend requirements for both patterns
- Index matches sort order.
- Cursor tokens include all necessary ordering/filter context or are rejected safely.
- Response includes has_more and a next cursor/page pointer.
- Rate limiting and retries are coordinated (429 with Retry-After semantics).
- Observability: request IDs, cursor/page in logs, and latency percentiles.
FAQ
1) Should I always prefer infinite scroll on mobile?
No. Mobile users have less patience for slow loads and less memory for huge DOMs. If the task is search/comparison, pagination (or “load more”) is often better.
2) Is “Load more” just lazy pagination?
It’s pagination with a friendlier interaction model. It’s explicit, easier to make accessible, and easier to debug. For many products it’s the sweet spot.
3) Why does offset pagination get slow at high page numbers?
Because the database often has to scan/skip a lot of rows to reach the offset, then sort or filter. Even with indexes, deep offsets can force work proportional to how far you’ve skipped.
4) Will cursor pagination fix duplicates completely?
It fixes many causes, but not all. You still need stable ordering keys and a tiebreaker. And you still need client dedupe if you can issue duplicate requests.
5) How do I make infinite scroll SEO-friendly?
Expose paginated URLs that represent the same content slices, and make them reachable (server-rendered or at least discoverable). Infinite scroll can be the client experience; the crawlable structure should still exist.
6) Do users prefer infinite scroll?
Users prefer whatever helps them finish their job with the least friction. For browsing, infinite can feel smooth. For finding, comparing, and returning, pagination usually wins.
7) What’s the simplest way to stop infinite scroll from causing traffic spikes?
Enforce one in-flight request, prefetch at most one page ahead, and require user intent (scroll threshold) before prefetching. Add backoff and cap retries.
8) Should I show total result counts?
If users make decisions based on scope (“only 23 results” vs “12,000 results”), yes. If counts are expensive, show an estimate or omit and show ranges—don’t lie.
9) Can I keep page numbers with cursor pagination?
You can, but it’s tricky. Cursor pagination doesn’t naturally map to arbitrary page jumps. If page numbers are required, consider storing cursors per page in the client session, or provide time-based jumps instead.
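One way to sketch the "store cursors per page" idea; the class and method names are hypothetical:

```python
# Sketch: keeping page numbers on top of cursor pagination by
# remembering, per session, the cursor that starts each page.

class PageCursorMap:
    def __init__(self):
        self.cursors = {1: None}  # page 1 starts with no cursor

    def record(self, page: int, next_cursor: str):
        """After loading `page`, remember where page+1 starts."""
        self.cursors[page + 1] = next_cursor

    def cursor_for(self, page: int):
        """Cursor for a page we've seen; arbitrary jumps aren't supported."""
        if page not in self.cursors:
            raise KeyError(f"page {page} not reachable without loading {page - 1}")
        return self.cursors[page]
```

Note the limitation it makes explicit: you can revisit pages you have loaded, but you cannot jump to page 500 cold. That is usually the honest trade.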
10) What is the best default for enterprise admin tables?
Pagination, with server-side sorting and filtering, and stable URLs. Add “load more” only if you can guarantee Back behavior and performance under deep use.
Conclusion: next steps that won’t waste your quarter
If your list is a tool, ship pagination (or “load more”) with stable URLs and cursor-based APIs. If your list is entertainment, infinite scroll can be fine—but only with virtualization, concurrency control, and real history semantics.
Next steps that pay off quickly:
- Audit your backend queries for deep OFFSET usage and fix the query shape before you tweak the UI.
- Decide and document the ordering key for pagination and make it immutable with a tiebreaker.
- Add an anchor model (page/cursor) to URLs and history state so Back behaves like users expect.
- Instrument depth, exposure, retries, and tail latency—then set a request-per-session budget.
- Roll out with a kill switch. You will thank yourself later, usually at 2:13 a.m.