The most expensive SEO bugs I’ve seen weren’t clever algorithm updates. They were boring frontend decisions:
a button that isn’t a button, a menu that only works with a mouse, a page whose “content” exists mostly in divs and hope.
Google doesn’t rage-quit like a user, but it does quietly index less, understand less, and rank you accordingly.
Accessibility (A11y) work is usually sold as compliance, empathy, or “the right thing.” True, and also: it’s a
ruthless way to remove ambiguity from your DOM. Search engines love unambiguous. So do incident responders.
Why A11y and SEO overlap (and where they don’t)
Search engines are not screen readers, but they share a basic dependency: your site has to explain itself in
machine-readable structure. If your UI relies on visual cues (“that thing over there”) or on implicit behavior
(click handlers on divs), you’ve written a human-only interface. Humans are great, but they don’t run indexing at scale.
The overlap: structure, clarity, and resilience
- Semantic HTML gives both assistive tech and crawlers a map. Headings, lists, landmarks, buttons, and form controls reduce guesswork. Guesswork doesn’t rank.
- Readable text alternatives (alt text, link text, aria-label where appropriate) improve comprehension when images don’t load, JS breaks, or a crawler decides it’s not rendering your app today.
- Performance and stability are shared dependencies. If your page janks, shifts, or never settles, keyboard users suffer and Core Web Vitals suffer. The latter is a ranking factor; the former is a lawsuit vector.
- Navigation that works without a mouse is usually navigation that’s predictable in the DOM, which is usually navigation that’s easier to parse and less fragile across devices and bots.
The non-overlap: ARIA is not a ranking hack
Let’s be crisp: adding ARIA attributes does not magically boost SEO. Sometimes ARIA is necessary; often it’s a
band-aid over missing semantics. Use native elements first. ARIA comes second. If your SEO strategy includes
“we’ll add aria-label to everything,” your strategy is “we’ll spend money and still be confused.”
One operational truth applies here: reliability follows defaults. The browser defaults for native elements
are astonishingly good. The defaults for bespoke div soup are… creative.
A paraphrased idea from Richard Cook (systems safety): success hides the work that prevents failure until a small change exposes it.
A11y and SEO are both places where that work is either done up front—or paid for during an incident.
Interesting facts and history that explain today’s mess
A11y and SEO didn’t “converge” because of a committee. They converged because both care about interpretable structure,
predictable behavior, and content that survives adversity: slow networks, broken JS, old devices, and humans who don’t
use the interface the way your designer does.
- The term “accessibility” predates modern web apps. Early assistive technologies existed long before React; the web inherited decades of expectations around keyboard use and text alternatives.
- WCAG 1.0 shipped in 1999. That’s older than most build pipelines currently holding your frontend hostage.
- The ADA is from 1990. Many “modern” companies discovered it when a demand letter arrived, not when they designed navigation.
- HTML5 became a W3C Recommendation in 2014, standardizing semantic elements like <nav>, <main>, and <article>. They weren’t invented for SEO, but they help crawlers and assistive tech agree on what a page is.
- Google folded Core Web Vitals into its ranking systems in 2021. LCP, CLS, and FID (since replaced by INP) aren’t “a11y metrics,” but the same bad UI patterns tend to break both.
- WAI-ARIA 1.0 became a W3C Recommendation in 2014. It exists because authors kept building custom controls without the affordances native controls provide.
- Screen readers parse a tree of semantics, not your CSS. If your “button” is a div with a click handler, it’s invisible to that semantic layer unless you rebuild what the browser would’ve done for free.
- Search engines may not execute your JS the way you do. Rendering can be delayed, limited, or skipped. Accessible, semantic HTML increases the chance your meaning survives partial rendering.
- CAPTCHAs were born to stop bots; they often stop humans too. When your conversion funnel blocks actual users, no SEO campaign can save the revenue graph from physics.
Joke #1: We keep inventing new frontend frameworks to avoid writing HTML, and then we spend weeks re-inventing what HTML already did.
Fast diagnosis playbook: find the bottleneck in 15 minutes
When SEO drops or engagement tanks, teams often panic and “ship more content.” If the frontend is semantically broken,
more content is like adding books to a library whose catalog is on fire.
First: confirm the failure mode (indexing vs understanding vs experience)
- Indexing issue: pages missing from search, sudden crawl anomalies, canonical confusion, robots directives.
- Understanding issue: wrong snippets, irrelevant queries, poor rankings despite being indexed.
- Experience issue: bounce rate spikes, conversion drops, Core Web Vitals regressions, mobile pain.
Second: check the DOM’s truth, not the design’s intention
- Is there exactly one <h1>, and does it match the page topic?
- Do navigation and primary actions use native <a> and <button>?
- Are images that convey meaning described with meaningful alt?
- Do forms have <label> and error messaging that is announced?
Third: test the “no-JS” and “slow-JS” reality
- Fetch the raw HTML. Is the main content there, or is it “Loading…”?
- Simulate slow 3G + CPU throttling. Does layout jump? Do menus become unusable?
- Run Lighthouse and axe against the same URL on the same commit. Track deltas.
Fourth: decide what to fix first
Fix the things that block both humans and crawlers:
semantic headings, broken links, inaccessible navigation, missing labels, and catastrophic performance regressions.
“Pixel-perfect focus rings” can wait until your site is findable and operable.
Accessibility fixes that also improve SEO (the practical list)
1) Use real headings, in order, for real structure
Headings are your document outline. For screen reader users, they’re a primary navigation method. For search engines,
they help infer topic and hierarchy.
Do this:
- Exactly one <h1> per page, describing the unique page topic.
- Use <h2> for major sections, <h3> for subsections. Don’t skip levels to “make it smaller.”
- Don’t use headings for styling. If you need big text, use CSS on a <p> or <div> and keep semantics honest.
Avoid:
multiple <h1> used as decorative hero text, or headings inside repeated components that turn every card title into an <h2>.
That creates a page with 47 “main topics.” Crawlers don’t know which one you mean; users don’t either.
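The one-H1 rule is easy to gate mechanically. A minimal sketch using grep against a hypothetical saved page (in practice, curl the rendered HTML of your top templates into the file first; the sample markup below is invented for illustration):

```shell
# Hypothetical sample; in practice: curl -sS https://www.example.com/ > headings.html
cat > headings.html <<'HTML'
<h1>Widget</h1>
<h2>Features</h2>
<h2>Pricing</h2>
<h3>Enterprise tier</h3>
HTML

# Count h1 occurrences; anything other than exactly 1 is worth failing CI on.
h1_count=$(grep -o '<h1[ >]' headings.html | wc -l)
echo "h1 count: $h1_count"
```

This is a blunt instrument (it won’t catch headings injected by JS), but it turns “we think the outline is fine” into a checkable fact.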
2) Link text that explains itself (and doesn’t repeat “click here”)
Screen readers often list links out of context. Search engines also analyze anchor text to understand relationships.
“Read more” repeated 20 times is not a relationship; it’s a shrug.
- Use descriptive link text: “Pricing for enterprise plans,” not “Learn more.”
- If you must use “Read more,” add visually hidden context: “Read more about incident response.”
- Buttons perform actions; links navigate. Don’t mix them because “it works.”
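A crude audit for self-explanatory link text: dump anchors and flag the generic offenders. A sketch against hypothetical sample markup; the regex is deliberately rough and only catches the obvious patterns:

```shell
cat > links.html <<'HTML'
<a href="/pricing">Pricing for enterprise plans</a>
<a href="/blog/incidents">Read more</a>
<a href="/docs">Click here</a>
HTML

# Flag anchors whose text means nothing out of context.
grep -oiE '<a [^>]*>(read more|learn more|click here)</a>' links.html
```

If this prints more than a handful of lines per template, your link text is a shrug at scale.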
3) Alt text: not an SEO slot machine, a content contract
Good alt text improves image search, helps non-visual users, and helps when images fail to load. It also makes your content
portable across contexts (reader mode, syndication, low-bandwidth).
- Functional images (icons that act as buttons/links) need alt text that describes the action: “Search,” “Open menu.”
- Informational images need alt that conveys the info succinctly.
- Decorative images should have empty alt (alt="") so assistive tech can skip them.
Avoid keyword stuffing. If you write “best affordable enterprise cloud storage platform” in alt text for a photo of your office dog,
you deserve the ranking you get.
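The worst alt bug is the missing attribute, and it’s greppable. A minimal sketch over hypothetical sample markup (a template linter like eslint-plugin-jsx-a11y is the durable fix; this is the quick triage version):

```shell
cat > images.html <<'HTML'
<img src="/dog.jpg" alt="Office dog asleep on a beanbag">
<img src="/divider.png" alt="">
<img src="/hero.jpg">
HTML

# Print img tags with no alt attribute at all -- the bug class that hurts everyone.
grep -oE '<img[^>]*>' images.html | grep -v 'alt='
```

Empty output means every image at least made a decision; any line printed is an image nobody thought about.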
4) Landmarks: make the page navigable in one keystroke
Use <header>, <nav>, <main>, <footer>, and region labels where needed.
This doesn’t “rank” directly. It reduces confusion, which reduces broken journeys, which improves user signals and conversions.
And it makes audits straightforward.
- One <main> per page.
- If you have multiple navs (top nav + sidebar), label them (e.g., aria-label="Primary").
- Add a “Skip to content” link as the first focusable element.
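Both rules above are checkable from the raw response. A sketch over hypothetical sample markup; real pages need the rendered HTML, but the assertions are the same:

```shell
cat > landmarks.html <<'HTML'
<body>
<a href="#content">Skip to content</a>
<nav aria-label="Primary">...</nav>
<nav aria-label="Footer">...</nav>
<main id="content">...</main>
</body>
HTML

# Exactly one main landmark per page.
main_count=$(grep -c '<main[ >]' landmarks.html)
echo "main count: $main_count"

# Skip link present in the initial HTML?
grep -c 'Skip to content' landmarks.html
```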
5) Forms: labels, errors, and autocomplete aren’t optional
A form that can’t be completed is a conversion bug. SEO brings traffic; accessibility decides whether that traffic can pay you.
Also, search engines increasingly reward sites that don’t feel like traps.
- Every input gets a programmatic label: <label for> or aria-labelledby if you must.
- Error messages must be linked via aria-describedby and announced using live regions when they appear.
- Use autocomplete tokens. It reduces friction and helps assistive tech.
- Don’t use placeholder text as a label. Placeholders disappear; your users don’t.
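The label-for-every-input rule can be approximated in shell. A heuristic sketch over a hypothetical form: it only checks id/for pairing, so it misses aria-labelledby and wrapped labels, but it catches the most common regression:

```shell
cat > form.html <<'HTML'
<label for="email">Email</label>
<input id="email" type="email" autocomplete="email">
<input id="promo" type="text" placeholder="Promo code">
HTML

# List input ids that no label's for= attribute points at.
for id in $(grep -oE '<input[^>]*id="[^"]+"' form.html | sed 's/.*id="//;s/"$//'); do
  grep -q "for=\"$id\"" form.html || echo "unlabeled input: $id"
done
```

Here the promo input is flagged: its placeholder is doing a label’s job, which is exactly the anti-pattern above.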
6) Keyboard support: your site should work in “no mouse mode”
Keyboard operability is accessibility 101, and it’s also a litmus test for DOM sanity. If the tab order is chaotic,
your markup is chaotic. Chaotic markup tends to produce chaotic rendering and brittle scripts, which becomes SEO and reliability debt.
- Ensure all interactive elements are reachable and operable with keyboard.
- Visible focus styles. Not “subtle.” Visible.
- Don’t trap focus in modals; return focus to the triggering element.
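Removed focus styles usually arrive as an innocent-looking CSS rule, so scan for them. A minimal sketch over a hypothetical stylesheet; in CI, point it at your built CSS bundle:

```shell
cat > styles.css <<'CSS'
button:focus { outline: none; }
a:focus-visible { outline: 2px solid #0a58ca; outline-offset: 2px; }
CSS

# Flag rules that remove the focus indicator; each hit needs a replacement style nearby.
grep -nE 'outline:[[:space:]]*(none|0)' styles.css
```

A hit isn’t automatically a bug (a custom focus style may follow), but every hit deserves a reviewer’s eyes.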
7) Navigation menus: stop building them like games
Mega-menus and fancy navs often ship as pointer-only experiences. Then the team patches them with ARIA roles
and key handlers, and half of it still breaks on Safari with VoiceOver. Use native patterns unless you have a strong reason not to.
- Use <button> for toggles, <a> for destinations.
- Ensure expanded state is reflected (e.g., aria-expanded).
- Keep nav HTML present in the initial response where feasible; don’t gate your IA behind JS.
8) Don’t hide content in inaccessible tabs/accordions
Collapsible UI is fine. Collapsible UI that removes content from the accessibility tree, breaks focus, or depends on hover is not.
For SEO: if the content exists only after a fragile client-side interaction, some crawlers will miss it.
9) Structured data: align it with visible, accessible content
Structured data is not an a11y feature, but the discipline overlaps: your page should say the same thing to humans,
assistive tech, and machines. If your schema claims five-star reviews but your visible content shows none, you’re creating
distrust signals. Crawlers and users both hate that.
10) Performance is accessibility, and performance is SEO
A slow site is an inaccessible site for users with limited bandwidth, older hardware, or cognitive fatigue. It’s also
a site that loses rankings and conversions. Biggest offenders:
- Large hero images without responsive sizing.
- Client-side rendering that delays content and headings.
- Layout shifts from late-loading fonts, ads, and images without dimensions.
- Hydration and third-party scripts that block interactivity.
11) Don’t break pages with “infinite scroll only”
Infinite scroll can be accessible if implemented carefully, but it’s often shipped without proper pagination semantics.
For SEO, pagination is still a workhorse. For accessibility, predictable navigation is basic decency.
12) Language and metadata: set the basics correctly
Set lang on the document. Ensure titles are unique and match the on-page heading. Use meta descriptions that describe
the content, not the brand manifesto. This helps screen readers pick pronunciation rules and helps search snippets match intent.
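Both basics are visible in the first few lines of the response, so they’re cheap to gate. A sketch over a hypothetical document head; pair it with the per-route title loop in Task 14 below for uniqueness:

```shell
cat > head.html <<'HTML'
<!doctype html>
<html lang="en">
<head><title>Pricing - Example</title></head>
HTML

# Document language declared on the html element?
grep -c '<html[^>]*lang="' head.html

# Extract the title for a manual sanity check against the on-page H1.
grep -oE '<title>[^<]*</title>' head.html
</HTML
```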
Joke #2: Nothing says “premium brand experience” like a modal you can’t close without a mouse.
Three corporate mini-stories from the trenches
Story 1: The incident caused by a wrong assumption (“Google renders everything like Chrome, right?”)
A mid-sized SaaS company rebuilt their marketing site as a single-page app. It looked fantastic. Animations, transitions,
tasteful gradients. The CTO liked it, which is usually either a good sign or the beginning of an incident.
The assumption was simple: “Search engines execute JavaScript like a normal browser.” In development, they tested with
modern Chrome on fast laptops. In production, they shipped client-side rendering for most content, with a skeleton screen
and API calls to populate sections, including the H1 and primary copy.
Within weeks, organic traffic drifted down. Not a cliff, which would’ve been merciful. A slow leak. Sales complained first.
The SEO team blamed content. Engineering blamed “algorithm changes.” Support tickets mentioned blank pages on flaky hotel Wi‑Fi.
No one connected the dots because the site “worked on my machine,” which is the oldest lie in ops.
The fix wasn’t exotic. They added server-side rendering for core content, ensured the HTML response contained the H1 and
main copy, and made navigation links real anchors. Then they audited headings and removed duplicated H1s introduced by a
shared hero component. Rankings stabilized. Conversions improved. The animations remained, because the world is unjust.
The lesson: if your primary content isn’t in the initial HTML, you’re betting revenue on rendering budgets you don’t control.
A11y people have been warning about this for years. SEO people are just now bringing receipts.
Story 2: The optimization that backfired (“We removed outlines and improved CLS… kind of”)
An enterprise ecommerce team had a mandate: improve Core Web Vitals. They got serious. They reduced CSS, optimized images,
and tackled layout shift. Progress was real—until they “optimized” focus styles away.
The reasoning was aesthetic and misguided: focus rings “look ugly” and “cause visual jitter.” Someone also believed
removing outlines would reduce layout shift. It did not. It just removed the only visible indicator of keyboard focus.
What happened next was predictable: keyboard users couldn’t tell where they were. Support calls rose about checkout issues
that sounded like user error but weren’t. Meanwhile, session recordings showed rage clicks and repeated tabbing.
Bounce rate rose in a way analytics couldn’t fully explain because keyboard friction doesn’t always show up as a clean funnel drop.
Then the compliance team got involved. The fix was small: restore focus styles, ensure they don’t cause layout shift by using
outline (which doesn’t affect layout) instead of borders, and test in keyboard-only mode as a release gate.
The lesson: “optimization” that reduces visible affordances is not optimization. It’s sabotage with a performance ticket attached.
If you want to reduce CLS, reserve space for images and fonts. Don’t blind the user.
Story 3: The boring but correct practice that saved the day (release gates + regression budgets)
A fintech company had a weekly release cadence and a habit of shipping small UI experiments. Experiments are fine.
Unmeasured experiments are how you end up with a Frankenstein interface.
They instituted a boring practice: every PR affecting frontend templates ran two automated checks in CI:
Lighthouse (performance + basic accessibility) and axe-core for accessibility rules. Failing thresholds blocked merges.
No exceptions, no “just this once,” no “but it’s Friday.”
One week, a developer introduced a “simple” component: a card grid where each card was clickable. They implemented it as
<div onClick> with nested links. It worked with a mouse. It also produced invalid interactive nesting, broke
keyboard navigation, and confused assistive tech.
The CI checks failed immediately: missing button semantics, duplicate link targets, and a broken tab order. The developer
converted the card to a real link with a sensible clickable area and removed nested interactive elements. The release shipped.
Nobody celebrated. That’s the point.
The lesson: the most valuable A11y and SEO work is regression prevention. Your future self will never thank you loudly, but they will sleep.
Common mistakes: symptom → root cause → fix
1) Symptom: “We’re indexed, but rankings are poor and snippets look wrong”
- Root cause: headings are decorative; the page outline is nonsense; multiple H1s; key content hidden behind JS.
- Fix: enforce heading hierarchy, render core content server-side or pre-render, ensure the title and H1 match topic.
2) Symptom: “Users bounce on mobile; CWV regressed; support says ‘site is glitchy’”
- Root cause: layout shift from images without dimensions, late-loading fonts, or injecting banners/modals after load.
- Fix: set width/height or aspect-ratio, preload critical fonts responsibly, reserve space for dynamic UI, delay noncritical scripts.
3) Symptom: “Keyboard users can’t use the menu / forms”
- Root cause: clickable divs, missing focus management, hover-only interactions, focus styles removed.
- Fix: use native elements, implement proper focus trapping/return, add visible focus using outline, test tab order.
4) Symptom: “Image search traffic is low; product images don’t show well”
- Root cause: missing or junk alt text, decorative images given noisy alt, lazy-loading applied indiscriminately.
- Fix: write meaningful alt for informative images, use alt="" for decorative, avoid lazy-loading above-the-fold hero images.
5) Symptom: “Crawl budget seems wasted; lots of near-duplicates”
- Root cause: faceted navigation creates infinite URL variants; canonical tags wrong; pagination hidden in JS.
- Fix: constrain facets, set canonicals correctly, provide crawlable pagination links, use robots directives intentionally.
6) Symptom: “We added ARIA everywhere; audits still fail”
- Root cause: ARIA used as a substitute for native semantics; roles conflict with elements; labels duplicate or missing.
- Fix: remove unnecessary ARIA, use native controls, apply ARIA only when building truly custom components with full keyboard support.
7) Symptom: “Modal popups tank conversions and time-on-site”
- Root cause: focus trap missing, close button not reachable, ESC key not handled, background scroll not controlled.
- Fix: accessible modal pattern: focus moves in, is trapped, returns to trigger, supports ESC, close button is first-class.
8) Symptom: “Internal search works, but external search doesn’t find deep content”
- Root cause: content rendered only after client-side API calls; deep pages not linked or only reachable via scripts.
- Fix: ensure crawlable links, server-render key pages, generate sitemaps, avoid gating navigation behind click handlers.
Hands-on tasks with commands: what to run, what it means, what to do
These tasks are designed for a production-minded workflow: verify with a command, interpret output, then make a decision.
You can run most of them from a CI runner or a debugging host. They won’t fix your DOM for you, but they will stop you from arguing with opinions.
Task 1: Fetch raw HTML and verify main content exists without JS
cr0x@server:~$ curl -sS -L -A "Mozilla/5.0 (compatible; DebugBot/1.0)" https://www.example.com/product/widget | sed -n '1,80p'
<!doctype html>
<html lang="en">
<head>
<title>Widget - Example</title>
...
</head>
<body>
<main>
<h1>Widget</h1>
<p>Our Widget helps you...</p>
What the output means: You can see <main>, <h1>, and real copy in the initial response.
That’s good for crawlers and for users on bad networks.
Decision: If you only see a skeleton (“Loading…”) and scripts, prioritize SSR/pre-rendering or at least static HTML for core content.
Task 2: Verify canonical and robots directives in headers
cr0x@server:~$ curl -sSI https://www.example.com/blog/post | egrep -i 'x-robots-tag|cache-control|content-type'
content-type: text/html; charset=utf-8
cache-control: max-age=0, must-revalidate
What the output means: No X-Robots-Tag header is present here. That’s normal.
Decision: If you see X-Robots-Tag: noindex on pages you want indexed, stop everything and fix the header at the CDN/app layer.
Task 3: Check meta robots and canonical in HTML
cr0x@server:~$ curl -sS https://www.example.com/blog/post | egrep -i 'rel="canonical"|name="robots"' | head
<link rel="canonical" href="https://www.example.com/blog/post">
<meta name="robots" content="index,follow">
What the output means: Canonical points to itself; robots allow indexing.
Decision: If canonical points to a different page unintentionally (common with UTM stripping gone wrong), fix templates and reindex.
Task 4: Validate heading count quickly (rough but useful)
cr0x@server:~$ curl -sS https://www.example.com/product/widget | pup 'h1,h2,h3 text{}' | nl | sed -n '1,30p'
1 Widget
2 Features
3 Reliability
4 Pricing
5 FAQ
What the output means: You’re getting a clean outline extract. That’s a sanity check that the DOM has real structure.
Decision: If the output shows repeated headings from cards (“Learn more” as H2 everywhere), refactor components and demote headings.
Task 5: Run Lighthouse in CI mode for accessibility and performance
cr0x@server:~$ lighthouse https://www.example.com/ --only-categories=accessibility,performance --output=json --output-path=./lh.json --chrome-flags="--headless=new"
Lighthouse CLI v12.0.0
✔ Generated report to ./lh.json
What the output means: A JSON report exists; you can diff it across commits and extract category scores and audits.
Decision: Set budgets. If performance or a11y drops beyond a threshold, fail the build and fix before release.
Task 6: Extract key Lighthouse audits (LCP/CLS plus a11y signals)
cr0x@server:~$ jq -r '.categories.accessibility.score, .categories.performance.score, .audits["largest-contentful-paint"].displayValue, .audits["cumulative-layout-shift"].displayValue' lh.json
0.92
0.74
3.8 s
0.21
What the output means: Accessibility is decent; performance is weak; LCP and CLS are outside “good” targets.
Decision: Treat CLS > 0.1 and LCP > 2.5s as a performance/a11y incident for key landing pages. Investigate layout shifts and render-blocking work.
Task 7: Run axe-core via CLI on a URL
cr0x@server:~$ npx axe https://www.example.com/ --exit
✔ 0 violations found!
What the output means: Basic automated checks passed. This is not “fully accessible,” but it’s a good regression guardrail.
Decision: If violations appear, wire this into CI and block merges for new violations. Don’t let the backlog become a landfill.
Task 8: Identify missing alt attributes quickly
cr0x@server:~$ curl -sS https://www.example.com/ | pup 'img attr{alt}' | head
Summer campaign banner
Product photo: Widget in use
What the output means: At least some images have alt. This command doesn’t show missing alts directly—absence is the problem.
Decision: If you suspect missing alt, search for <img tags without alt in templates and enforce lint rules (e.g., eslint-plugin-jsx-a11y).
Task 9: Confirm gzip/brotli and payload size hints
cr0x@server:~$ curl -sSI -H 'Accept-Encoding: br,gzip' https://www.example.com/ | egrep -i 'content-encoding|content-length|vary'
vary: Accept-Encoding
content-encoding: br
What the output means: Brotli compression is active; the response varies on encoding. Good baseline for performance and accessibility on slow links.
Decision: If there’s no compression, fix CDN/web server config. Then re-check Lighthouse performance.
Task 10: Test if critical CSS/JS is cacheable (performance stability)
cr0x@server:~$ curl -sSI https://www.example.com/assets/app.9c1f3d2.js | egrep -i 'cache-control|etag|last-modified'
cache-control: public, max-age=31536000, immutable
etag: "a1b2c3d4"
What the output means: Fingerprinted assets are cached aggressively. That reduces repeat-visit latency and helps users who navigate multiple pages.
Decision: If assets are no-cache, you’re forcing unnecessary downloads and increasing the chance of slow, broken experiences.
Task 11: Spot heavy third-party scripts (a11y + CWV offender)
cr0x@server:~$ curl -sS https://www.example.com/ | egrep -o '<script[^>]+src="[^"]+"' | sed 's/.*src="//;s/"$//' | head
https://cdn.example.com/assets/app.9c1f3d2.js
https://thirdparty.example.net/tracker.js
https://chat.example.org/widget.js
What the output means: You can see third-party scripts being loaded. These often harm INP, add layout shift, and create focus issues.
Decision: Audit necessity. Defer, lazy-load, or remove. If a script adds a modal, force it to meet keyboard and focus requirements.
Task 12: Validate robots.txt quickly for accidental blocks
cr0x@server:~$ curl -sS https://www.example.com/robots.txt | sed -n '1,80p'
User-agent: *
Disallow: /admin/
Disallow: /internal/
Sitemap: https://www.example.com/sitemap.xml
What the output means: Reasonable disallows, sitemap declared.
Decision: If you see Disallow: / in production, treat it like a Sev-1. Roll back or hotfix immediately.
Task 13: Verify sitemap returns 200 and is parseable
cr0x@server:~$ curl -sSI https://www.example.com/sitemap.xml | head
HTTP/2 200
content-type: application/xml
What the output means: The sitemap is reachable and served as XML. Basic hygiene.
Decision: If it’s 404/500 or served as HTML, fix routing and caching. Then re-submit in your search console tooling.
Task 14: Detect SPA-style “title never changes” issues
cr0x@server:~$ for u in https://www.example.com/ https://www.example.com/pricing https://www.example.com/product/widget; do echo "== $u"; curl -sS $u | pup 'title text{}'; done
== https://www.example.com/
Example
== https://www.example.com/pricing
Example
== https://www.example.com/product/widget
Example
What the output means: Every page has the same title. That’s bad for SEO and confusing for assistive tech users who rely on titles.
Decision: Implement per-route titles server-side or via proper head management, and ensure the visible H1 aligns with the title.
Checklists / step-by-step plan
Step-by-step plan for the next two sprints
Sprint 1: Stabilize semantics and stop the bleeding
- Lock in a heading policy. One H1 per page, consistent H2/H3 outline. Add a lint rule or review gate.
- Fix interactive elements. Replace clickable divs with buttons/links. Remove nested interactive elements.
- Add/repair skip links and landmarks. One main, labeled navs, predictable structure.
- Form hygiene. Labels, autocomplete, accessible errors, focus management for validation.
- Restore focus styles. Use outline, not border. Ensure contrast.
- Automate regression checks. Lighthouse + axe in CI, with thresholds and “no new violations” rules.
Sprint 2: Performance and crawlability improvements that compound
- Fix CLS at the source. Reserve space for images/ads, stable fonts, avoid late DOM injections above the fold.
- Reduce JS cost. Remove third-party scripts, code split, defer noncritical features, measure INP.
- Make navigation crawlable. Ensure primary routes are real links in HTML, not JS-only click handlers.
- Alt text program. Define rules: decorative vs informative vs functional images. Enforce in components.
- Title + meta consistency. Unique titles, accurate meta descriptions, correct lang, canonical sanity.
- Pagination strategy. Provide accessible pagination controls and ensure deep pages are reachable and indexable.
Release gate checklist (print this, annoy your team)
- Keyboard-only smoke test: nav, primary CTA, forms, modal close.
- One H1, sane outline, no heading spam in repeated components.
- Links have descriptive text; no “click here” patterns at scale.
- Images: meaningful alt where needed, empty alt for decorative.
- Lighthouse budgets: accessibility and performance above thresholds.
- No accidental noindex headers/meta; canonical correct.
- CLS controlled on key templates; images have dimensions.
- Third-party scripts reviewed; no new blocking tags without justification.
FAQ
1) Does improving accessibility directly improve rankings?
Not as a single “a11y score” ranking factor. But the fixes that make a site accessible—semantic structure, crawlable navigation,
stable rendering, usable forms—often improve how content is discovered, parsed, and experienced. That improves the things that do matter.
2) Is ARIA good for SEO?
ARIA is mostly for assistive technologies, not search engines. Use it to make custom widgets accessible when native elements
can’t do the job. If you can use a native element, do that. It’s faster, more reliable, and usually more compatible.
3) If we already have SSR, are we done?
SSR helps, but you can still ship a semantically broken page: wrong headings, inaccessible controls, missing labels, and focus traps.
SSR is a transport mechanism. A11y is a design and implementation discipline.
4) Should every image have alt text?
Every <img> should have an alt attribute. Sometimes that alt is empty (alt="") for decorative images.
Functional and informative images need meaningful alt that matches what the image contributes.
5) We use icon-only buttons. What’s the right pattern?
Use a real <button> and provide an accessible name via visible text, aria-label, or aria-labelledby.
Ensure focus is visible and the button is reachable via keyboard. Then test with a screen reader at least once per release cycle.
6) Can “skip to content” affect SEO?
Indirectly. Skip links improve keyboard usability and reduce bounce/friction, especially on pages with large headers and nav.
They also force teams to maintain a coherent <main> region, which is good structural hygiene.
7) What’s the fastest a11y fix with the biggest SEO upside?
Clean up headings and navigation semantics, and ensure the initial HTML contains the meaningful content and H1. That improves
parsing and reduces dependence on fragile client-side rendering.
8) How do we prevent regressions without turning releases into a committee?
Automate. Run Lighthouse and axe in CI, set thresholds, and block merges on new violations. Pair that with one human keyboard-only
smoke test for key flows. It’s faster than triaging a quarter’s worth of “why did traffic drop?” meetings.
9) Do Core Web Vitals improvements help accessibility?
Often. Lower CLS reduces motion and confusion. Better INP reduces input lag that can be devastating for users relying on keyboard,
switch devices, or voice input. But don’t “optimize” by removing affordances like focus indicators.
10) We have a design system. Why are we still failing audits?
Design systems frequently ship components that look consistent but behave inconsistently: missing labels, non-semantic containers,
custom selects without keyboard support. Audit the system itself, not just product pages. Fix once; benefit everywhere.
Conclusion: next steps that survive a quarter’s worth of meetings
If you want A11y work that also moves SEO, stop treating accessibility as a compliance side quest and start treating it like
frontend reliability engineering. Your DOM is an API. Make it stable, semantic, and testable.
Do this next, in order
- Run the fast diagnosis playbook on your top 5 landing pages: raw HTML, headings, navigation semantics, CLS/LCP/INP.
- Fix the structural blockers: one H1, sane outline, real links/buttons, labeled forms, visible focus, skip link + landmarks.
- Put a gate in CI: Lighthouse + axe, with budgets and “no new violations.” Regression prevention beats heroics.
- Trim third-party scripts and stabilize layout. Performance work that reduces jank is accessibility work and SEO work.
- Institutionalize the boring checks: keyboard-only smoke test for every release, especially for nav, modals, and checkout.
The payoff isn’t just rankings. It’s fewer fragile UI incidents, fewer “works on my machine” debates, and a site that behaves like
a professional product instead of a demo that accidentally went live.