Google Search Console “Page with redirect”: When It’s Fine and When It Hurts

You open Google Search Console, click Pages, and there it is: “Page with redirect”.
Not indexed. Not eligible. Not invited to the party. Meanwhile your PM is asking why traffic dipped, your
marketing team is refreshing dashboards like it’s a sport, and you’re staring at a perfectly “working” redirect
in the browser.

Here’s the production-systems take: “Page with redirect” is often normal and even desirable. But it becomes toxic
when you’ve accidentally taught Google that your preferred URLs are unstable, contradictory, or slow to resolve.
This is the difference between a clean canonicalization layer and a redirect maze built by committee.

What “Page with redirect” actually means

In Search Console, “Page with redirect” is an indexing status, not a moral judgment.
It means Google tried to fetch a URL and got a redirect response instead of a final content response
(typically a 200). Google then decided the original URL is not the canonical, indexable URL. So it doesn’t index
that starting URL; it follows the redirect and (maybe) indexes the destination.
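
On the wire, the distinction is one status line. A quick illustration (example.com stands in for your site; output is illustrative):

cr0x@server:~$ curl -sSI http://example.com/old-path | head -n 2
HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/new-path/

Google records the 301 and indexes (or not) whatever sits at the end of the chain, not this URL.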

That’s why this report often contains lots of URLs you intentionally don’t want indexed: the old HTTP variants,
old paths after a migration, trailing-slash variants, uppercase/lowercase variants, and parameterized junk.

The key operational question is not “How do I make that status go away?” It’s:
Are the right destination URLs getting indexed, ranked, and served reliably?

What it is not

  • Not an error by default. A redirect is a valid response.
  • Not proof Google is confused. It’s proof Google is paying attention.
  • Not a guarantee the destination is indexed. The redirect can be followed into a dead end.

How Google treats it under the hood (practical version)

Googlebot fetches URL A, receives a redirect to URL B, and records a relationship.
If signals are consistent (redirect is stable, B returns 200 and is canonical, internal links point to B, sitemap
lists B, hreflang points to B, etc.), Google tends to consolidate indexing signals onto B and drop A.
If signals are inconsistent, you get the Search Console equivalent of a shrug.

One engineering reality that SEO folks sometimes gloss over: redirects are not free. They cost crawl budget,
add latency, can be cached weirdly, and can create edge-case behavior across CDNs, browsers, and bots. In
production, the simplest redirect is the one you don’t need.

When it’s fine (and you should ignore it)

“Page with redirect” is healthy when it represents intentional canonicalization or
planned migration behavior, and the destination URLs are indexed and performing.
You want these redirects because they compress URL variants into one preferred URL.

Scenarios where it’s normal

  • HTTP → HTTPS. You want the HTTP URLs to redirect forever.
    GSC will often show the HTTP URLs as “Page with redirect.” That’s fine.
  • www → non-www (or the reverse). Again, the non-preferred host should redirect; a one-hop Nginx sketch follows this list.
  • Trailing slash normalization. Pick one and redirect the other.
  • Old URL structure after a migration. Old paths redirect to new paths.
  • Parameter cleanup (some tracking params, session ids). Redirect or canonicalize depending on semantics.
  • Localized or region routing when done carefully (and not based on flaky IP heuristics).
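
Collapsing protocol and host normalization into a single hop is usually a small config change. A minimal Nginx sketch, assuming https://www.example.com is the canonical origin (host names, certificate paths, and the web root are placeholders):

# One-hop canonicalization: every non-canonical protocol/host variant
# jumps straight to the final URL instead of chaining hops.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.pem;   # placeholder
    ssl_certificate_key /etc/ssl/example.com.key;   # placeholder
    return 301 https://www.example.com$request_uri;
}
server {
    listen 443 ssl;
    server_name www.example.com;                    # canonical host serves content
    ssl_certificate     /etc/ssl/example.com.pem;   # placeholder
    ssl_certificate_key /etc/ssl/example.com.key;   # placeholder
    root /var/www/example;
}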

What “fine” looks like in metrics

  • Destination URLs appear under Indexed and show impressions/clicks.
  • Redirects are single-hop (A → B), not A → B → C → D.
  • Redirect type is mostly 301/308 for permanent moves (with rare exceptions).
  • Internal links, canonicals, and sitemaps overwhelmingly point to the destination URLs.
  • Server logs show Googlebot successfully fetching the destination content (200).

If those conditions are true, resist the urge to “fix” the report by trying to index the redirected URLs.
Indexing old URLs is like keeping your old pager number because some people still have it.
You don’t want that life.
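
To verify those conditions across a sample instead of one lucky URL, loop curl over a file of URLs (urls.txt is a hypothetical one-per-line export from the GSC report; output illustrative):

cr0x@server:~$ while read -r u; do
>   curl -sS -o /dev/null -L --max-redirs 10 \
>     -w "$u hops=%{num_redirects} final=%{url_effective} code=%{http_code}\n" "$u"
> done < urls.txt
http://example.com/old-a hops=1 final=https://www.example.com/a/ code=200
http://example.com/old-b hops=3 final=https://www.example.com/b/ code=200

Anything with hops > 1 or a non-200 final code goes on the fix list.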

When it hurts (and how it shows up)

“Page with redirect” hurts when it’s masking misalignment between what you want indexed and what
Google can reliably fetch, render, and trust as canonical. It also hurts when redirects are being used as duct tape
for deeper problems (duplicate content, misrouted locales, broken internal links, inconsistent protocol/host).

High-impact failure modes

1) Redirect chains and crawl waste

Chains happen when multiple normalization rules stack: HTTP → HTTPS → www → add trailing slash → rewrite to new path.
Each hop adds latency and failure probability. Google will follow chains, but not forever, and not without cost.
Chains also increase the chance you’ll accidentally create a loop.

2) Redirect to a non-indexable destination

Redirecting to a URL that returns 404, 410, or a 5xx, that is blocked by robots.txt, or that carries “noindex” is how you quietly delete pages.
GSC will show the starting URL as “Page with redirect,” but your real problem is that the destination can’t be indexed.
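
One pipeline shows the chain and the indexability of its end in a single view (retired-page is a hypothetical URL; output illustrative):

cr0x@server:~$ curl -sSIL http://example.com/retired-page | grep -iE '^(HTTP|location|x-robots-tag)'
HTTP/1.1 301 Moved Permanently
location: https://www.example.com/retired-page/
HTTP/2 404

A 404, 5xx, or noindex at the end of the chain is the real problem; the redirect is just the messenger.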

3) 302/307 used as a “permanent” redirect

Temporary redirects are not always bad, but they’re easy to misuse. If you keep a 302 in place for months, Google may
eventually treat it like a 301, or it may keep the old URL in the index longer than you want. That’s not a strategy;
it’s indecision in HTTP form.

4) Mixed signals: redirect says one thing, canonical says another

If URL A redirects to B, but B’s canonical points back to A (or to C), you’ve created a canonicalization argument.
Google will pick a winner. It might not be your favorite.
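
A quick agreement check: resolve the redirect, then compare the destination’s canonical to the destination itself (URLs hypothetical; output illustrative):

cr0x@server:~$ final=$(curl -sS -o /dev/null -L -w '%{url_effective}' https://example.com/a)
cr0x@server:~$ echo "$final"
https://www.example.com/b/
cr0x@server:~$ curl -sS "$final" | grep -i -m1 'rel="canonical"'
<link rel="canonical" href="https://example.com/a" />

Here the layers argue: the redirect says B, the canonical says A. Make both tell the same story.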

5) Redirects triggered by user-agent, geo, cookies, or JS

Conditional redirects are the fastest way to create “works on my laptop” SEO. Googlebot is not your browser.
Your CDN edge is not your origin. Your origin is not your staging environment. If the redirect depends on conditions,
you must test it the way Google sees it.

6) Sitemaps full of URLs that redirect

A sitemap is supposed to list canonical, indexable URLs. When you feed Google thousands of redirected URLs in the sitemap,
you’re effectively sending it on errands. It will comply for a while, then quietly deprioritize you.
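
Auditing a sitemap for redirecting entries takes a few lines of shell (URL and output illustrative):

cr0x@server:~$ curl -sS https://www.example.com/sitemap.xml \
>   | grep -o '<loc>[^<]*</loc>' | sed 's/<[^>]*>//g' \
>   | while read -r u; do
>       code=$(curl -sS -o /dev/null -w '%{http_code}' "$u")
>       [ "$code" != "200" ] && echo "$code $u"
>     done
301 http://example.com/old-path
302 https://www.example.com/tmp-promo/

Every line of output is a sitemap entry that should be replaced with its final canonical URL.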

Joke #1: Redirect chains are like corporate approval chains—nobody knows who added the last hop, but everyone suffers the latency.

What “hurts” looks like in outcomes

  • Preferred URLs are not indexed, or indexed intermittently.
  • Coverage report shows spikes in “Duplicate, Google chose different canonical than user” alongside redirects.
  • Performance report shows impressions dropping for migrated pages, not recovering.
  • Server logs show Googlebot hitting the redirecting URLs repeatedly instead of the canonical destinations.
  • Large portions of crawl activity are spent on redirects rather than content URLs.

Redirect physics: 301/302/307/308, caching, and canonicalization

Status codes, in the way operators think about them

  • 301 (Moved Permanently): “This is the new address.” Cached aggressively by clients and often by intermediaries. Good for canonical moves.
  • 302 (Found): “Temporarily over here.” Historically treated as temporary. Search engines have become more flexible, but your intent matters.
  • 307 (Temporary Redirect): Like 302, but the client must not change the request method (a POST stays a POST). Mostly relevant for APIs.
  • 308 (Permanent Redirect): Like 301 but method-preserving (defined in RFC 7538). Increasingly common. See the curl demo below.
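
You can watch method preservation (or the lack of it) in curl’s verbose output. With -d and no -X, curl mirrors browser behavior: POST downgrades to GET after a 301/302, but survives a 307/308. The endpoint is hypothetical; output illustrative:

cr0x@server:~$ curl -sv -o /dev/null -L -d 'x=1' https://www.example.com/form-old 2>&1 \
>   | grep -E '^> (POST|GET) |^< HTTP'
> POST /form-old HTTP/2
< HTTP/2 301
> GET /form-new HTTP/2
< HTTP/2 200

(Forcing -X POST would pin the method on every hop and hide exactly the behavior you’re testing.)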

Canonical tags vs redirects: pick your weapon

A redirect is a server-side instruction: “Don’t stay here.” A canonical is a hint embedded in a page: “Index that other one.”
If you can redirect safely (no user impact, no functional reason to keep the old URL), do it. It’s stronger and cleaner.
Use canonicals for cases where the duplicate must remain accessible (filters, sorts, tracking parameters, printable views).

But don’t mix them casually. If you redirect A → B, then B’s canonical should almost always be B. You want a single story.

Latency and reliability: why SREs care about “just a redirect”

Every hop is another request that can fail, another TLS handshake, another cache lookup, another place for a misconfigured header
to break things. Multiply by Googlebot’s crawl rate and your own user traffic and you get a real cost.
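
The tax is measurable. Compare a chained fetch against a direct one (illustrative numbers):

cr0x@server:~$ for u in http://example.com/old-path https://www.example.com/new-path/; do
>   curl -sS -o /dev/null -L \
>     -w "$u redirects=%{num_redirects} connects=%{num_connects} total=%{time_total}s\n" "$u"
> done
http://example.com/old-path redirects=2 connects=3 total=0.412s
https://www.example.com/new-path/ redirects=0 connects=1 total=0.139s

Three connection setups versus one, for the same bytes of content.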

One quote worth pinning to a dashboard: Hope is not a strategy. — traditional SRE saying.
Redirects used to “hope” search engines figure it out are a slow-motion incident.

Cache behavior you can’t ignore

Permanent redirects can be cached for a long time. If you accidentally ship a bad 301, you don’t just fix the server and move on.
Browsers, CDNs, and bots may keep following the old path. That’s why redirect changes deserve change management.
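
Before shipping a permanent redirect, check what cache lifetime rides along with it (output illustrative):

cr0x@server:~$ curl -sSI https://example.com/old-path | grep -iE '^(HTTP|cache-control|expires)'
HTTP/2 301
cache-control: max-age=31536000

A year of max-age on a wrong 301 means clients keep replaying the mistake long after the server is fixed. During risky rollouts, a short max-age on redirect responses is cheap insurance.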

Fast diagnosis playbook

When “Page with redirect” looks suspicious, don’t start by rewriting half your rewrite rules. Start with evidence.
This sequence finds the bottleneck quickly, even when multiple teams have touched the stack.

1) Confirm the final destination and hop count

  • Take a sample of affected URLs from GSC.
  • Follow redirects and record: hop count, status codes, final URL, final status.
  • If hop count > 1, you already have actionable work.

2) Validate the destination is indexable

  • Final status must be 200.
  • No “noindex” header or meta tag.
  • Not blocked by robots.txt.
  • Canonical points to itself (or a clearly intended canonical).

3) Check whether your own site is sabotaging you

  • Internal links should point to final destinations, not redirecting URLs.
  • Sitemaps should list canonical URLs, not redirecting ones.
  • Hreflang (if used) should reference canonical destinations.

4) Look at server logs for Googlebot behavior

  • Is Googlebot repeatedly hitting the redirecting URLs? That suggests discovery is still pointing to them.
  • Is Googlebot failing on the destination (timeouts, 5xx, blocked)? That’s a reliability issue, not “SEO.”

5) If it’s a migration: compare old vs new coverage

  • Old URLs should be “Page with redirect.” New URLs should be indexed.
  • If new URLs are not indexed, you likely have one of: noindex, robots block, weak internal linking, or canonical conflicts.

Hands-on tasks: commands, outputs, decisions (12+)

These are the checks I actually run. Each task includes: the command, what typical output means, and what decision you make next.
Replace example domains and paths with your own. The commands assume you can reach the site from a shell.

Task 1: Follow redirects and count hops

cr0x@server:~$ curl -sSIL -o /dev/null -w "final=%{url_effective} code=%{http_code} redirects=%{num_redirects}\n" http://example.com/Old-Path
final=https://www.example.com/new-path/ code=200 redirects=2

Meaning: Two redirects happened. Final is 200.
Decision: If redirects > 1, try to collapse rules so the first response points directly to the final URL.

Task 2: Print the full redirect chain (see each Location)

cr0x@server:~$ curl -sSIL http://example.com/Old-Path | sed -n '1,120p'
HTTP/1.1 301 Moved Permanently
Location: https://example.com/Old-Path
HTTP/2 301
location: https://www.example.com/new-path/
HTTP/2 200
content-type: text/html; charset=UTF-8

Meaning: HTTP→HTTPS then host/path normalization.
Decision: Change the first redirect to go straight to the final host/path, not an intermediate.

Task 3: Detect redirect loops

cr0x@server:~$ curl -sSIL --max-redirs 10 https://www.example.com/loop-test/
curl: (47) Maximum (10) redirects followed

Meaning: Loop or excessive chain.
Decision: Treat as an incident: identify the rule set (CDN, load balancer, origin) creating the loop and fix it before thinking about indexing.

Task 4: Verify canonical tag on destination

cr0x@server:~$ curl -sS https://www.example.com/new-path/ | grep -i -m1 'rel="canonical"'
<link rel="canonical" href="https://www.example.com/new-path/" />

Meaning: Canonical points to itself. Good.
Decision: If canonical points elsewhere unexpectedly, fix templates or headers; otherwise Google may ignore your redirect intent.

Task 5: Check for noindex on destination (meta tag)

cr0x@server:~$ curl -sS https://www.example.com/new-path/ | grep -i -m1 'noindex'

Meaning: No output means no “noindex” meta tag was found in the page at all.
Decision: If you do see noindex, stop. That’s why it won’t index. Fix release config, CMS flags, or environment leakage from staging.

Task 6: Check X-Robots-Tag header for noindex

cr0x@server:~$ curl -sSIL https://www.example.com/new-path/ | grep -i '^x-robots-tag'
X-Robots-Tag: index, follow

Meaning: Headers allow indexing.
Decision: If you see “noindex”, fix it at the source (app, CDN rules, or security middleware). Headers override good intentions.

Task 7: Confirm robots.txt isn’t blocking the destination

cr0x@server:~$ curl -sS https://www.example.com/robots.txt | sed -n '1,120p'
User-agent: *
Disallow: /private/

Meaning: Basic robots.txt shown.
Decision: If the destination path is disallowed, Google may still see the redirect but won’t crawl content. Update robots.txt and re-test in GSC.

Task 8: Check whether your sitemap lists redirecting URLs

cr0x@server:~$ curl -sS https://www.example.com/sitemap.xml | grep -n 'http://example.com' | head
42:  <loc>http://example.com/old-path</loc>

Meaning: Sitemap contains non-canonical URLs (HTTP host).
Decision: Regenerate sitemaps to list only final canonical URLs. This is low-risk, high-return cleanup.

Task 9: Spot internal links that still point to redirecting URLs

cr0x@server:~$ curl -sS https://www.example.com/ | grep -oE 'href="http://example.com[^"]+"' | head
href="http://example.com/old-path"

Meaning: Home page still links to old HTTP URL.
Decision: Fix internal link generation (templates, CMS fields). Internal links are your own crawl budget donation to the redirect fund.

Task 10: Check redirect type (301 vs 302) at the edge

cr0x@server:~$ curl -sSIL https://example.com/old-path | head -n 5
HTTP/2 302
location: https://www.example.com/new-path/

Meaning: Temporary redirect in place.
Decision: If the move is permanent, change to 301/308. If it truly is temporary (maintenance, A/B), make sure it’s time-bounded and monitored.

Task 11: Confirm the destination returns 200 consistently (not 403/500 sometimes)

cr0x@server:~$ for i in {1..5}; do curl -sS -o /dev/null -w "%{http_code} %{time_total}\n" https://www.example.com/new-path/; done
200 0.142
200 0.151
200 0.139
500 0.312
200 0.145

Meaning: Intermittent 500. That’s a reliability bug.
Decision: Don’t argue with GSC until the origin is stable. Fix upstream errors, then request reindexing.

Task 12: Inspect Nginx redirect rules for double normalization

cr0x@server:~$ sudo nginx -T 2>/dev/null | grep -nE 'return 301|rewrite .* permanent' | head -n 20
123:    return 301 https://$host$request_uri;
287:    rewrite ^/Old-Path$ /new-path/ permanent;

Meaning: Multiple redirect directives may stack (protocol + path).
Decision: Combine into a single canonicalization redirect where possible, or ensure order prevents multi-hop.

Task 13: Inspect Apache rewrite rules for unintended matches

cr0x@server:~$ sudo apachectl -S 2>/dev/null | sed -n '1,80p'
VirtualHost configuration:
*:443                  is a NameVirtualHost
         default server www.example.com (/etc/apache2/sites-enabled/000-default.conf:1)

Meaning: Confirms which vhost is default; wrong default vhost can create host redirects you didn’t plan.
Decision: Ensure canonical host vhost is correct and that non-canonical hosts explicitly redirect to it in one hop.

Task 14: Use access logs to see whether Googlebot is stuck on redirecting URLs

cr0x@server:~$ sudo awk '$9 ~ /^30/ && $0 ~ /Googlebot/ {print $4,$7,$9,$11}' /var/log/nginx/access.log | head
[27/Dec/2025:09:12:44 /old-path 301 "-" 
[27/Dec/2025:09:12:45 /old-path 301 "-" 

Meaning: Googlebot repeatedly requests the redirecting URL. Discovery sources still point there.
Decision: Fix internal links and sitemaps; consider updating external references if you control them (profiles, owned properties).

Task 15: Check for inconsistent behavior by user-agent (dangerous “smart” redirects)

cr0x@server:~$ curl -sSIL -A "Mozilla/5.0" https://www.example.com/ | head -n 5
HTTP/2 200
content-type: text/html; charset=UTF-8
cr0x@server:~$ curl -sSIL -A "Googlebot/2.1" https://www.example.com/ | head -n 5
HTTP/2 302
location: https://www.example.com/bot-landing/

Meaning: Different responses for Googlebot. That’s a red flag unless you have a very legitimate reason.
Decision: Remove UA-based redirect logic; it can look like cloaking, and it creates indexing instability.

Task 16: Validate HSTS can’t be blamed for your redirect confusion

cr0x@server:~$ curl -sSIL https://www.example.com/ | grep -i '^strict-transport-security'
Strict-Transport-Security: max-age=31536000; includeSubDomains

Meaning: HSTS is enabled, so browsers will force HTTPS after first contact.
Decision: Don’t “debug” redirects only in a browser; use curl from a clean environment. HSTS can hide HTTP behavior and make you chase ghosts.

Three corporate mini-stories from the redirect trenches

Incident: the wrong assumption (“Google will just figure it out”)

A mid-sized SaaS company migrated from a legacy CMS to a modern framework. The plan looked clean:
old URLs would redirect to new ones, and the new site would be faster. Engineering implemented redirects in the app layer,
and QA verified in a browser. Everyone went home.

In week one, Search Console started filling with “Page with redirect” and impressions dropped for high-value pages.
The SEO team panicked and demanded “remove redirects.” That would have been the wrong fire extinguisher.
The SRE on call did the unglamorous thing: pulled server logs for Googlebot and replayed requests with curl.

The assumption that broke them: “If it works in the browser, Googlebot sees the same thing.”
Their CDN had bot mitigation rules that treated unknown user-agents differently during traffic spikes.
When the origin was slow, the edge returned a temporary redirect to a generic “please try again” page—fine for humans,
terrible for indexing. Googlebot followed the redirect and found thin content.

The fix wasn’t “SEO magic.” It was production hygiene:
they exempted verified bots from the mitigation redirect, improved caching for the new pages, and stopped redirecting to a generic fallback.
After that, “Page with redirect” remained for the old URLs (expected), while the new URLs stabilized and reindexed.

Optimization that backfired: collapsing query parameters with redirects

An e-commerce org had a parameter problem: endless URLs like ?color=blue&sort=popular&ref=ads.
Crawl stats looked ugly, and someone proposed a “simple” fix: redirect any URL with parameters to the parameterless category page.
One rewrite rule to rule them all.

It shipped fast. Too fast. Conversion dipped. Organic traffic on long-tail category variants fell off a cliff.
Search Console showed many “Page with redirect,” but the real damage was that they were redirecting away real user intent.
Some parameter combinations represented meaningful filtered pages that users searched for (and that had unique inventory).

Worse, the redirect rule triggered chains: parameter URL → clean category → geo-based redirect → localized category.
Googlebot spent more time bouncing than crawling. Latency increased. The site looked “unstable.”

The rollback was uncomfortable but necessary. They replaced the blunt redirect with a policy:
strip only known tracking parameters (utm/ref), keep functional filters indexable only where content justified it,
and use canonical tags for duplicates. Suddenly “Page with redirect” was limited to the junk URLs, not the revenue ones.

Boring but correct practice: sitemap and internal link hygiene saved the day

A publishing platform did a domain consolidation: four subdomains into one canonical host.
They implemented 301 redirects and expected turbulence. The twist: they treated it like an operational change, not an SEO wish.

Before launch, they generated a mapping table (old → new), ran automated redirect tests, and updated internal links in templates.
Not just the nav. Footer, related-article modules, RSS feeds, everything. They also regenerated sitemaps to include only canonical URLs
and shipped them with the same deploy.

After launch, Search Console filled with “Page with redirect” for old hosts (as expected), but the new host indexed quickly.
Crawl stats showed Googlebot moved on from the old URLs faster than in their previous migrations.
Their log-based monitoring showed a steep decline in redirect hits over weeks, meaning discovery sources were clean.

The lesson wasn’t glamorous: the boring work prevents the exciting outage.
Redirects are a bridge. You still have to move the traffic, the links, and the signals to the new side.

Common mistakes: symptom → root cause → fix

1) Symptom: “Page with redirect” spikes after a deploy

Root cause: A new rule introduced multi-hop redirects or loops (often trailing slash + locale + host normalization).
Fix: Run redirect-chain tests on a URL sample; collapse to one hop; add regression tests in CI for canonical URLs.
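
A regression test can be a mapping file plus a loop that flags wrong hops (redirect-map.txt is a hypothetical file of “old new” pairs; output illustrative):

cr0x@server:~$ cat redirect-map.txt
http://example.com/old-a https://www.example.com/a/
http://example.com/old-b https://www.example.com/b/
cr0x@server:~$ while read -r old new; do
>   out=$(curl -sS -o /dev/null -L -w '%{num_redirects} %{url_effective} %{http_code}' "$old")
>   [ "$out" = "1 $new 200" ] || echo "FAIL: $old -> $out"
> done < redirect-map.txt
FAIL: http://example.com/old-b -> 2 https://www.example.com/b/ 200

Wire it into CI so a normalization rule that adds a hop fails the build, not production.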

2) Symptom: Old URLs show “Page with redirect,” but new URLs are “Crawled — currently not indexed”

Root cause: Destination pages are low-quality/thin, blocked, slow, or have canonical/noindex contradictions.
Fix: Verify destination returns 200, is indexable, has self-canonical, and content is substantial. Fix performance and templates.

3) Symptom: GSC shows “Page with redirect” for URLs that should be final

Root cause: Internal links or sitemap are pointing to non-canonical variants, so Google keeps discovering the wrong version first.
Fix: Update internal links, sitemap, hreflang, and structured data to reference canonical destinations only.

4) Symptom: Redirects work in browser, fail for Googlebot

Root cause: Conditional logic based on user-agent, cookies, geo, or bot mitigation at CDN/WAF layer.
Fix: Test with Googlebot UA, compare headers, remove conditional redirects, and ensure the same canonicalization applies consistently.

5) Symptom: Pages disappear after you “cleaned up” parameters

Root cause: Redirect rule collapsed meaningful URLs into generic pages, deleting long-tail relevance.
Fix: Only redirect/remove tracking parameters; handle functional filters with canonicals, noindex rules, or allow indexation selectively.

6) Symptom: Redirecting URLs stay in the index for months

Root cause: Temporary redirects (302/307) used for permanent moves, or inconsistent canonical signals.
Fix: Use 301/308 for permanent moves; ensure destination is canonical; ensure internal links and sitemaps point to destination.

7) Symptom: Redirects cause intermittent 5xx and crawling drops

Root cause: Redirect handling at the app layer triggers expensive logic; origin overload; cache misses; TLS handshake overhead on each hop.
Fix: Move redirects to edge/web server where possible; cache redirects; reduce hops; monitor p95 latency on redirect endpoints.

Joke #2: The quickest way to find an undocumented redirect rule is to remove it and wait for someone important to notice.

Checklists / step-by-step plan

Checklist A: You see “Page with redirect” and want to know if you should care

  1. Pick 20 URLs from the report (mix of important and random).
  2. Run curl with redirect counting. If >1 hop on many, care.
  3. Confirm final URLs return 200 and are indexable (noindex/robots/canonical).
  4. Check whether the final URLs are indexed and getting impressions.
  5. If the final URLs are healthy, treat “Page with redirect” as informational.

Checklist B: Redirect cleanup that won’t cause a new incident

  1. Inventory current redirect rules across layers: CDN/WAF, load balancer, web server, app.
  2. Define canonical URL policy: protocol, host, trailing slash, lowercase, locale patterns.
  3. Ensure single-hop to canonical whenever possible.
  4. Update internal links and templates to use canonical URLs.
  5. Regenerate sitemaps to list canonical URLs only.
  6. Deploy with monitoring: redirect rate, 4xx/5xx on destination, latency.
  7. After deploy, log-sample Googlebot and verify it reaches 200 pages.

Checklist C: Migration-specific plan (domains or URL structures)

  1. Create a mapping file (old → new) for all high-value URLs; don’t rely on regex alone.
  2. Implement 301/308 redirects and test for loops and chains.
  3. Keep content parity: titles, headings, structured data where applicable.
  4. Ensure new pages have self-referencing canonicals.
  5. Switch sitemaps to new URLs at launch.
  6. Monitor indexing: new pages should rise as old pages become “Page with redirect.”
  7. Keep redirects long enough (months to years depending on ecosystem), not two weeks because someone wants “clean configs.”

Interesting facts & historical context

  • Fact 1: The HTTP 301 and 302 status codes date back to HTTP/1.0 (RFC 1945, 1996); the web has been moving pages around since basically forever.
  • Fact 2: 307 arrived with HTTP/1.1 and 308 with RFC 7538 (2015) to clarify method-preserving behavior; they matter more for APIs but show up in modern stacks.
  • Fact 3: Search engines historically treated 302 as “don’t pass signals,” but over time they became more flexible when the redirect persists.
  • Fact 4: HSTS can make HTTP→HTTPS redirects invisible in browser testing because the browser upgrades to HTTPS before making the request.
  • Fact 5: CDNs often implement redirects at the edge; that can be faster, but it can also create hidden rule interactions with origin redirects.
  • Fact 6: Early SEO “canonicalization” was done with redirects out of necessity; the rel="canonical" hint only arrived in 2009 and then became a standard tool.
  • Fact 7: Redirect chains became more common as stacks layered: CMS + framework + CDN + WAF + load balancer, each “helping” with normalization.
  • Fact 8: Bots don’t behave like users: they can crawl at scale, retry aggressively, and amplify small inefficiencies into large infrastructure costs.

FAQ

1) Should I try to get rid of “Page with redirect” in Search Console?

Not as a goal. Your goal is that the destination URLs are indexed and performing. Redirected URLs being “not indexed” is expected.
Clean up only when the redirect behavior is inefficient or inconsistent.

2) Is “Page with redirect” a penalty?

No. It’s a classification. The penalty is what you do next—like keeping chains, redirecting to thin pages, or sending mixed canonical signals.

3) How many redirects are too many?

In practice: aim for one hop. Two is survivable. More than that is a reliability smell, and it can slow crawling and waste budget.
If you see 3+, fix it unless there’s a very specific reason.

4) Does a 302 hurt SEO compared to a 301?

Sometimes. If the move is permanent, use 301 or 308. A long-lived 302 can work, but it communicates uncertainty and can delay consolidation.
Don’t build your indexing strategy on “Google probably treats it like a 301 eventually.”

5) Why is my sitemap showing URLs that GSC says are “Page with redirect”?

Because your sitemap generator is using the wrong base URL (HTTP vs HTTPS, wrong host) or it’s outputting legacy paths.
Fix the generator so sitemaps list only canonical, final URLs. That’s one of the easiest wins in this whole topic.

6) What if I need both versions accessible (like filtered pages), but I don’t want them indexed?

Don’t redirect them if they’re functionally needed. Keep them accessible, then use canonical tags or noindex rules deliberately.
Redirects are for “this should not exist as a landing page.”

7) Can “Page with redirect” be caused by JavaScript redirects?

Yes, but that’s the hard mode. JS-based redirects can be slower, less reliable for bots, and can look suspicious if abused.
Prefer server-side redirects unless you have a strong reason.

8) How long should I keep redirects after a migration?

Longer than you think. Months at minimum; often a year or more for significant sites, especially if old URLs are widely linked.
Removing redirects early is how you turn your migration into a permanent 404 harvest.

9) Why are redirected URLs still getting crawled a lot?

Google keeps finding them via internal links, sitemaps, or external links. Internal sources are under your control; fix them first.
External links take time to decay. The goal is to stop feeding the problem.

10) Could “Page with redirect” hide a security or WAF misconfiguration?

Absolutely. WAFs sometimes redirect suspicious traffic, rate-limited traffic, or certain user agents. If Googlebot gets that treatment,
you’ll see indexing instability. Confirm behavior with user-agent tests and edge logs.

Conclusion: practical next steps

“Page with redirect” is not your enemy. It’s a flashlight. Sometimes it’s shining on URLs you intentionally retired. Great.
Sometimes it’s exposing redirect debt: chains, loops, mixed canonicals, and “temporary” redirects that have become permanent out of laziness.

Next steps that pay off fast:

  1. Sample 20–50 URLs from the report and measure hop counts with curl.
  2. Confirm destinations are indexable (200, no noindex, not blocked, self-canonical).
  3. Fix internal links and sitemaps to point to final canonical URLs.
  4. Collapse redirects to one hop and standardize on 301/308 for permanent moves.
  5. Watch logs: Googlebot should spend less time on redirects and more time on real pages.

If you treat redirects like production infrastructure—observable, testable, and boring—you’ll get the best possible SEO outcome:
Google spends its time on your content instead of your plumbing.
