Do Backlinks Have to Be Indexed in Google to Count as a Ranking Factor?

Updated: January 2025

Verdict

No. A backlink can pass ranking signals even if the linking page isn't indexed—as long as Google can crawl the page and see a normal, followable <a href> link. Indexing (showing a page in results) and link-signal extraction are separate processes.

Understanding the Key Differences

Crawling

Googlebot fetches a page's HTML and parses links.

Indexing

Google decides whether to store/show that page in search results.

Link Signal

Credit (e.g., PageRank plus anchor text and surrounding context) that Google may use for ranking when it crawls the page and encounters an eligible link.

Evidence (Plain-English Summary)

  • Links can contribute signals when Google fetches the HTML; they don't need to appear in the index.
  • Pages blocked by robots.txt aren't fetched → links on them aren't seen → no signals.
  • noindex,follow pages are crawlable and can pass signals, but if left noindexed long-term they're crawled less, so any benefit can fade.
  • rel="nofollow", rel="ugc", and rel="sponsored" generally prevent a link from passing ranking credit (Google treats them as hints, but don't bank on any equity).
  • The link must be a real, crawlable <a href> (not JS-only or pseudo-links).

Best Practices

Ensure Crawlability

Don't place links on pages blocked by robots.txt or behind hard logins/paywalls.
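One way to verify this in practice: Python's standard urllib.robotparser can test a URL against a site's robots.txt rules much the way a crawler would. A minimal sketch; the rules and URLs below are made-up examples.

```python
from urllib import robotparser

def is_fetchable(robots_txt: str, url: str, agent: str = "Googlebot") -> bool:
    """Return True if the given robots.txt rules allow `agent` to fetch `url`."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

# Hypothetical robots.txt that disallows a whole section
rules = """User-agent: *
Disallow: /partners/
"""

# A link placed under /partners/ is never crawled, so it passes no signals.
print(is_fetchable(rules, "https://example.com/partners/links.html"))  # False
print(is_fetchable(rules, "https://example.com/blog/post.html"))       # True
```

In a real audit you would fetch the linking site's live robots.txt first; the parser logic stays the same.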

Use Real HTML Links

<a href="https://yourpage.com">Descriptive anchor</a> rendered in the DOM.

Avoid Credit-Suppressing rel Values

Use nofollow/ugc/sponsored only when disclosure requires it.
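To audit placements programmatically, a small parser can flag links whose rel values suppress credit. This is a rough sketch using Python's standard html.parser; the LinkAuditor class and sample markup are illustrative, not a real tool.

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect <a href> links and flag rel values that suppress ranking credit."""
    SUPPRESSING = {"nofollow", "ugc", "sponsored"}

    def __init__(self):
        super().__init__()
        self.links = []  # list of (href, passes_credit) tuples

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        href = a.get("href")
        if not href:
            return
        # rel can hold multiple space-separated tokens, e.g. rel="nofollow ugc"
        rel = set((a.get("rel") or "").lower().split())
        self.links.append((href, rel.isdisjoint(self.SUPPRESSING)))

auditor = LinkAuditor()
auditor.feed('<a href="https://yourpage.com" rel="nofollow">x</a>'
             '<a href="https://yourpage.com/guide">y</a>')
print(auditor.links)
# [('https://yourpage.com', False), ('https://yourpage.com/guide', True)]
```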

Prefer Durable, Discoverable Pages

Articles/resources that are internally linked and recrawled.

Stabilize the Context

Keep the link live, avoid frequent URL changes, and maintain internal links to the linking page if you control it (e.g., partner sites, microsites).

Common Pitfalls

Robots-Blocked Sources

"We placed your link, but the section is disallowed in robots.txt." → No crawl, no value.

Perma-noindex Placements

Okay temporarily; over the long term the page loses crawl frequency, and any benefit it passes becomes unreliable.

JS-Only or Faux Links

Buttons, data-href, or onclick handlers that never render a proper <a> don't pass equity.
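The distinction is easy to demonstrate: an HTML parser (roughly what a crawler's link extractor runs) only sees genuine <a href> elements. A quick sketch with Python's standard html.parser, using made-up markup:

```python
from html.parser import HTMLParser

class CrawlableLinks(HTMLParser):
    """Keep only links a parser can see: <a> tags with a real href attribute."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

markup = """
<a href="https://example.com/guide">real link</a>
<button onclick="location='https://example.com/faux'">faux link</button>
<span data-href="https://example.com/pseudo">pseudo link</span>
<a>anchor with no href</a>
"""
p = CrawlableLinks()
p.feed(markup)
print(p.hrefs)  # ['https://example.com/guide'] — only the real <a href> survives
```

The button, the data-href span, and the href-less anchor all vanish: no href in an <a> tag, no link for the crawler.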

Homepage Dumps

Moving all old links to a generic page (or irrelevant redirects) can behave like soft-404s—minimal consolidation.

Relying on "site:" Checks

A page's presence or absence in a site: search proves nothing about link value; crawlability is what matters.

Quick Decision Guide

  1. Is the linking page fetchable by Google (not robots-blocked, not hard-gated)?
    No → Doesn't count.
    Yes → Continue.
  2. Is there a standard <a href> visible to crawlers?
    No → Unreliable/doesn't count.
    Yes → Continue.
  3. Is the link free of nofollow/ugc/sponsored?
    No → Generally won't pass credit.
    Yes → Likely to pass signals—even if the page isn't indexed.
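The three steps above can be sketched as a simple function (a hypothetical helper for illustration, not an official check):

```python
def link_passes_signals(fetchable: bool, real_anchor: bool, rel_values: set) -> str:
    """Walk the three-step decision guide and return a verdict string."""
    if not fetchable:
        return "doesn't count: page can't be crawled"
    if not real_anchor:
        return "unreliable: no standard <a href> for crawlers"
    if rel_values & {"nofollow", "ugc", "sponsored"}:
        return "generally won't pass credit"
    return "likely to pass signals, indexed or not"

# A crawlable page, a real <a href>, no suppressing rel values:
print(link_passes_signals(True, True, set()))
# likely to pass signals, indexed or not
```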

Implementation Checklist

  • ✓ Linking page is not disallowed in robots.txt.
  • ✓ Link is a normal, crawlable <a href>.
  • ✓ Link doesn't use nofollow/ugc/sponsored (unless required).
  • ✓ Linking page is internally linked (not orphaned) and shows up in normal crawl paths.
  • ✓ If noindex is used, it's temporary and there's a plan to remove/move the link.

TL;DR for the Hub

  • Indexed? Not required.
  • Crawlable? Required.

If Google can't fetch the page's HTML, it can't see the link—and it can't help your rankings.