Data Report · 2026

Is Gemini Biased Toward Google?

A directional analysis of commercial recommendation queries across AI Overviews, AI Mode, and Gemini — and what it means for AI search visibility.

Tags: AI Overviews · AI Mode · Gemini · AEO · Brand Visibility

Rankability · rankability.com · April 2026
[Hero graphic: two example AI Overview cards. Broad query "best web browser": 1. Chrome (Google), 2. Firefox, 3. Safari. Constrained query "best browser for privacy": 1. Brave (winner), 2. Firefox, 3. Chrome (not recommended). Pattern shown: broad queries produce a strong Google signal; constrained queries produce high objectivity.]

Google's AI products are becoming a bigger part of how people discover, compare, and choose brands.

That creates an important question for marketers, SEO teams, and agencies:

When Google's AI recommends products, does it favor Google-owned products?

To explore that question, Rankability analyzed a small sample of commercial recommendation queries across Google's AI surfaces, including Google AI Overviews, AI Mode, and Gemini.

The goal was not to prove systematic bias. This was not a statistically definitive study. Instead, this was designed as a directional analysis to understand how Google-owned products appear inside Google's AI-generated answers.

We wanted to know:

  1. How often do Google-owned products appear?
  2. When they appear, are they recommended as the best option?
  3. Are they framed positively, neutrally, or negatively?
  4. Are obvious competitors included or omitted?
  5. What does this reveal about AI search visibility for brands?

The short answer:

Google-owned products appeared frequently and were often described positively, but Google's AI answers were not blindly pro-Google. In many cases, Google's AI recommended non-Google competitors as the best overall option when the query intent supported it.

The more important takeaway for SEO and AEO teams is this:

AI visibility is not binary. It is not enough to know whether a brand is mentioned. Brands need to understand where they appear, what category they are assigned to, how they are described, which competitors appear beside them, and whether they are recommended for the right use case.

Executive Summary

Across the reviewed dataset, Google products appeared often in Google AI answers — but inclusion rarely meant dominance. Six patterns stood out.


Key Findings

Is Gemini biased toward Google? Here's what a directional analysis of commercial recommendation queries revealed.

1. Google products appeared frequently (high inclusion rate). Google-owned products were detected across nearly every reviewed category — browsers, email, productivity, AI chatbots, and search — wherever Google owns a relevant product.

2. Broad queries showed the strongest Google preference (strong preference signal). Open-ended queries like "best web browser," "best email service," and "best search engine" consistently placed Google products at or near the top with positive framing.

3. Direct comparison queries were more balanced (higher objectivity). Queries like "Chrome vs Safari" and "Google Workspace vs Microsoft 365" split recommendations by use case rather than declaring Google the winner.

4. Privacy and battery queries showed more objectivity (constraint queries = balanced). When queries included constraint signals like privacy or battery life, Google products were often ranked lower or explicitly flagged as not recommended — Chrome was not the top privacy browser.

5. Google products were often category winners, not always overall winners (category ownership pattern). AI answers frequently assigned Google products to specific use-case categories — developer tools, ecosystem integration, real-time collaboration — without naming them the universal best option.

6. AI visibility is about positioning, not just mentions (core AEO insight). AI answers don't just mention brands. They position them — assigning sentiment, placement, category ownership, competitor context, and caveats that shape how brands are perceived.

Methodology

Study Design

How We Ran the Analysis

1. Selected commercial recommendation queries. Focused on categories where Google owns a relevant product — browsers, email, productivity, AI chatbots, smartphones, video conferencing, and search engines.

2. Ran queries across Google AI Overviews, AI Mode, and Gemini. Each query was tested across all three Google AI surfaces to observe how recommendation behavior varied by platform and query type.

3. Identified whether Google-owned products appeared. Recorded inclusion, placement (rank position), and whether Google products were named the best, a top option, or merely referenced.

4. Classified recommendation strength, placement, and sentiment. Each AI answer was reviewed for whether Google products were framed positively, neutrally, with caveats, or negatively — and whether they owned a specific category.

5. Compared Google products against named competitors. Tracked which non-Google competitors appeared in the same answer, how they were positioned, and whether any obvious alternatives were omitted.

6. Synthesized patterns across query types. Looked for recurring patterns across broad, constrained, and direct comparison queries to identify where Google preference was strongest and weakest.

Limitation: This was a small directional study, not a statistically definitive analysis. Findings should be read as patterns worth watching, not proof of systematic bias.

Rankability tested commercial recommendation queries across Google's AI products.

The reviewed dataset included examples from three Google AI surfaces: Google AI Overviews, Google AI Mode, and Google Gemini.

The analysis focused on queries where Google owns a relevant product or competes directly in the category.

Examples included:

  • Smartphones: Pixel, Android
  • Browsers: Chrome
  • Email: Gmail
  • Productivity: Google Workspace, Docs, Sheets, Drive, Meet
  • AI chatbots: Gemini
  • Search engines: Google Search
  • Video conferencing: Google Meet

What we analyzed

Each AI answer was reviewed through the following lenses:

Lens | Question
Google inclusion | Was a Google-owned product mentioned?
Recommendation status | Was the Google product actually recommended, or merely referenced?
Placement | Was the Google product ranked #1, top 3, category-specific, or lower?
Sentiment | Was the framing positive, neutral, mixed, or negative?
Query fit | Did the recommendation match the query intent?
Competitor context | Which non-Google competitors appeared in the answer?
Objectivity | Did the answer include caveats, tradeoffs, or situations where competitors were better?

Important limitation

This was a small directional study, not a statistically definitive analysis.

The goal was to identify patterns worth watching, not to claim that Google's AI systems are systematically biased across all possible queries.

The results should be interpreted as an early look at how Google AI surfaces recommend brands in commercial categories where Google has a direct product interest.


"Google AI shows product preference — but it isn't blindly pro-Google."
(Core finding, Rankability directional study)

The Core Finding: Google AI Shows Product Preference, But Not Blindly

A simplistic version of the study might ask:

Does Google AI recommend Google products?

The better question is:

When Google AI recommends Google products, does it recommend them as the best option, a category-specific option, or merely one option among many?

The data showed that Google products were often placed into specific recommendation categories rather than always being pushed as the overall winner. A few representative examples:

Google Product | Common AI Framing
Google Pixel | Best for AI features, still photography, clean Android, value
Google Chrome | Best for extensions, compatibility, developer tools, Google ecosystem
Gmail | Best for overall usability, free email, Google ecosystem integration
Google Workspace | Best for real-time collaboration, shared docs, remote teams

This suggests that AI-generated recommendations are often category-assignment systems.

The AI does not merely say which brand is best. It decides what each brand is best for.

That has major implications for AI SEO and AEO.


Result Set

Query Outcome Matrix

How Google-owned products performed across the reviewed sample of commercial recommendation queries.

Query | Google Product | #1 Overall? | Sentiment | Bias Signal | Objectivity

Broad recommendation queries
best web browser | Chrome | Yes | Positive w/ caveats | Strong | Moderate
best email service | Gmail | Yes | Positive w/ caveats | Strong | Moderate
best search engine | Google Search | Yes | Positive w/ caveats | Strong | Moderate
best AI chatbot | Gemini | No | Positive | Low–Mod. | Strong
best ChatGPT alternative | Gemini | No | Positive | Low–Mod. | Strong
best spreadsheet software | Google Sheets | No | Positive | Low–Mod. | Strong
best video conferencing software | Google Meet | No | Positive | Low | Strong

Constraint-based queries
best browser for privacy | Chrome | No | Negative / not rec. | Low | Strong
best browser for battery life | Chrome | No | Mixed / negative | Low | Strong
best email provider for privacy | Gmail | No | Negative / cautionary | Low | Strong

Direct comparison queries
Chrome vs Safari | Chrome | No clear winner | Neutral / balanced | Low | Strong
Google Workspace vs Microsoft 365 | Google Workspace | No clear winner | Neutral to positive | Low | Strong

Finding 1: Broad Queries Produced the Strongest Google Preference Signals

Preference Analysis

Where Google Preference Was Strongest and Weakest

Stronger Google preference:
1. best web browser (Strong)
2. best email service (Strong)
3. best search engine (Strong)
4. best browser for developers (Strong)
5. best alternative to iPhone (Strong)
Pattern observed: Open-ended broad queries gave AI more latitude — Google products surfaced prominently with positive framing and minimal caveats.

Lower preference / higher objectivity:
1. best browser for privacy (Low signal)
2. best browser for battery life (Low signal)
3. best email provider for privacy (Low signal)
4. best AI chatbot (Low–Mod.)
5. best ChatGPT alternative (Low–Mod.)
6. best spreadsheet software (Low–Mod.)
7. Chrome vs Safari (Low signal)
8. Google Workspace vs M365 (Low signal)
Pattern observed: Constraint-based and direct comparison queries surfaced non-Google leaders, with Google products often flagged or split by use case.

Takeaway: Stronger Google preference appeared on broad recommendation queries — but not on constrained or direct comparison queries.

The strongest Google preference signals appeared on broad, general recommendation queries — see the "Stronger Google preference" column above for the full list.

These queries matter because they represent classic commercial discovery behavior. They are also the types of queries brands care about most because they can influence buyers early in the consideration process.

Example: Best web browser

For the query best web browser, Chrome was named the best overall browser in multiple Google AI surfaces.

However, the answer was not entirely promotional. The AI also acknowledged Chrome's drawbacks, including privacy concerns and high RAM usage.

This is an important pattern:

Google products can receive top placement while still being described with caveats.

That means Google AI may favor a Google product in placement while remaining somewhat balanced in sentiment.

Example: Best email service

For best email service, Gmail received strong overall placement for usability, spam filtering, storage, and ecosystem integration.

But when the query shifted to best email provider for privacy, Gmail was not recommended. Instead, Proton Mail, Tuta, and other privacy-focused providers were recommended.

This suggests the recommendation changes dramatically when query intent becomes more specific.

Example: Best search engine

For best search engine, Google was positioned as the best general search engine. This is a strong Google preference signal, but also a defensible one because Google remains the dominant search engine and is widely associated with general search quality, index size, and local results.

However, the AI answers also separated general search from privacy, AI research, and independent search.

Non-Google alternatives received category wins:

Use Case | Recommended Non-Google Alternatives
Privacy | DuckDuckGo, Brave Search, Startpage
AI research | Perplexity
Independent index | Brave Search
Paid/pro search | Kagi

The lesson is not simply that Google won.

The lesson is that AI answers often create multiple winners based on use case.


Finding 2: Google Products Were Often Positive, But Usually Category-Specific

When Google products appeared, the sentiment was usually positive.

But that positive sentiment was often attached to a narrow use case.

Smartphone queries

Pixel appeared frequently across phone-related queries, but it usually did not win best overall.

Query | Google Product | Google Placement | Overall Winner / Stronger Competitor
best smartphones | Pixel | Category-specific, AI features | iPhone overall, Samsung power user
best Android phones | Pixel | Top 3 / category winner | Samsung overall
best phone for photography | Pixel | Best for AI/stills | iPhone overall/video, Xiaomi hardware
best phone for battery life | Pixel | Not recommended | Oppo / OnePlus / iPhone
best phone for business users | Pixel | Best for AI/software | iPhone security/ecosystem, Samsung multitasking
best phone for creators | Pixel | Best for photography/AI editing | iPhone video/socials, Samsung Android all-rounder
best smartphone under $800 | Pixel | AI/camera category | Samsung or iPhone overall/value
best alternative to iPhone | Pixel | Best overall alternative | Strong Google preference signal
best alternative to iPhonePixelBest overall alternativeStrong Google preference signal

The most common Pixel associations were:

  • AI features
  • AI editing
  • Computational photography
  • Still photography
  • Clean Android
  • Software simplicity
  • Value
  • Google Workspace fit

This is not the same as saying Pixel was always recommended as the best phone.

In fact, the opposite was often true. Apple and Samsung frequently received stronger overall placements.

Key smartphone takeaway

Google Pixel appeared frequently and positively, but mostly as a category-specific recommendation rather than the universal best smartphone.

This is important because AI visibility is often about owning a niche inside the answer.

A brand may not win "best overall," but it can still win:

  • Best for AI
  • Best for creators
  • Best for value
  • Best for business users
  • Best for Android users
  • Best for simplicity

That is how brands get visibility in AI-generated recommendation lists.


Finding 3: Constraint-Based Queries Forced More Objectivity

Query Behavior

Broad vs Constrained Queries

Broad commercial queries (higher Google preference):
  • best web browser
  • best email service
  • best search engine
  • best alternative to iPhone
Behavior: Open-ended queries gave AI more latitude to default to category leaders — Google products surfaced prominently with positive framing.

Constrained & comparison queries (more balanced):
  • best browser for privacy
  • best browser for battery life
  • best email provider for privacy
  • Chrome vs Safari
  • Google Workspace vs Microsoft 365
Behavior: Constraint signals (privacy, battery, comparison) forced AI to surface non-Google leaders or split recommendations by use case.

The more specific the query, the more balanced the AI answer tended to become.

The clearest objectivity signals came from queries with stronger constraints — privacy, battery life, platform-specific use cases, and direct comparisons. The "Constrained & comparison queries" column above lists the representative queries from this group.

This is one of the most important findings in the study: Google AI did not force Google products into answers when the product did not fit the query intent.

Example: Privacy browser queries

For privacy-focused browser queries, Google AI did not recommend Chrome as the top option.

Instead, privacy-focused browsers such as Brave, Tor, Mullvad, LibreWolf, Firefox, and DuckDuckGo were recommended.

In one answer, Chrome was specifically criticized for tracking behavior.

That matters because it shows Google AI can express negative sentiment toward a Google-owned product when the query intent calls for it.

Example: Battery life queries

For phone battery life, Pixel did not appear as a leading recommendation.

For browser battery life, Chrome was not recommended as the top option. Microsoft Edge and Safari received stronger placements depending on operating system.

This suggests that query intent can override product ownership.

Key constraint-query takeaway

The more specific the query, the more objective the AI answer tended to become.

Broad queries created more room for Google products to receive prominent placement. Narrow, constraint-based queries forced the answer to reflect the product's actual strengths and weaknesses.


Finding 4: Direct Comparison Queries Were the Most Balanced

Direct comparison queries produced some of the most objective answers in the reviewed sample.

Examples included:

  • iPhone vs Pixel
  • Samsung Galaxy vs Google Pixel
  • Chrome vs Safari
  • Google Workspace vs Microsoft 365

These answers usually did not declare Google's product the universal winner.

Instead, they split the recommendation by user priority.

Example: iPhone vs Pixel

Pixel strengths

  • AI features
  • Still photography
  • Customization
  • Value
  • Clean Android

iPhone strengths

  • Performance
  • Video quality
  • Ecosystem integration
  • Battery efficiency
  • Resale value
  • Long-term reliability

This is a balanced comparison, not an obvious Google-favorable answer.

Example: Samsung Galaxy vs Google Pixel

Samsung wins for

  • Hardware
  • Raw performance
  • Battery life
  • Video quality
  • Display
  • Build quality
  • Power-user features

Pixel wins for

  • AI-driven photography
  • Software simplicity
  • Clean Android
  • Value
  • Point-and-shoot camera quality

Again, this was balanced.

Example: Google Workspace vs Microsoft 365

Google Workspace wins for

  • Cloud-first workflows
  • Real-time collaboration
  • Simplicity
  • Remote teams
  • Ease of use

Microsoft 365 wins for

  • Desktop applications
  • Excel
  • Offline work
  • Enterprise compliance
  • Security controls
  • Windows integration
  • Finance/legal/power-user workflows

This is a strong example of use-case-based recommendation logic.

Key comparison-query takeaway

When Google AI was asked to compare Google directly against a named competitor, the answers became more balanced and more conditional.

That has a major implication for AEO strategy:

Brands need nuanced comparison content, not one-sided competitor pages.

AI platforms appear to prefer content that explains:

  • Who should choose Brand A
  • Who should choose Brand B
  • Where each product is stronger
  • Where each product is weaker
  • Which buyer profile each option fits

"AI answers don't crown a single winner — they crown one per use case."
(Finding 5 · Multiple category winners, Rankability)

Finding 5: AI Answers Create Multiple Winners

Traditional SEO tends to focus on a single winner: the page ranking #1.

AI answers work differently.

In many of the reviewed results, the AI answer created multiple category winners.

For example:

Query Type | Common AI Answer Structure
Best smartphones | Best overall, best Android, best camera, best battery, best foldable
Best browsers | Best overall, best for privacy, best for speed, best for Mac, best for Windows
Best email service | Best overall, best for privacy, best for business, best for Apple users
Best AI chatbot | Best overall, best for writing, best for research, best for Google users
Best collaboration software | Best for messaging, project management, docs, video, whiteboarding

This is one of the most important AEO insights from the report.

A brand does not need to be the best overall to win valuable AI visibility.

It needs to become the obvious answer for a specific use case.

That means agencies should help clients own specific AI recommendation slots, such as:

  • Best for small businesses
  • Best for enterprise teams
  • Best for privacy
  • Best for speed
  • Best for budget
  • Best for beginners
  • Best for agencies
  • Best for local businesses
  • Best for regulated industries
  • Best for integrations
  • Best for AI features

This is how AI visibility becomes more strategic than traditional ranking visibility.


Finding 6: Mentions Alone Are Not Enough

One of the biggest mistakes brands can make with AI visibility tracking is treating mentions as the only metric.

A brand mention can mean very different things.

For example:

Mention Type | Meaning
Positive recommendation | The brand is being recommended as a good choice.
Best overall placement | The brand is being positioned as the top option.
Category winner | The brand is recommended for a specific use case.
Neutral reference | The brand is mentioned without meaningful endorsement.
Negative comparison | The brand is mentioned as something weaker or less suitable.
Caveated recommendation | The brand is recommended, but with important drawbacks.

The reviewed dataset showed all of these. Examples:

Brand/Product | Positive Framing | Caveats / Negative Framing
Chrome | Extensions, speed, compatibility, DevTools | Privacy concerns, RAM usage, battery drain, tracking
Pixel | AI features, photography, clean Android, value | Weaker raw performance, weaker battery life, weaker video vs iPhone/Samsung
Gmail | Usability, spam filtering, ecosystem integration | Privacy concerns, targeted ads, inferior to Proton/Tuta for privacy
Google Sheets | Collaboration, free cloud use | Weaker than Excel for advanced analysis and power users
Gemini | Workspace integration, multimodal tasks | Not best overall vs ChatGPT, Claude, or Perplexity in several use cases

This is why AI visibility analysis needs more than a binary "mentioned/not mentioned" metric.

It needs to answer:

  • Was the brand recommended?
  • Was it ranked first?
  • Was it ranked in the top three?
  • What was it recommended for?
  • Was the sentiment positive, mixed, neutral, or negative?
  • What caveats were attached?
  • Which competitors appeared nearby?
  • Which competitors were preferred?

Framework · 8 Layers

The AI Recommendation Matrix

How AI answers actually position brands — beyond simple inclusion.

1. Inclusion: Was the brand mentioned at all in the AI answer?
2. Recommendation: Was it actively recommended, or merely referenced as one of many options?
3. Placement: Was it ranked #1, top 3, or buried lower in the recommendation list?
4. Category Ownership: What specific use case or category did it win — even if not the overall #1?
5. Sentiment: Was the framing positive, neutral, mixed, or negative?
6. Competitor Context: Which competitors appeared alongside it, and how were they positioned?
7. Caveats: What weaknesses, warnings, or qualifications were attached to the brand?
8. Omission: Were obvious competitors missing from the answer entirely?

AI visibility is not binary. Brands are positioned across all 8 layers, and every one shapes how the AI answer reads.

This framework turns AI visibility from a vanity metric into a strategic visibility audit.


"A vanity mention isn't visibility. Where you appear, how you're framed, and who appears beside you is."
(The AI Recommendation Matrix, Rankability)

What This Means for SEO and AEO

The bigger takeaway is not about Google. It is about how AI answer engines position every brand. Five implications follow directly from the patterns above:

  • AI visibility is multi-dimensional. Position alone isn't enough — recommendation, category, sentiment, and competitor context all matter (the eight layers of the matrix above).
  • Brands need to own a specific use case. Proton owns privacy email, Excel owns advanced analysis, Perplexity owns research with citations — that's the model to copy.
  • Comparison content matters more. Objective "who should choose what" pages match how AI answers structure recommendations.
  • Negative visibility is real. Brands can appear in an answer and still lose if the framing flags weaknesses — track criticism, not just inclusion.
  • Competitor positioning is part of your visibility. AI answers place brands in competitive sets; who appears beside you shapes how you're read.

The practical version of these five implications is the audit framework that follows.


Practical AI Visibility Audit for Agencies

Agencies can use this study as a blueprint for auditing client visibility across AI platforms.

Step 1: Build query sets by intent type

Do not only track broad keywords. Track a mix of:

Query Type | Example
Broad category | best project management software
Use case | best project management software for agencies
Pain point | best project management software for remote teams
Budget | best affordable project management software
Comparison | Asana vs ClickUp
Alternative | best Asana alternative
Segment | best project management software for small businesses
Constraint | best privacy-focused project management software

Tip: Use our free AI Search Query Generator to brainstorm intent-based queries quickly.
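The eight intent types above lend themselves to simple templating. One way to expand a seed category into a full query set is sketched below; the function name and templates are illustrative assumptions, not part of any Rankability tool.

```python
# Build one query per intent type for a seed category, following the
# intent table above. Templates are examples only; real audits would
# vary the use case, segment, and constraint per client.
def build_query_set(category: str, rival_a: str, rival_b: str) -> dict[str, str]:
    return {
        "broad":       f"best {category}",
        "use_case":    f"best {category} for agencies",
        "pain_point":  f"best {category} for remote teams",
        "budget":      f"best affordable {category}",
        "comparison":  f"{rival_a} vs {rival_b}",
        "alternative": f"best {rival_a} alternative",
        "segment":     f"best {category} for small businesses",
        "constraint":  f"best privacy-focused {category}",
    }

queries = build_query_set("project management software", "Asana", "ClickUp")
# queries["comparison"] is "Asana vs ClickUp"
```

The point of the structure is that every query carries its intent label, so results can later be grouped into broad vs constrained vs comparison buckets, the same split this study used.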

Step 2: Track multiple AI surfaces

For each query, track how the brand appears across AI answer engines. Examples:

  • Google AI Overviews
  • Google AI Mode
  • Gemini
  • ChatGPT
  • Perplexity
  • Claude
  • Copilot

Different AI platforms may produce different brand visibility patterns.

Step 3: Classify the answer

For each AI answer, score the brand across the eight layers of the AI Recommendation Matrix — inclusion, recommendation, placement, category ownership, sentiment, competitor context, caveats, and omission. That structured classification is the audit deliverable.

Step 4: Identify category gaps

Ask:

  • What are competitors being recommended for?
  • Which use cases do we not own?
  • Where are we mentioned but not recommended?
  • Where are we missing entirely?
  • Where are we described negatively?
  • Which pages or third-party sources may be influencing the answer?

Tip: Run target pages through the AI Search Indexability Checker to confirm AI crawlers can access them in the first place.

Step 5: Build content and authority around the missing recommendation slots

Visibility Gap | Content Response
Competitor owns "best for agencies" | Create agency-specific comparison and use-case content.
Brand is missing from "alternatives" queries | Build alternative pages and third-party review presence.
Brand is mentioned but not recommended | Improve positioning and proof around the use case.
Brand has negative caveats | Create content addressing the weakness directly.
Brand is excluded from top lists | Earn mentions on authoritative third-party lists and comparison pages.
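Steps 4 and 5 can be mechanized once category winners have been extracted from the tracked answers. A minimal sketch, with illustrative winner data drawn loosely from the Workspace-vs-365 patterns in this report:

```python
# Given use-case slots and the brand currently winning each one in AI
# answers, return the slots a client brand does not own. Data is
# illustrative, not study output.
def category_gaps(winners: dict[str, str], brand: str) -> list[str]:
    return [slot for slot, winner in winners.items() if winner != brand]

winners = {
    "real-time collaboration": "Google Workspace",
    "desktop apps / Excel":    "Microsoft 365",
    "enterprise compliance":   "Microsoft 365",
}
gaps = category_gaps(winners, "Google Workspace")
# gaps -> ["desktop apps / Excel", "enterprise compliance"]
```

Each returned slot is a candidate for the content responses in the table above: a use-case page, a comparison page, or third-party proof targeted at that specific recommendation slot.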

"The brands that win in AI answers aren't always the biggest. They're the ones with the clearest associations."
(What brands should learn from Google's own AI, Rankability)

What Brands Should Learn From Google's Own AI Answers

Ironically, Google's own AI answers reveal a playbook for AI visibility.

The brands that appear most effectively are not always the brands that dominate every broad category. They are the brands with clear associations.

Category Mapping

AI Recommendation Category Ownership

How AI surfaces assigned brands to specific use-case categories.

Google-owned products:
  • Google Pixel: AI photography, clean Android, value
  • Chrome: extensions, developer tools, compatibility
  • Gmail: usability, free, ecosystem
  • Google Workspace: real-time collaboration, cloud workflows
  • Google Meet: simplicity, browser access
  • Gemini: Workspace workflows, multimodal
  • Google Search: general search, local results

Competitor category winners:
  • ChatGPT: best all-around AI
  • Claude: writing, reasoning
  • Perplexity: research, citations
  • Zoom: best overall video
  • Microsoft Excel: advanced analysis
  • Proton Mail: privacy email
  • Safari: Mac battery efficiency
  • Microsoft Edge: Windows integration

AI answers don't just rank brands. They categorize them — assigning each one to a specific use case in the answer.

This is the future of search visibility.

It is not just about ranking.

It is about being mapped to a use case inside the answer.


Final Conclusion

This small study does not prove that Google AI is systematically biased toward Google products.

But it does show a pattern worth watching.

Google-owned products appeared frequently, were usually framed positively, and sometimes received top placement on broad commercial recommendation queries.

At the same time, Google's AI answers often showed meaningful objectivity.

They recommended non-Google competitors when the query intent supported it. They included caveats around Google products. They sometimes criticized Google products directly. And they were especially balanced on direct comparison queries.

The most useful conclusion is not that Google AI is simply biased or unbiased.

The better conclusion is this:

Google AI shows signs of Google-product preference in broad commercial queries, but that preference is usually shaped by query intent. The more specific, constrained, or comparison-driven the query becomes, the more balanced the answer tends to be.

For SEO teams and agencies, the bigger lesson is clear:

AI search visibility is not binary. The goal is not just to be mentioned. The goal is to be recommended for the right reason, in the right category, with positive framing, against the right competitors.

That is the new visibility challenge.

And it is exactly what brands need to start measuring.


FAQ

Does this study prove Google AI is biased?

No. This was a small directional study, not a statistically definitive analysis. The findings suggest patterns of Google-product preference in some categories, especially broad commercial queries, but they do not prove systematic bias.

Did Google AI always recommend Google products?

No. Google products were not always recommended. For example, Pixel was not recommended for phone battery life, Chrome was not recommended as the best privacy browser, Gmail was not recommended for private email, and Google Meet did not beat Zoom as the best overall video conferencing tool.

When did Google products perform best?

Google products performed best on broad queries and queries where Google has strong category association. Examples included Chrome for web browsers and developer browsers, Gmail for general email, Google Search for general search, and Pixel as an iPhone alternative.

When did Google AI show the most objectivity?

Google AI showed the most objectivity on privacy queries, battery-life queries, platform-specific queries, and direct comparison queries.

What is the biggest AEO takeaway?

The biggest takeaway is that AI visibility is about more than being mentioned. Brands need to know whether they are recommended, where they are placed, what use case they are assigned to, how they are described, and which competitors appear beside them.

How should agencies use this?

Agencies should build AI visibility audits that track broad queries, use-case queries, comparison queries, alternative queries, and constraint-based queries. Then they should classify visibility by mention, placement, sentiment, category ownership, and competitor context.


Track your brand across AI search

Rankability tracks how brands show up across Google, AI Overviews, AI Mode, Gemini, ChatGPT, Perplexity, Claude, and other AI answer engines.