Is Gemini Biased
Toward Google?
A directional analysis of commercial recommendation queries across AI Overviews, AI Mode, and Gemini — and what it means for AI search visibility.
Google's AI products are becoming a bigger part of how people discover, compare, and choose brands.
That creates an important question for marketers, SEO teams, and agencies:
When Google's AI recommends products, does it favor Google-owned products?
To explore that question, Rankability analyzed a small sample of commercial recommendation queries across Google's AI surfaces, including Google AI Overviews, AI Mode, and Gemini.
The goal was not to prove systematic bias, and this was not a statistically definitive study. Instead, it was designed as a directional analysis to understand how Google-owned products appear inside Google's AI-generated answers.
We wanted to know:
- How often do Google-owned products appear?
- When they appear, are they recommended as the best option?
- Are they framed positively, neutrally, or negatively?
- Are obvious competitors included or omitted?
- What does this reveal about AI search visibility for brands?
The short answer:
Google-owned products appeared frequently and were often described positively, but Google's AI answers were not blindly pro-Google. In many cases, Google's AI recommended non-Google competitors as the best overall option when the query intent supported it.
The more important takeaway for SEO and AEO teams is this:
AI visibility is not binary. It is not enough to know whether a brand is mentioned. Brands need to understand where they appear, what category they are assigned to, how they are described, which competitors appear beside them, and whether they are recommended for the right use case.
Executive Summary
Across the reviewed dataset, Google products appeared often in Google AI answers — but inclusion rarely meant dominance. Six patterns stood out.
Key Findings
- Broad recommendation queries produced the strongest Google preference signals.
- Google products were framed positively, but usually for a specific use case rather than as the overall best.
- Constraint-based queries, such as privacy and battery life, forced more objective answers.
- Direct comparison queries were the most balanced.
- AI answers create multiple winners, one per use case.
- Mentions alone are not enough; placement, sentiment, and competitor context matter.
Methodology
How We Ran the Analysis
Limitation: This was a small directional study, not a statistically definitive analysis. Findings should be read as patterns worth watching, not proof of systematic bias.
Rankability tested commercial recommendation queries across Google's AI products.
The reviewed dataset included examples from three Google AI surfaces:
- Google AI Overviews
- Google AI Mode
- Gemini
The analysis focused on queries where Google owns a relevant product or competes directly in the category.
Examples included:
- Smartphones: Pixel, Android
- Browsers: Chrome
- Email: Gmail
- Productivity: Google Workspace, Docs, Sheets, Drive, Meet
- AI chatbots: Gemini
- Search engines: Google Search
- Video conferencing: Google Meet
What we analyzed
Each AI answer was reviewed through the following lenses:
| Lens | Question |
|---|---|
| Google inclusion | Was a Google-owned product mentioned? |
| Recommendation status | Was the Google product actually recommended, or merely referenced? |
| Placement | Was the Google product ranked #1, top 3, category-specific, or lower? |
| Sentiment | Was the framing positive, neutral, mixed, or negative? |
| Query fit | Did the recommendation match the query intent? |
| Competitor context | Which non-Google competitors appeared in the answer? |
| Objectivity | Did the answer include caveats, tradeoffs, or situations where competitors were better? |
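A team replicating this review could record the seven lenses above as a simple structured record per answer. The sketch below is purely illustrative; the field names and value sets are assumptions, not Rankability's actual tooling or schema:

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class AnswerReview:
    """One reviewed AI answer, scored across the seven lenses above.

    Illustrative only -- field names and value sets are assumptions,
    not Rankability's internal schema.
    """
    query: str
    surface: str                # e.g. "AI Overviews", "AI Mode", "Gemini"
    google_product: str | None  # None if no Google product was mentioned
    recommended: bool           # actually recommended, or merely referenced?
    placement: str              # "#1", "top 3", "category-specific", "lower"
    sentiment: str              # "positive", "neutral", "mixed", "negative"
    matches_intent: bool        # did the recommendation fit the query intent?
    competitors: list[str] = field(default_factory=list)
    caveats: list[str] = field(default_factory=list)


# Example: the "best web browser" result from the Query Outcome Matrix.
# The competitor list here is a hypothetical placeholder.
review = AnswerReview(
    query="best web browser",
    surface="AI Overviews",
    google_product="Chrome",
    recommended=True,
    placement="#1",
    sentiment="positive",
    matches_intent=True,
    competitors=["Firefox", "Safari", "Edge"],
    caveats=["privacy concerns", "high RAM usage"],
)
```

Capturing the caveats and competitor context explicitly is what separates this from a binary mention tracker.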
Important limitation
This was a small directional study, not a statistically definitive analysis.
The goal was to identify patterns worth watching, not to claim that Google's AI systems are systematically biased across all possible queries.
The results should be interpreted as an early look at how Google AI surfaces recommend brands in commercial categories where Google has a direct product interest.
The Core Finding: Google AI Shows Product Preference, But Not Blindly
A simplistic version of the study might ask:
Does Google AI recommend Google products?
The better question is:
When Google AI recommends Google products, does it recommend them as the best option, a category-specific option, or merely one option among many?
The data showed that Google products were often placed into specific recommendation categories rather than always being pushed as the overall winner. A few representative examples:
| Google Product | Common AI Framing |
|---|---|
| Google Pixel | Best for AI features, still photography, clean Android, value |
| Google Chrome | Best for extensions, compatibility, developer tools, Google ecosystem |
| Gmail | Best for overall usability, free email, Google ecosystem integration |
| Google Workspace | Best for real-time collaboration, shared docs, remote teams |
This suggests that AI-generated recommendations are often category-assignment systems.
The AI does not merely say which brand is best. It decides what each brand is best for.
That has major implications for AI SEO and AEO.
Query Outcome Matrix
How Google-owned products performed across the reviewed sample of commercial recommendation queries.
| Query | Google Product | #1 Overall? | Sentiment | Bias Signal | Objectivity |
|---|---|---|---|---|---|
| **Broad recommendation queries** | | | | | |
| best web browser | Chrome | Yes | Positive w/ caveats | Strong | Moderate |
| best email service | Gmail | Yes | Positive w/ caveats | Strong | Moderate |
| best search engine | Google Search | Yes | Positive w/ caveats | Strong | Moderate |
| best AI chatbot | Gemini | No | Positive | Low–Mod. | Strong |
| best ChatGPT alternative | Gemini | No | Positive | Low–Mod. | Strong |
| best spreadsheet software | Google Sheets | No | Positive | Low–Mod. | Strong |
| best video conferencing software | Google Meet | No | Positive | Low | Strong |
| **Constraint-based queries** | | | | | |
| best browser for privacy | Chrome | No | Negative / not rec. | Low | Strong |
| best browser for battery life | Chrome | No | Mixed / negative | Low | Strong |
| best email provider for privacy | Gmail | No | Negative / cautionary | Low | Strong |
| **Direct comparison queries** | | | | | |
| Chrome vs Safari | Chrome | No clear winner | Neutral / balanced | Low | Strong |
| Google Workspace vs Microsoft 365 | Google Workspace | No clear winner | Neutral to positive | Low | Strong |
Finding 1: Broad Queries Produced the Strongest Google Preference Signals
Where Google Preference Was Strongest and Weakest
Takeaway: Stronger Google preference appeared on broad recommendation queries — but not on constrained or direct comparison queries.
The strongest Google preference signals appeared on broad, general recommendation queries: best web browser, best email service, and best search engine (see the Query Outcome Matrix above).
These queries matter because they represent classic commercial discovery behavior. They are also the types of queries brands care about most because they can influence buyers early in the consideration process.
Example: Best web browser
For the query best web browser, Chrome was named the best overall browser in multiple Google AI surfaces.
However, the answer was not entirely promotional. The AI also acknowledged Chrome's drawbacks, including privacy concerns and high RAM usage.
This is an important pattern:
Google products can receive top placement while still being described with caveats.
That means Google AI may favor a Google product in placement while remaining somewhat balanced in sentiment.
Example: Best email service
For best email service, Gmail received strong overall placement for usability, spam filtering, storage, and ecosystem integration.
But when the query shifted to best email provider for privacy, Gmail was not recommended. Instead, Proton Mail, Tuta, and other privacy-focused providers were recommended.
This suggests the recommendation changes dramatically when query intent becomes more specific.
Example: Best search engine
For best search engine, Google was positioned as the best general search engine. This is a strong Google preference signal, but also a defensible one because Google remains the dominant search engine and is widely associated with general search quality, index size, and local results.
However, the AI answers also separated general search from privacy, AI research, and independent search.
Non-Google alternatives received category wins:
| Use Case | Recommended Non-Google Alternatives |
|---|---|
| Privacy | DuckDuckGo, Brave Search, Startpage |
| AI research | Perplexity |
| Independent index | Brave Search |
| Paid/pro search | Kagi |
The lesson is not simply that Google won.
The lesson is that AI answers often create multiple winners based on use case.
Finding 2: Google Products Were Often Positive, But Usually Category-Specific
When Google products appeared, the sentiment was usually positive.
But that positive sentiment was often attached to a narrow use case.
Smartphone queries
Pixel appeared frequently across phone-related queries, but it usually did not win best overall.
| Query | Google Product | Google Placement | Overall Winner / Stronger Competitor |
|---|---|---|---|
| best smartphones | Pixel | Category-specific, AI features | iPhone overall, Samsung power user |
| best Android phones | Pixel | Top 3 / category winner | Samsung overall |
| best phone for photography | Pixel | Best for AI/stills | iPhone overall/video, Xiaomi hardware |
| best phone for battery life | Pixel | Not recommended | Oppo / OnePlus / iPhone |
| best phone for business users | Pixel | Best for AI/software | iPhone security/ecosystem, Samsung multitasking |
| best phone for creators | Pixel | Best for photography/AI editing | iPhone video/socials, Samsung Android all-rounder |
| best smartphone under $800 | Pixel | AI/camera category | Samsung or iPhone overall/value |
| best alternative to iPhone | Pixel | Best overall alternative | None (strong Google preference signal) |
The most common Pixel associations were:
- AI features
- AI editing
- Computational photography
- Still photography
- Clean Android
- Software simplicity
- Value
- Google Workspace fit
This is not the same as saying Pixel was always recommended as the best phone.
In fact, the opposite was often true. Apple and Samsung frequently received stronger overall placements.
Key smartphone takeaway
Google Pixel appeared frequently and positively, but mostly as a category-specific recommendation rather than the universal best smartphone.
This is important because AI visibility is often about owning a niche inside the answer.
A brand may not win "best overall," but it can still win:
- Best for AI
- Best for creators
- Best for value
- Best for business users
- Best for Android users
- Best for simplicity
That is how brands get visibility in AI-generated recommendation lists.
Finding 3: Constraint-Based Queries Forced More Objectivity
Broad vs Constrained Queries
The more specific the query, the more balanced the AI answer tended to become.
The clearest objectivity signals came from queries with stronger constraints — privacy, battery life, platform-specific use cases, and direct comparisons (see the constraint-based and direct comparison rows in the Query Outcome Matrix above).
This is one of the most important findings in the study: Google AI did not force Google products into answers when the product did not fit the query intent.
Example: Privacy browser queries
For privacy-focused browser queries, Google AI did not recommend Chrome as the top option.
Instead, privacy-focused browsers such as Brave, Tor, Mullvad, LibreWolf, Firefox, and DuckDuckGo were recommended.
In one answer, Chrome was specifically criticized for tracking behavior.
That matters because it shows Google AI can express negative sentiment toward a Google-owned product when the query intent calls for it.
Example: Battery life queries
For phone battery life, Pixel did not appear as a leading recommendation.
For browser battery life, Chrome was not recommended as the top option. Microsoft Edge and Safari received stronger placements depending on operating system.
This suggests that query intent can override product ownership.
Key constraint-query takeaway
The more specific the query, the more objective the AI answer tended to become.
Broad queries created more room for Google products to receive prominent placement. Narrow, constraint-based queries forced the answer to reflect the product's actual strengths and weaknesses.
Finding 4: Direct Comparison Queries Were the Most Balanced
Direct comparison queries produced some of the most objective answers in the reviewed sample.
Examples included:
- iPhone vs Pixel
- Samsung Galaxy vs Google Pixel
- Chrome vs Safari
- Google Workspace vs Microsoft 365
These answers usually did not declare Google's product the universal winner.
Instead, they split the recommendation by user priority.
Example: iPhone vs Pixel
Pixel strengths
- AI features
- Still photography
- Customization
- Value
- Clean Android
iPhone strengths
- Performance
- Video quality
- Ecosystem integration
- Battery efficiency
- Resale value
- Long-term reliability
This is a balanced comparison, not an obvious Google-favorable answer.
Example: Samsung Galaxy vs Google Pixel
Samsung wins for
- Hardware
- Raw performance
- Battery life
- Video quality
- Display
- Build quality
- Power-user features
Pixel wins for
- AI-driven photography
- Software simplicity
- Clean Android
- Value
- Point-and-shoot camera quality
Again, this was balanced.
Example: Google Workspace vs Microsoft 365
Google Workspace wins for
- Cloud-first workflows
- Real-time collaboration
- Simplicity
- Remote teams
- Ease of use
Microsoft 365 wins for
- Desktop applications
- Excel
- Offline work
- Enterprise compliance
- Security controls
- Windows integration
- Finance/legal/power-user workflows
This is a strong example of use-case-based recommendation logic.
Key comparison-query takeaway
When Google AI was asked to compare Google directly against a named competitor, the answers became more balanced and more conditional.
That has a major implication for AEO strategy:
Brands need nuanced comparison content, not one-sided competitor pages.
AI platforms appear to prefer content that explains:
- Who should choose Brand A
- Who should choose Brand B
- Where each product is stronger
- Where each product is weaker
- Which buyer profile each option fits
Finding 5: AI Answers Create Multiple Winners
Traditional SEO tends to focus on a single winner: the page ranking #1.
AI answers work differently.
In many of the reviewed results, the AI answer created multiple category winners.
For example:
| Query Type | Common AI Answer Structure |
|---|---|
| Best smartphones | Best overall, best Android, best camera, best battery, best foldable |
| Best browsers | Best overall, best for privacy, best for speed, best for Mac, best for Windows |
| Best email service | Best overall, best for privacy, best for business, best for Apple users |
| Best AI chatbot | Best overall, best for writing, best for research, best for Google users |
| Best collaboration software | Best for messaging, project management, docs, video, whiteboarding |
This is one of the most important AEO insights from the report.
A brand does not need to be the best overall to win valuable AI visibility.
It needs to become the obvious answer for a specific use case.
That means agencies should help clients own specific AI recommendation slots, such as:
- Best for small businesses
- Best for enterprise teams
- Best for privacy
- Best for speed
- Best for budget
- Best for beginners
- Best for agencies
- Best for local businesses
- Best for regulated industries
- Best for integrations
- Best for AI features
This is how AI visibility becomes more strategic than traditional ranking visibility.
Finding 6: Mentions Alone Are Not Enough
One of the biggest mistakes brands can make with AI visibility tracking is treating mentions as the only metric.
A brand mention can mean very different things.
For example:
| Mention Type | Meaning |
|---|---|
| Positive recommendation | The brand is being recommended as a good choice. |
| Best overall placement | The brand is being positioned as the top option. |
| Category winner | The brand is recommended for a specific use case. |
| Neutral reference | The brand is mentioned without meaningful endorsement. |
| Negative comparison | The brand is mentioned as something weaker or less suitable. |
| Caveated recommendation | The brand is recommended, but with important drawbacks. |
The reviewed dataset showed all of these. Examples:
| Brand/Product | Positive Framing | Caveats / Negative Framing |
|---|---|---|
| Chrome | Extensions, speed, compatibility, DevTools | Privacy concerns, RAM usage, battery drain, tracking |
| Pixel | AI features, photography, clean Android, value | Weaker raw performance, weaker battery life, weaker video vs iPhone/Samsung |
| Gmail | Usability, spam filtering, ecosystem integration | Privacy concerns, targeted ads, inferior to Proton/Tuta for privacy |
| Google Sheets | Collaboration, free cloud use | Weaker than Excel for advanced analysis and power users |
| Gemini | Workspace integration, multimodal tasks | Not best overall vs ChatGPT, Claude, or Perplexity in several use cases |
This is why AI visibility analysis needs more than a binary "mentioned/not mentioned" metric.
It needs to answer:
- Was the brand recommended?
- Was it ranked first?
- Was it ranked in the top three?
- What was it recommended for?
- Was the sentiment positive, mixed, neutral, or negative?
- What caveats were attached?
- Which competitors appeared nearby?
- Which competitors were preferred?
The AI Recommendation Matrix
How AI answers actually position brands — beyond simple inclusion. The matrix scores each answer across eight layers:
- Inclusion
- Recommendation
- Placement
- Category ownership
- Sentiment
- Competitor context
- Caveats
- Omission
This framework turns AI visibility from a vanity metric into a strategic visibility audit.
What This Means for SEO and AEO
The bigger takeaway is not about Google. It is about how AI answer engines position every brand. Five implications follow directly from the patterns above:
- AI visibility is multi-dimensional. Position alone isn't enough — recommendation, category, sentiment, and competitor context all matter (the eight layers of the matrix above).
- Brands need to own a specific use case. Proton owns privacy email, Excel owns advanced analysis, Perplexity owns research with citations — that's the model to copy.
- Comparison content matters more. Objective "who should choose what" pages match how AI answers structure recommendations.
- Negative visibility is real. Brands can appear in an answer and still lose if the framing flags weaknesses — track criticism, not just inclusion.
- Competitor positioning is part of your visibility. AI answers place brands in competitive sets; who appears beside you shapes how you're read.
The practical version of these five implications is the audit framework that follows.
Practical AI Visibility Audit for Agencies
Agencies can use this study as a blueprint for auditing client visibility across AI platforms.
Step 1: Build query sets by intent type
Do not only track broad keywords. Track a mix of:
| Query Type | Example |
|---|---|
| Broad category | best project management software |
| Use case | best project management software for agencies |
| Pain point | best project management software for remote teams |
| Budget | best affordable project management software |
| Comparison | Asana vs ClickUp |
| Alternative | best Asana alternative |
| Segment | best project management software for small businesses |
| Constraint | best privacy-focused project management software |
Tip: Use our free AI Search Query Generator to brainstorm intent-based queries quickly.
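Step 1 lends itself to a small helper that expands a seed category into the intent mix above. This is a hypothetical sketch; the templates, brand names, and use-case fillers are illustrative placeholders, and a real audit would tailor them per client:

```python
def build_query_set(category: str, brand: str, competitor: str) -> dict[str, str]:
    """Expand one seed category into the eight intent types from Step 1.

    Hypothetical helper -- the templates are illustrative, not exhaustive.
    """
    return {
        "broad": f"best {category}",
        "use_case": f"best {category} for agencies",
        "pain_point": f"best {category} for remote teams",
        "budget": f"best affordable {category}",
        "comparison": f"{brand} vs {competitor}",
        "alternative": f"best {brand} alternative",
        "segment": f"best {category} for small businesses",
        "constraint": f"best privacy-focused {category}",
    }


# Mirrors the example rows in the Step 1 table
queries = build_query_set("project management software", "Asana", "ClickUp")
```

The point of generating the full set, rather than tracking only the broad keyword, is that the study found recommendations flipping entirely between the broad and constrained versions of the same category.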
Step 2: Track multiple AI surfaces
For each query, track how the brand appears across AI answer engines. Examples:
- Google AI Overviews
- Google AI Mode
- Gemini
- ChatGPT
- Perplexity
- Claude
- Copilot
Different AI platforms may produce different brand visibility patterns.
Step 3: Classify the answer
For each AI answer, score the brand across the eight layers of the AI Recommendation Matrix — inclusion, recommendation, placement, category ownership, sentiment, competitor context, caveats, and omission. That structured classification is the audit deliverable.
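One way to turn those per-answer classifications into a deliverable is a simple aggregation across the query set. A minimal sketch, assuming each answer has already been scored by hand (or by an LLM grader) into a dict with an assumed schema — the key names here are placeholders, not a standard format:

```python
from collections import Counter


def summarize_visibility(answers: list[dict]) -> dict:
    """Aggregate per-answer classifications into an audit summary.

    Each answer dict is assumed to carry 'mentioned', 'recommended',
    'sentiment', and 'category' keys -- an assumed schema for this sketch.
    """
    mentioned = [a for a in answers if a["mentioned"]]
    return {
        "queries_checked": len(answers),
        "mention_rate": len(mentioned) / len(answers) if answers else 0.0,
        "recommended": sum(1 for a in mentioned if a["recommended"]),
        "sentiment_mix": dict(Counter(a["sentiment"] for a in mentioned)),
        "categories_owned": sorted(
            {a["category"] for a in mentioned if a.get("category")}
        ),
    }


# Toy input: three classified answers for one brand
summary = summarize_visibility([
    {"mentioned": True, "recommended": True,
     "sentiment": "positive", "category": "best overall"},
    {"mentioned": True, "recommended": False,
     "sentiment": "negative", "category": None},
    {"mentioned": False, "recommended": False,
     "sentiment": None, "category": None},
])
# summary["mention_rate"] == 2/3, but only 1 of 3 answers is a recommendation
```

The gap between `mention_rate` and `recommended` in the output is exactly the distinction Finding 6 warns about: counting mentions alone would overstate this brand's visibility.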
Step 4: Identify category gaps
Ask:
- What are competitors being recommended for?
- Which use cases do we not own?
- Where are we mentioned but not recommended?
- Where are we missing entirely?
- Where are we described negatively?
- Which pages or third-party sources may be influencing the answer?
Tip: Run target pages through the AI Search Indexability Checker to confirm AI crawlers can access them in the first place.
Step 5: Build content and authority around the missing recommendation slots
| Visibility Gap | Content Response |
|---|---|
| Competitor owns "best for agencies" | Create agency-specific comparison and use-case content. |
| Brand is missing from "alternatives" queries | Build alternative pages and third-party review presence. |
| Brand is mentioned but not recommended | Improve positioning and proof around the use case. |
| Brand has negative caveats | Create content addressing the weakness directly. |
| Brand is excluded from top lists | Earn mentions on authoritative third-party lists and comparison pages. |
What Brands Should Learn From Google's Own AI Answers
Ironically, Google's own AI answers reveal a playbook for AI visibility.
The brands that appear most effectively are not always the brands that dominate every broad category. They are the brands with clear associations.
AI Recommendation Category Ownership
How AI surfaces assigned brands to specific use-case categories.
AI answers don't just rank brands. They categorize them — assigning each one to a specific use case in the answer.
This is the future of search visibility.
It is not just about ranking.
It is about being mapped to a use case inside the answer.
Final Conclusion
This small study does not prove that Google AI is systematically biased toward Google products.
But it does show a pattern worth watching.
Google-owned products appeared frequently, were usually framed positively, and sometimes received top placement on broad commercial recommendation queries.
At the same time, Google's AI answers often showed meaningful objectivity.
They recommended non-Google competitors when the query intent supported it. They included caveats around Google products. They sometimes criticized Google products directly. And they were especially balanced on direct comparison queries.
The most useful conclusion is not that Google AI is simply biased or unbiased.
The better conclusion is this:
Google AI shows signs of Google-product preference in broad commercial queries, but that preference is usually shaped by query intent. The more specific, constrained, or comparison-driven the query becomes, the more balanced the answer tends to be.
For SEO teams and agencies, the bigger lesson is clear:
AI search visibility is not binary. The goal is not just to be mentioned. The goal is to be recommended for the right reason, in the right category, with positive framing, against the right competitors.
That is the new visibility challenge.
And it is exactly what brands need to start measuring.
FAQ
Does this study prove Google AI is biased?
No. This was a small directional study, not a statistically definitive analysis. The findings suggest patterns of Google-product preference in some categories, especially broad commercial queries, but they do not prove systematic bias.
Did Google AI always recommend Google products?
No. Google products were not always recommended. For example, Pixel was not recommended for phone battery life, Chrome was not recommended as the best privacy browser, Gmail was not recommended for private email, and Google Meet did not beat Zoom as the best overall video conferencing tool.
When did Google products perform best?
Google products performed best on broad queries and queries where Google has strong category association. Examples included Chrome for web browsers and developer browsers, Gmail for general email, Google Search for general search, and Pixel as an iPhone alternative.
When did Google AI show the most objectivity?
Google AI showed the most objectivity on privacy queries, battery-life queries, platform-specific queries, and direct comparison queries.
What is the biggest AEO takeaway?
The biggest takeaway is that AI visibility is about more than being mentioned. Brands need to know whether they are recommended, where they are placed, what use case they are assigned to, how they are described, and which competitors appear beside them.
How should agencies use this?
Agencies should build AI visibility audits that track broad queries, use-case queries, comparison queries, alternative queries, and constraint-based queries. Then they should classify visibility by mention, placement, sentiment, category ownership, and competitor context.
Track your brand across AI search
Rankability tracks how brands show up across Google, AI Overviews, AI Mode, Gemini, ChatGPT, Perplexity, Claude, and other AI answer engines.