Episode 5

Does Google Now Approve of AI Slop SEO Content?

November 19, 2025 · 25 minutes

Google, OpenAI, and ChatGPT are redefining how SEO works, and this week's episode covers three of the biggest updates shaping the future of AI search.

In this episode of The AI Search Report, Nathan Gotch explains why Google's new "Opal" content tools are sparking debate about AI-generated content, how to actually rank in ChatGPT's new 5.1 model, and why backlinks alone no longer drive visibility in AI search results.

You will also hear about new features in Google Search Console, including the long-awaited ability to add annotations, and why this simple update is a game changer for tracking SEO performance.

Nathan dives deep into the differences between deterministic and non-deterministic behavior in AI systems, revealing why performance tracking in ChatGPT, Perplexity, and Gemini is nearly impossible to measure perfectly.

Finally, you will learn why traditional SEO tactics like PR link building are losing influence and what to focus on instead if you want your brand to appear in AI retrieval results and AI-generated answers.

You'll Learn

  • How to use the new annotation feature in Google Search Console
  • What changed with ChatGPT 5.1 and how it affects rankings
  • The difference between static training and live retrieval in AI systems
  • Why tracking brand visibility across AI search is so unpredictable
  • What Google's new "Opal" content tools reveal about AI-written content
  • Why "AI slop" content still fails to rank without optimization
  • How Rankability's AI Writer beats generic content generators
  • Why backlinks alone no longer guarantee visibility in AI search
  • The new process for identifying and targeting AI retrieval sources

Key Takeaways

  • ChatGPT 5.1 combines static data with live retrieval to improve accuracy
  • AI search platforms behave unpredictably, making performance tracking complex
  • Google now promotes AI-written content but still punishes low-quality output
  • Backlinks matter less than brand presence across retrieval sources
  • Ranking in traditional search is the foundation for AI visibility

Full Transcript

00:00

Welcome to episode five of the AI Search Report. Today we will answer three questions: does Google now approve of AI slop content, how do you rank in ChatGPT's new 5.1 model, and why do backlinks not always drive AI visibility? I am your host, Nathan Gotch, and this episode is brought to you by Rankability, AI SEO software that helps you rank everywhere customers are searching.

Let's dive right into it.

Alright, so let us start with some quick news from Google Search Console. You can officially add annotations inside of Google Search Console. This is long overdue and it is going to be incredibly helpful for actually being able to track a lot of performance and, ultimately, a lot of the changes that you are making that could influence your performance inside of Google Search Console.

So it is very simple. You come over here, you right click on any point, and then you click "Add annotation," and then you enter the details for the change that was made.

00:51

So obviously in the case of Rankability, we made a pretty significant change here, which was we began to migrate GotchSEO.com to Rankability.com. That is a very big operation and something that can obviously influence SEO and organic search performance. So we are tracking that very closely, and now we can properly document this through the whole process.

Definitely take advantage of this and make sure when you are making significant changes, you are documenting them so you can go back and analyze how that actually played out.

Okay, now the next thing here is another big one: GPT 5.1 is officially here. There was some pretty nasty feedback on GPT 5, so I think OpenAI felt they had to make some significant changes. This is not a completely brand-new model; we are not up to number 6. It is a variant of 5.

I have been testing it and it actually is quite good, I will admit. So I am going to show you a couple of things real quickly here.

01:39
Number one, as I always like to do, I analyze very specific queries at each stage of the sales cycle. So in this case, I am going to use none other than "baseball pitching mechanics," which is the first blog I ever created. It was about baseball pitching, so I understand this topic quite well.

The very first query is something much more informational in nature. Notice that I am specifically using 5.1 and using the "auto" setting. I did not want to use the Instant or the Reasoning models. I wanted to let it decide what was appropriate based on the query.

Here is how we can tell whether it is using the static corpus, meaning the baked-in training data, or using retrieval. In this case, it is using its static training: it pulls directly from its training data and does not need to go and retrieve any information.

Looking at the "pitching mechanics" response, I already went through it and it is quite accurate. It covers all the critical points, and notice there are no citations anywhere. There is no retrieval process, no citations.

03:00
So basically what happened here is that during the training process, it went through and learned a ton of information about pitching mechanics, baked it into the model, and now it can answer this question quite well.

I would say this is probably 95 percent of the way there as far as being able to throw a baseball properly.

Okay, so that is the first one.

The second example is now we are going a little bit deeper into the funnel. This one is specifically designed to drive the retrieval process. What is interesting about this is that it is an e-commerce based query.

You can see if we click on "Thought," it actually did not run a web search, which is fascinating. There are no sources here. Instead, it pulled these products directly, presumably based on some sort of relationship it has with these brands.

We can see New Balance popping up a ton. If we click on Nike, you can go directly to the Nike website, or directly to Academy Sports, or directly to Dick's Sporting Goods.

03:58
We knew this was coming with the shopping feature inside of ChatGPT, and you can tell this is a whole different game. If you are playing the e-commerce game, there is not going to be a whole lot of retrieval through search anymore.

If you are in e-commerce, you have to get on the shopping features, otherwise you are going to be in trouble. You want ChatGPT doing that direct retrieval from your e-commerce store and not having to go to search and then find your stuff.

So this is a very different game we are playing here, and you need to pay attention to this.

Another example here is a simple question like, "Who is the best baseball pitcher of all time?"

Once again, it thought for about eight seconds and it did use retrieval in this case. It probably did not feel very confident, so it wanted to go and get some current information, come back, and create the best response possible.

This ended up being a very good response, very accurate and hard to dispute. So it used some of its static corpus, then enhanced it by retrieving more information to make sure it delivered the most up-to-date answer, because the answer can change if a newer pitcher is doing particularly well. It wants to make sure it is giving the best information.

I would give this a solid ten out of ten.

05:29
Then this next one is more of an action based query where it is going to go and create link bait for a baseball pitching blog. Once again, it produces pretty good ideas. You could actually go through this process in ChatGPT and have it start to build out some of this link bait for you.

So I just wanted to give those examples. So far it is looking pretty solid.

Okay, now here is something I really want to show you today.

This is something that I think a lot of people are having a hard time with when it comes to tracking performance inside of these platforms.

You have to understand the concept of non-deterministic behavior. This is what makes ChatGPT, Perplexity, Claude, and even Google's AI products incredibly different from traditional search.

06:16
For example, in traditional search we can run this exact query. We will go here, run the query in Google, and put the two results side by side.

This is very deterministic behavior, with the exception of the AI section. If we go past the AI section and look specifically at the search results, they are not going to change very much.

We can go and look at the businesses and see Chesterfield Service, Chesterfield Service, AAA Plumbing, AAA Plumbing. Then it changes a little bit in the local pack, but for the most part it is the same.

We go and look at the traditional search results. What do we see? We have Yelp, Better Business Bureau, and Angie. Over here we have Better Business Bureau, Yelp, and Angie.

So once again, it is pretty much the same, whether it is the local pack or the traditional search results.

AI does not function like this though.

07:00
We can even see this in Google right now when we look at AI Overviews. We ran the same query twice, and the results are slightly different. Not very different, and it seems like the answer might be cached more than anything, but there is still some variation between the two runs.

AI Overviews are a little more static than some of the other platforms, a little more deterministic, but not fully.

We can see a different answer here. Pretty much the same, but there are still some slight differences.

This gets even more dramatic when you start to deal with ChatGPT, for example.

Look at this: I ran the exact same query twice, literally back to back. The reason I am doing this is to show you that tracking is really hard.

Based on my analysis, there was not a whole lot of overlap between just these two runs. The exact same query produced different results.

07:50
In one result we have Chesterfield Service as number one. In the other, we have Trio as number one. The top three are kind of similar, but then there are a bunch of brands in the second result that were not included in the first response.

So let us say you are doing your brand tracking and you run a query in one of the AI tracking tools, even Rankability's, and you see "Okay, we are number one here" or "We are number two, we are good to go."

If you run that same exact query again, you could end up not even being in the response.

So this is the challenge right now with tracking, and I do not think anyone has this fully figured out enough to get within, say, 95 percent accuracy. We are all still trying to figure out the best strategy.

This non-deterministic behavior is one of the biggest problems.

Then we go here and look at the citations. This is where things get really weird too.

09:00
On one side, 17 sources were used. On the other, 25 were used. You can see the citations are different. It is going and retrieving different things and picking and choosing what it wants in order to formulate the response. It is different almost every single time.

In fact, the large majority of these citations had no crossover whatsoever. They were unique based on the response. There were maybe around 10 that had similarities, and the others were completely unique.
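One way to put a number on this behavior is to compare the citation sets from two runs of the same query. Here is a minimal sketch in Python; the citation URLs are hypothetical placeholders, not the actual sources from the episode.

```python
# Quantify citation overlap between two runs of the same AI query.
# The URLs below are made-up examples for illustration.
from urllib.parse import urlparse

def domains(citations):
    """Reduce full citation URLs to bare hostnames for comparison."""
    return {urlparse(url).netloc.removeprefix("www.") for url in citations}

run_a = domains([
    "https://www.yelp.com/biz/chesterfield-service",
    "https://www.bbb.org/us/mo/chesterfield",
    "https://www.angi.com/companylist/chesterfield",
])
run_b = domains([
    "https://www.bbb.org/us/mo/chesterfield",
    "https://www.expertise.com/mo/chesterfield/plumbing",
    "https://www.angi.com/companylist/chesterfield",
])

shared = run_a & run_b
overlap = len(shared) / len(run_a | run_b)  # Jaccard similarity, 0 to 1
print(sorted(shared), round(overlap, 2))
```

An overlap score near 1 would mean near-deterministic citations; in practice, back-to-back runs often score much lower, which is exactly the tracking problem described here.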

So once again, if you are trying to figure out "Okay, if I am working with a plumber or I am a plumber myself and I am trying to drive visibility in these platforms, what should I do?"

The best thing you can do is run the same exact query multiple times.

So let us say "Who are the best plumbers in Chesterfield, Missouri?" Run that same query five to ten times in five to ten different chat windows. Then extract all of those citations. From there you can get a good general idea of what the platform's favorite sources are.
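The process above can be sketched in a few lines of Python: collect the cited domains from each run, then count which sources keep reappearing. The runs below are hypothetical stand-ins for real citation exports.

```python
# Aggregate citations across repeated runs of the same query to find
# the platform's "favorite" retrieval sources. Example data only.
from collections import Counter

runs = [
    ["yelp.com", "bbb.org", "angi.com", "expertise.com"],
    ["bbb.org", "yelp.com", "angi.com"],
    ["yelp.com", "bbb.org", "reddit.com"],
    ["angi.com", "bbb.org", "yelp.com", "thumbtack.com"],
    ["yelp.com", "bbb.org", "angi.com"],
]

# Count each domain once per run so one citation-heavy run cannot skew things.
counts = Counter(domain for run in runs for domain in set(run))

# Domains cited in at least half the runs are the recurring sources to target.
favorites = [d for d, n in counts.most_common() if n >= len(runs) / 2]
print(favorites)
```

Counting once per run, rather than once per citation, is a deliberate choice: it measures how consistently a source appears, which matters more for retrieval than how many times it was cited in any single answer.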

10:28
Just looking at this briefly, it becomes very clear that Better Business Bureau, Yelp, and Angie are huge sources of retrieval, or at least they are being prioritized in the retrieval process, even when retrieval happens via search.

So you can go after those and ask: what does our Yelp listing look like right now? Do we have good reviews there? Are we actively working to acquire reviews? Have we optimized our profile? How are we ranking on Yelp itself?

How are we doing on Better Business Bureau? Do we have a listing on Angie, and so on.

You can reverse engineer from this point, but you have to do this multiple times to get a broad view of the opportunities.

This is very important when it comes to tracking. You cannot just run it once and think that is reality, because every single time someone searches this, they are going to get a different response.

You add in the unpredictability of the query itself and personalization.

11:15
I am currently in an incognito window here. If you throw in personalization, and the fact that queries are likely slightly different, maybe more context, less context, whatever it may be, now you have introduced all kinds of different variables that could change things even more.

This is a pretty controlled environment. So imagine what it looks like in the real world.

Any AI search tracking tool that claims to have this all figured out is, frankly, being a little silly at this point because it is very hard. This is part of the challenge. So just be careful with that one detail.

Like I said, you have to look across many different queries to truly understand what is going on.

Alright, let us move on to the news here from Google releasing Opal in more than 160 countries.

12:05
They have been testing this for a while, and it is basically their version of n8n or similar automation-based tools.

To be honest, I have not played with it much except for the example I am about to show you.

One of the things driving a lot of SEOs crazy is that Google is now pushing AI-generated content when, for the last couple of years, they have said auto-generated content is not necessarily something they approve of.

To be clear, Google has said that they are not against AI content or using AI to produce content. They are against mass production of low-quality content, whether it is AI-written or human-written. So there is a distinction there.

But still, I understand why people are annoyed by this.

12:56
Here is an interesting example. I used their blog post writer. They have a built in tool where you can use this blog post writer in Opal, and it already has a basic workflow built out.

I tested it and this was the outcome. I ran it through Rankability, and the topic was "best SEO tools for ChatGPT." This is the output from Google.

To be honest, it is pretty wacky. It is not something I would ever put on my website. It is weird, over the top, and not something I would be proud of.

But that is not even the most concerning part. The bigger problem is that this is just generic content that is not optimized for the core keyword at all.

Once again, a lot of people think they can just go to ChatGPT or Claude or one of these platforms and "do SEO." But you cannot do it that way.

14:00
Unfortunately, these platforms are not built for SEO. That is not what they are designed for. I have seen some people saying they can just replace SEO tools with ChatGPT, but that is simply not true.

There are a few distinct reasons, especially for the SEO content side.

Number one, the way Rankability works is that we are going out and pulling the top ranking competitors.

When you look at this Opal output, all Google is doing is general research about the topic before coming back and creating something. You can tell it is not doing it the way it needs to, because based on the overall optimization we can see that it is not pulling from the top-ranking competitors in Google.

When we look at Rankability, this is what we are pulling from. We are pulling all of the relevant topics from the top-ranking competitors for this query, typically the top 30. We have a bunch of weighting and other logic on the backend, but the point is we are pulling from the top-ranking competitors.

By doing that, we are modeling success.

15:30
We are not copying them. We are just modeling what they are doing in terms of covering topics. We want to cover similar topics.

That is done using NLP technology. Google is not doing that in Opal.

ChatGPT is not doing that either. It is not going and crawling the top 30 results and then coming back and synthesizing all those topics. That is a very serious process. These platforms just cannot do it by default.

To give you an example, I did the same exact query, "best SEO tools for ChatGPT," but I used Rankability's AI Writer.

You can see the difference. It came out with an 86 Rankability Score, which is a score that measures relevance and makes sure you are covering the most important topics.

The Opal version scored 12 out of the gate, versus 86 with Rankability.

This is not something that would go live yet. It still needs editing and refining, but in general, it is already significantly more relevant and designed to rank much better in search engines.

16:23
So I wanted to show that distinct difference. These generic tools make people feel like they have a unique advantage, but they actually do not.

The advantage you get is from using actual SEO tools that are designed for this, not from using a tool that just pumps out AI slop. That is not going to help you with SEO.

You still have to understand how to properly structure the content, make sure you are covering the appropriate topics, and follow a strategy. It is not just "generate content and slap it on your site." That has never been a good idea.

Next thing here is Google Chrome.

We know that Perplexity has Comet, and ChatGPT has its own browser as well, a kind of agentic browser.

So of course Google has already come out with their own, and you can go and use it yourself.

17:09
I will show you a quick example of how this works.

You can go to a webpage, like Rankability, and click on the Gemini extension in Chrome. You can see I have already done this.

On the current tab, I gave it a simple instruction: "Analyze this home page and look for opportunities to improve user experience."

Gemini is working in the background here, and it is able to see this page, which is fascinating.

I believe the underlying technology is called computer vision. That is what they are using to see what we see. Obviously it does not see it exactly the way we do, but computer vision is likely the technology behind this.

It is quite accurate, because it can actually pull real insights. It is probably also looking at the HTML to understand the structure of the page.

In general, it gave some pretty decent ideas, and I can see how this could become useful, even compared to Comet or ChatGPT's browser.

17:55
Once again, Google is the undefeated champion. Unfortunately for competitors, Google just has the resources, the technology, the foundation, and the experience to out-launch them. They are very hard to beat.

I do not know how any of these platforms are going to truly beat them, but I do think they can coexist. At least ChatGPT can coexist with Google.

So once again, something I will be playing with a lot is this kind of agentic browsing behavior, and I will share updates on how that goes.

Alright, next one here is from Reddit, and I wanted to highlight it because it is the last point for today.

This person said they spent seven thousand dollars on PR link building and they are not showing up in the LLMs.

This is very important. If you get anything out of this video or this podcast episode, this is it.

19:00
Traditional SEO practices do not always apply to driving brand visibility in AI platforms.

This is an example. Someone just thinks, "I am going to go buy links and that will help me get better brand visibility in ChatGPT." That is not how it works.

I am going to show you why.

Here is the traditional way of doing things. You get backlinks, you drive them to your domain. That could be pages, blog posts, whatever. You grow your authority by doing that. You grow your site authority.

That is one of the most powerful ranking factors: having a strong website from a link profile perspective. As a result, that helps you rank in traditional search engines.

That is the basis of why we build links. Then, from there, if you rank in search, you get traffic.

Maybe not as much traffic anymore because AI takes a lot of the clicks away, but prior to AI taking over, that is how it worked.

20:25
Now this is where things get different, because that is the traditional SEO process.

What we are looking at now is that when we introduce AI, we are ranking in search to increase our odds of being a source of retrieval in the AI platforms.

All the work you are doing to build your site authority is still useful to rank in traditional search. But it does not mean you will automatically be used as a source of retrieval. It just means you are increasing your odds.

We have found that, when retrieval is used, it can sometimes pull from the number one position, and sometimes from a page that is number 40. There is not a clear, simple pattern behind this.

We have seen that around 90 percent of the citations pulled come from websites ranking somewhere in the top 40, either on Google or Bing, depending on the platform.

So you have to rewire your mind. We are ranking in the traditional search indexes because that is the foundational layer for retrieval.

21:30
If you only focus on ranking your own domain, you are going to be in trouble, because retrieval is never going to rely on just one source.

What you want to think about is: how can we occupy as much SERP real estate in traditional search as possible? That increases our odds of being used in retrieval, with the intent of influencing the AI response.

From there, we have citations. These citations are simply what was used to help formulate the answer through retrieval. And then finally we have the AI answer.

The way people like this Reddit user think is that by buying a lot of links, they will be able to influence the AI directly. That is not how it works.

You will be able to make your domain more prominent as a possible source of retrieval if your page is ranking for the queries that are used in these AI platforms.

22:30
So if someone is searching "what are the best SEO tools for 2026" and you have a page that is ranking for "best SEO tools for 2026" in traditional search, that page can be used as a source of retrieval, which allows you to influence the AI answer.

But if you are not ranking for anything relevant, you are not going to be used for retrieval at all.

This is where it gets tricky.

The best thing to do now is to think less about "just build links" and more about doing what I showed earlier, where you run your queries across AI platforms and get a general idea of the types of websites that are being used in retrieval.

From there, those are your prospects. You should be spending all of your time and effort on those, because the AI platforms are literally telling you what they are using for retrieval.

You can skip some of the guesswork and go straight to the AI citations. See what the core platforms are and invest your time and effort there.

Please do not just buy links to buy links. That is not going to work for AI visibility.

23:25
The way these AI answers are formulated is through consensus.

If "Blue Shoes Inc." keeps showing up across all of these sources of retrieval, Blue Shoes Inc. is going to be the recommended brand. It is really not that complicated.

Once you see it, it is like when a magician reveals their tricks. You realize it is not that complicated.

It is a matter of consistently appearing in these sources of retrieval. Your brand has to show up.

So what you want to do is extract the citations from the AI answers you care about, then look at them and ask: "Is our brand here? Are we present?"

Make a priority list of the sites where your brand is not present and then attack those. Get your brand there.
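That gap analysis can be sketched as a simple prioritization step: take the sources the AI keeps citing, subtract the ones where your brand already appears, and sort the rest by how often they are cited. The counts and the "already present" set below are hypothetical examples.

```python
# Build a priority list of retrieval sources where the brand is missing,
# ordered by how often each source was cited. Example data only.
citation_counts = {
    "yelp.com": 9,        # times the domain appeared across sampled AI answers
    "bbb.org": 8,
    "angi.com": 6,
    "expertise.com": 2,
}
brand_present_on = {"yelp.com"}  # sources where our brand already shows up

# Most frequently cited gaps first: these are the highest-leverage targets.
priority = sorted(
    (d for d in citation_counts if d not in brand_present_on),
    key=lambda d: citation_counts[d],
    reverse=True,
)
print(priority)  # → ['bbb.org', 'angi.com', 'expertise.com']
```

Sorting by citation frequency mirrors the consensus idea: the sources an AI platform cites most often are the ones where presence is most likely to move the answer.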

It is that simple. Not easy, but simple as far as the process.

So I hope that helps.

And of course, as always, if you like this podcast, please drop a comment, like it, and please subscribe. Thank you.

Ready to Master AI Search?

Join Rankability Academy and learn how to rank in AI search engines like ChatGPT, Perplexity, and Google AI Overviews.

Explore the Academy