The Internet Marketing Driver


How To Find Lower-Quality Content Being Excluded From Indexing Using Bing’s XML Sitemap Coverage Report (and Its “Content Quality” Flag)

September 25, 2023 By Glenn Gabe

Finding lower-quality content via Bing's Sitemap Index Coverage Report in Bing Webmaster Tools


Bing finally rolled out its XML Sitemap Coverage Report in Bing Webmaster Tools, which is a great addition for site owners. Using the report, you can check indexing levels based on the urls being submitted via XML sitemaps. This is similar to what Google offers in its Coverage reporting, but it’s great to have another major search engine provide this data.

Hello “Content Quality” flag:
When I first dug into the reporting, I quickly started checking urls excluded from indexing across sites. Like Google, Bing provides a number of categories for urls being excluded, including noindexed, redirected, 404s, and more. But one of those categories really struck me – “Content Quality”. With “Quality” being the most important thing that site owners should focus on, understanding when a major search engine believes you have quality problems, and surfacing those specific urls, is pretty awesome.

Bing's "Content Quality" flag in the sitemap index coverage report


And once you click the “Content quality” category, you can view all of the urls from that sitemap that were flagged as having content quality issues:

Viewing urls flagged with "content quality" issues in Bing Webmaster Tools


Bing is not Google, but Bing is a major search engine: And will Google follow?
With major algorithm updates evaluating quality on several levels, having this information from Bing could potentially help site owners surface and improve lower-quality content. And with Google’s broad core updates, reviews updates, and now helpful content updates, digging into urls flagged as lower quality could help jumpstart a site owner’s analysis. Sure, Bing is not Google, but the content that Bing is surfacing in its Sitemap Index Coverage reporting could be a proxy for what Google also believes is lower-quality content. You don’t want to take that at face value, but it’s definitely worth investigating…

And maybe a bigger question is… will Google follow Bing here and provide a “Content Quality” category in its own Coverage reporting? I know Google has toyed with this idea in the past, but never officially rolled out a content quality category in Search Console. To be honest, I’m not sure that would ever happen, since it could reveal a bit too much of the secret sauce. It’s the same reason Google doesn’t want to provide too much link data.

I mean, imagine waking up one day and seeing this in Google Search Console. :)


Finding the Index Coverage reporting in Bing Webmaster Tools:
If you have at least 10K urls indexed in Bing, then you should be able to see the index coverage reporting for your site in the Sitemaps reporting. But based on what I’m seeing, a number of sites do not have that option. If you don’t see the option, then I would make sure you are submitting XML sitemaps in BWT or including a reference to them in your robots.txt file.

For example, here is a large-scale site with sitemaps in BWT, but the index coverage option isn’t available.


Maybe the Index Coverage reporting is still rolling out to more sites… I’ll reach out to Bing’s Fabrice Canel to see why those sites don’t have index coverage reporting and then update this post with more information.
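In the meantime, if you want to quickly confirm that your robots.txt actually references your XML sitemaps, here is a minimal sketch using only the Python standard library (Python 3.8+ for site_maps()). The domain below is just a placeholder, not a site from this post:

```python
# A quick check that a site's robots.txt references its XML sitemaps.
# Standard library only; requires Python 3.8+ for site_maps().
# "www.example.com" is a placeholder domain.
from urllib.robotparser import RobotFileParser

def list_sitemaps(host: str):
    """Return the Sitemap: directives declared in robots.txt, or None if there are none."""
    parser = RobotFileParser(f"https://{host}/robots.txt")
    parser.read()
    return parser.site_maps()

if __name__ == "__main__":
    sitemaps = list_sitemaps("www.example.com")
    if sitemaps:
        for sitemap_url in sitemaps:
            print(sitemap_url)
    else:
        print("No Sitemap: directives found in robots.txt")
```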

Reviewing content quality problems across sites: Were the urls actually low quality?
I was eager to investigate the “Content Quality” category across sites to see what types of content were surfaced there. So I dug in across several sites, and across several verticals. I’ll quickly cover what I found below.

First, although many of the urls were ones that I would consider lower-quality or thin, not all were. Do not take Bing’s word blindly… you definitely need to review the urls yourself. Some were exactly what I would surface as lower-quality, while others seemed ok for users (they were not great, but not terrible either)…

For example, I found the following types of lower-quality urls in the reporting across sites:

  • Short and unhelpful Q&A posts.
  • Thin press releases.
  • Thin and dated news articles.
  • Spider traps: thin pages leading to even more thin pages.
  • Ultra-thin business or organization listing pages.
  • Lower-quality content focused on sensitive categories (YMYL).
  • Thin video pages covered in ads.
  • Low-quality “reference” content.
  • Thin user profile pages.
  • Thin tutorials.

And more…

More Ways To Find Content Quality Problems in BWT:
After I tweeted about this the other day and thanked Fabrice Canel from Bing, he replied with an interesting note. Fabrice explained that the Index Coverage reporting wasn’t the only place you can surface content quality problems in Bing Webmaster Tools. He explained you can also see this when inspecting specific urls and via Site Explorer.

You are welcome @glenngabe. The same classifications are used in URL inspection, and in my favorite tool SEO Explorer. Here is a link to SEO Explorer filtered on content quality issues. https://t.co/NNgSX5U6Gn. Note: Data can be +/- 1 to 2 days not in sync between these tools.

— Fabrice Canel (@facan) September 22, 2023

When checking the link he provided, I noticed that Site Explorer was filtered by “URLs with other issues”. So it seems that category means the same thing as “Content Quality” in the Index Coverage reporting for sitemaps. In other words, it won’t say “Content Quality” in Site Explorer, but it means the same thing.

Finding quality problems in the Site Explorer feature in Bing Webmaster Tools


And when inspecting specific urls that were flagged as lower quality in the Sitemap Index Coverage reporting, I typically saw other categories appear for why the urls weren’t indexed. It did not say “Content Quality”. Fabrice did say the data might not be in sync and there could be a 1-2 day lag there between the tools, but it’s worth noting.

For example, a url that was flagged as “Content quality” in the Sitemap Index Coverage reporting actually yielded “Discovered but not crawled” when inspecting that url. That category can signal quality problems too, but it doesn’t say “Content quality”.

Cross-referencing the url inspection tool in Bing Webmaster Tools for urls that are flagged as low quality.


Summary – “Content Quality” is being flagged by a major search engine. Dig in there…
Again, I was pretty excited to see Bing Webmaster Tools provide a flag for content quality. With so much emphasis on “quality” from the major search engines, it’s great to dig in and analyze urls being surfaced as having quality issues. The reporting will never be perfect, and I would not blindly act on what’s being surfaced there, but it’s a nice starting point for site owners trying to understand content quality issues across their sites. I highly recommend digging in there. :)

GG

Filed Under: bing, google, seo, tools

Why Noindexing Syndicated Content Is The Way – Tracking 3K syndicated news articles to determine the impact on indexing, ranking, and traffic across Google surfaces [Case Study]

August 4, 2023 By Glenn Gabe

Syndicated Content SEO Case Study

Last month John Shehata from NewzDash published a blog post documenting a study covering the impact of syndication on news publishers. For example, when a publisher syndicates articles to syndication partners, which site ranks, and what does that look like across Google surfaces (Search, Google News, etc.)?

The results confirmed what many have seen in the SERPs over time while working at, or helping, news publishers. Google can often rank the syndication partner versus the original source, even when the syndicated content on partner sites is correctly canonicalized to the original source.

And as a reminder, Google updated its documentation about canonicalization in May of 2023 and revised its recommendation for syndicated content. Google now fully recommends that syndication partners noindex news publisher content if the publisher doesn’t want to compete with that syndication partner in Search. Google explained that rel canonical isn’t sufficient since the original page and the page located on the syndication partner website can often be different (when you take the entire page into account including the boilerplate, other supplementary content, etc.) Therefore, Google’s systems can presumably have a hard time determining that it’s the same article being syndicated and then rank the wrong version, or even both. More on that situation soon when I cover the case study…

Google canonicalization help document with syndicated content recommendations.

And here is information from Google’s documentation for news publishers about avoiding duplication problems in Google News with syndicated content:

Previously, Google said you could use rel canonical pointing to the original source, while also providing a link back to the original source, which should have helped its systems determine the canonical url (and original source). And to be fair to Google, they also explained in the past that you could noindex the content to avoid problems. But as anyone working with news publishers understands, getting syndication partners to noindex that content is a tough ask. I won’t bog down this post by covering that topic, but most syndication partners actually want to rank for the content (so they are unlikely to noindex the syndicated content they are consuming).

Your conversations with them might look like this:

Syndication partners ignoring site owners.

The Case Study: A clear example of news publisher syndication problems.
OK, so we know Google recommends noindexing content on the syndication partner website and to avoid using rel canonical as a solution. But what does all of this actually look like in the SERPs? How bad is the situation when the content isn’t noindexed? And does it impact all Google surfaces like Search, Top Stories, the News tab in Search, Google News, and Discover?

Well, I decided to dig in for a client that heavily syndicates content to partner websites. They have been syndicating for a long time, but never really understood the true impact. After I sent along the study from NewzDash, we had a call with several people from across the organization. It was clear everyone wanted to know how much visibility they were losing by syndicating content, where they were losing that visibility, if that’s also impacting indexing of content, and more. So as a first step, I decided to craft a system to start capturing data that could help identify potential syndication problems. I’ll cover that next.

The Test: Checking 3K recently published urls that are also being syndicated to partners.
I took a step back and began mapping out a system for tracking the syndication situation the best I could based on Google’s APIs (including the Search Console API and the URL Inspection API). My goal was to understand how Google was handling the latest three thousand urls published from a visibility standpoint, indexing standpoint, and performance standpoint across Google surfaces (Search, Top Stories, the News tab in Search, and Discover).

Here is the system I mapped out (with a rough code sketch of the API steps after the list):

  • Export the latest three thousand urls based on the Google News sitemap.
  • Run the urls through the URL Inspection API to check indexing in bulk (to identify any type of indexing issue, like Google choosing the syndication partner as the canonical versus the original source). If the pages weren’t indexed, then they clearly wouldn’t rank…
  • Then check performance data for each URL in bulk via the Search Console API. That included data for Search, the News tab in Search, Google News, and Discover.
  • Based on that data, identify indexed urls with no performance data (or very little) as candidates for syndication problems. If the urls had no impressions or clicks, then maybe a syndication partner was ranking versus my client.
  • Spot-check the SERPs to see how Google was handling the urls from a ranking perspective across surfaces.
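And here is that rough sketch of how the API portion could be scripted. It assumes the google-api-python-client package and OAuth credentials (creds) already set up for the Search Console API; SITE_URL and urls.txt are placeholders for your verified GSC property and the urls exported from the news sitemap, and daily quotas apply to the URL Inspection API, so you would want to batch the calls and handle quota errors for three thousand urls:

```python
# Rough sketch of the workflow above: check indexing and pull performance
# data for a list of article urls via Google's Search Console API.
# SITE_URL, urls.txt, and the dates are placeholders.
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"  # your verified GSC property

def inspect_url(service, url):
    """URL Inspection API: how is Google indexing this url, and which canonical did it pick?"""
    body = {"inspectionUrl": url, "siteUrl": SITE_URL}
    result = service.urlInspection().index().inspect(body=body).execute()
    status = result["inspectionResult"]["indexStatusResult"]
    return status.get("coverageState"), status.get("googleCanonical")

def query_performance(service, url, start_date, end_date, search_type="web"):
    """Search Analytics API: clicks and impressions for one url on one surface."""
    body = {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["page"],
        "type": search_type,  # "web", "news", or "discover"
        "dimensionFilterGroups": [{
            "filters": [{"dimension": "page", "operator": "equals", "expression": url}]
        }],
    }
    response = service.searchanalytics().query(siteUrl=SITE_URL, body=body).execute()
    rows = response.get("rows", [])
    return rows[0] if rows else None  # None = no impressions or clicks recorded

def main(creds):
    service = build("searchconsole", "v1", credentials=creds)
    with open("urls.txt") as f:  # the latest urls exported from the news sitemap
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        coverage, google_canonical = inspect_url(service, url)
        performance = {
            surface: query_performance(service, url, "2023-07-01", "2023-07-31", surface)
            for surface in ("web", "news", "discover")
        }
        # Indexed urls with no performance data are candidates for syndication problems.
        print(url, coverage, google_canonical, performance)
```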

No Rhyme or Reason: What I found disturbed me even more than I thought it would.
First was the indexing check across three thousand urls, which went very well. Almost all of the urls were indexed by Google. And there were no examples of Google incorrectly choosing syndication partners as the canonical. That was great and surprised me a bit. I thought I would see that for at least some of the urls.

Indexing check across recent news articles.

Next, I exported performance data in bulk for the latest three thousand urls. Once exported, I was able to isolate urls with very little, or no, performance data across surfaces. These were great candidates for potential syndication problems, i.e., if the content yielded no impressions or clicks, then maybe a syndication partner was ranking versus my client.

GSC performance data across recent news articles.
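Once that export is sitting in a CSV, a few lines of pandas can isolate the candidates. The column names below (url, coverage_state, clicks, impressions) are hypothetical and depend on how you structure your own export:

```python
# Isolate indexed urls with little or no performance data as candidates
# for syndication problems. Assumes pandas and a CSV with hypothetical
# column names (url, coverage_state, clicks, impressions).
import pandas as pd

df = pd.read_csv("gsc_performance_export.csv")

# coverage_state is assumed to hold the URL Inspection API value,
# e.g. "Submitted and indexed" or "Indexed, not submitted in sitemap".
indexed = df["coverage_state"].isin(
    ["Submitted and indexed", "Indexed, not submitted in sitemap"]
)
candidates = df[
    indexed
    & (df["impressions"].fillna(0) <= 10)  # very little or no visibility
    & (df["clicks"].fillna(0) == 0)
]
candidates.to_csv("syndication_candidates.csv", index=False)
print(f"{len(candidates)} candidate urls to spot-check in the SERPs")
```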

And then I started spot-checking the SERPs. After checking a number of queries based on the list of urls that were flagged, there was no rhyme or reason why Google was surfacing my client’s urls versus the syndication partners (or vice versa). And to complicate things even more, sometimes both urls ranked in Top Stories, Search, etc. And then there were times one ranked in Top Stories while the other ranked in Search. And the same went for the News tab in Search and Google News. It was a mess…

I’ll provide a quick example below so you can see the syndication mess. Note, I had to blur the SERPs heavily in the following screenshots, but I wanted to provide an example of what I found. Again, there was no rhyme or reason why this was happening. Based on this example, and what I saw across other examples I checked, I can understand why Google is saying to noindex the urls downstream on syndication partners. If not, any of this could happen.

First, here is an example of Yahoo Finance ranking in Top Stories while the original ranks in Search right below it:

Syndication partner ranking in Top Stories while the original source ranks in Search.

Next, Yahoo News ranks twice in the News tab in Search (which is an important surface for my client), while the original source is nowhere to be found. And my client’s logo is shown for the syndicated content. How nice…

Syndication partner ranking twice in the News tab of Search over the original source.

And then in Google News, the original source ranks and syndication partners are nowhere to be found:

The original source ranking in Google News over syndication partners.

As you can see, the situation is a mess… and good luck trying to track this on a regular basis. And the lost visibility across thousands of pages per month could really add up… It’s hard to determine the exact number of lost impressions and clicks, but it can be huge for large news publishers.

Discover: The Personalized Black Hole
And regarding Discover, it’s tough to track lost visibility there since the feed is personalized and you can’t possibly see what every other person is seeing in their own feed. But you might find examples in the wild of syndication partners ranking there versus your own content. Below is an example I found recently of Yahoo Finance ranking in Discover for an Insider Monkey article. Note, Insider Monkey is not a client and not the site I’m covering in the case study, but it’s a good example of what can happen in Discover. And if this is happening a lot, the site could be losing a ton of traffic…

Here is Yahoo Finance ranking in Discover:

Syndicated content ranking over the original source in Google Discover.

And here is the original article on Insider Monkey (but it’s in a slideshow format). This example shows how Google can see the pages are different, which can cause problems understanding that they are the same article:

Original article that is being syndicated to Yahoo Finance.

And here is Yahoo Finance ranking #2 for the target keyword in the core SERPs. So the syndication partner is ranking above the original in the search results:

Syndication partner outranking the original source in Search.


Key points and recommendations for news publishers dealing with syndication problems:

  • First, try to understand indexing and visibility problems the best you can. Use an approach like I mapped out to at least get a feel for how bad the problem is. Google’s APIs are your friends here and you can bulk process many urls in a short period of time.
  • Weigh the risks and benefits of syndicating content to partners. Is the additional visibility across partners worth losing visibility in Search, Top Stories, the News tab in Search, Google News and Discover? Remember, this could mean a loss of powerful links as well… For example, if the syndication partner ranks, and other sites link to those articles, you are losing those links.
  • If needed, talk with syndication partners about potentially noindexing the syndicated content. This will probably NOT go well… Again, they often want to rank to get that traffic. But you never know… some might be ok with noindexing the urls (a quick way to verify that is sketched after this list).
  • Understand Discover is tough to track, so you might be losing more traffic there than you think (and maybe a lot). You might catch some syndication problems there in the wild, but you cannot simply go there and find syndication issues easily (like you can with Search, Top Stories, the News tab, and Google News).
  • Tools like Semrush and NewzDash can help fill the gaps from a rank tracking perspective. And NewzDash focuses on news publishers, so that could be a valuable tool in your tracking arsenal. Semrush could help with Search and Top Stories. Again, try to get a solid feel for visibility problems due to syndicating content.
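And if a partner does agree to noindex the syndicated urls, here is a minimal sketch for spot-checking that the directive is actually in place, via the X-Robots-Tag header or a robots meta tag. It assumes the requests library, and the partner urls below are placeholders:

```python
# Spot-check whether syndicated articles on a partner site return a
# noindex directive (X-Robots-Tag header or robots meta tag).
# Rough check only, not a full HTML parser.
import re
import requests

def has_noindex(url: str) -> bool:
    """Return True if the url signals noindex via header or meta tag."""
    response = requests.get(url, timeout=10)
    # X-Robots-Tag HTTP header (e.g. "noindex, nofollow").
    if "noindex" in response.headers.get("X-Robots-Tag", "").lower():
        return True
    # Robots meta tag in the HTML.
    for tag in re.findall(r"<meta[^>]*>", response.text.lower()):
        if "robots" in tag and "noindex" in tag:
            return True
    return False

if __name__ == "__main__":
    partner_urls = [
        "https://partner.example.com/syndicated-article-1/",
        "https://partner.example.com/syndicated-article-2/",
    ]
    for url in partner_urls:
        print(url, "noindex" if has_noindex(url) else "indexable")
```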

Summary – Syndication problems for news publishers might be worse than you think.
If you are syndicating content, then I recommend trying to get an understanding of what’s going on in the SERPs (and across Google surfaces). And then form a plan of attack for dealing with the situation. That might include keeping things as-is, or it might drive changes to your syndication strategy. But the first step is gaining some visibility of the situation (pun intended). Good luck.

GG

Filed Under: google, seo, tools

Disavowing The Disavow Tool [Case Study] – How a site owner finally removed a disavow file with 15K+ domains, stopped continually disavowing links, and then surged back from the dead

June 15, 2023 By Glenn Gabe

Google Disavow Case Study

There aren’t many topics in SEO as controversial as disavowing links. Ever since Google introduced the ability to disavow links, there has been a ton of confusion about how to use the disavow tool, what types of links should be disavowed, when to ignore the disavow tool altogether, and more. Then add third-party tools that evaluate links for site owners and flag some as “toxic”, and you have a dangerous recipe of confusion and fear. And that fear makes it easy for some site owners to spend a lot of time continually disavowing links and spinning their wheels, with no way to know if it’s actually helping them.

I cannot tell you how many companies have contacted me over the years explaining they take time every week or month to review their latest links via third-party tools and determine what to disavow. They are deathly afraid of some type of negative algorithmic action Google will take based on random, spammy links showing up in their link profiles. The fact of the matter is that every site builds random, junky, spammy links over time. It’s not unusual to see those random links show up in a link profile. Google has explained this as well.

Here is just one of John Mueller’s tweets where he explains this. I’ll cover more Googler comments about the disavow tool soon.

We already ignore links from sites like that, where there are unlikely to be natural links. No need to disavow :)

— John Mueller (official) · #StaplerLife (@JohnMu) December 2, 2019

It’s also important to note that many of those companies reaching out to me have never bought links, participated in any link schemes, etc. After I explain more about the disavow tool, why it was created, and when Google actually recommends using it (hint: not often), my calls with those site owners often go eerily silent.

Note, I am NOT referring to sites that have set up unnatural links in the past, had manual actions for unnatural links, participated in link schemes, etc. If you have, then you should take care of that situation, which could involve using the disavow tool. For example, you should have those links removed, nofollowed, and if you can’t do that, then you can use the disavow tool. But the reason you would be doing this is to avoid a manual action based on setting up unnatural links and not because of some boogeyman algorithm that’s going to downgrade your site.

I have also been extremely vocal that I believe Google could remove the disavow tool from Search Console altogether, and maybe even this year. With SpamBrain now neutralizing unnatural links (as of the December 2022 Link Spam Update), I can’t imagine Google will need to provide a disavow tool for long (at least one that’s available to use any time you want). That’s just my opinion, but it does make a lot of sense. Here’s a video I recorded with Barry Schwartz where we cover the disavow tool and how it could eventually go away (at 11:46 in the video):

Nuking A (Large) Disavow File: A Case Study
I am writing this post because I have a great case study to share. And this case follows a number of other companies I have helped that decided to nuke their disavow files after understanding how the tool is supposed to be used. Those companies haven’t seen any negative impact from nuking their disavow files. And for the case study I’m going to share today, you’ll see how a site completely removed a large disavow file with 15K+ domains in it and actually surged back from the dead after being down due to a terrible migration.

And I’m not saying the site surged back due to removing the disavow file. I’m simply saying the file didn’t matter at all. They surged back AFTER removing that giant disavow file filled with random, junky, spammy domains.

Note, this is a blinded case study, since everyone and their mother would over-analyze the situation if I revealed the site. But it’s too good of an example to sit in the G-Squared Interactive archives, locked away for only me to see. Everyone in the SEO community should read the case, speak with their clients about disavowing, and determine the best way forward. And for most sites (99.99%), that’s probably removing their disavow file and stopping the continual disavowing of links that don’t need to be disavowed. Again, that’s unless the site actively built unnatural links, participated in link schemes, had a manual action in the past, or thinks it could get one based on its own unnatural link building.

The Disavow Tool Is Buried In GSC:
I have been saying for a long time that most site owners never need to touch the disavow tool. From Google’s move to devalue unnatural links with Penguin 4 to SpamBrain now neutralizing unnatural links, Google is not penalizing random, junky, spammy links. Instead, it’s just ignoring them. Manual actions for unnatural links have dropped off a cliff over the past several years, which makes complete sense based on Google neutralizing random spammy links like I just mentioned.

Unless you actively set up unnatural links to try and game Google’s algorithms, then you should never, ever have to touch the disavow tool. That is one reason it’s literally buried in the GSC user interface. Seriously, try and go find it. It will probably take you a few minutes since you can’t access the disavow tool from any of the menus in GSC.

And think about it, if Google really wanted you to use the disavow tool all the time, why in the world would they bury it in Search Console? You can’t even search the help documentation from GSC anymore to quickly find it. You have to go to the help docs, search for the disavow tool, and then scroll way down the page to find the text link to the tool. Again, it’s buried.

Disavow Tool in Google's help documentation

Googler Comments About The Disavow Tool:
In addition, there have been some great quotes from Googlers about the disavow tool over the years. From Gary Illyes explaining that many sites end up hurting their efforts by using the disavow tool to John Mueller’s consistent messaging that most site owners should never use the tool, it’s surprising that some site owners still disavow links on a regular basis.

Here are some of Gary’s comments about the disavow tool (@methode is Gary’s Twitter handle). These are people quoting Gary based on Pubcon presentations, Q&A sessions Gary has done, etc.

Disavowing links are just for comfort. Probably stupid to disavow links: @methode @Pubcon #pubcon

— Clark Taylor (@clarktaylor) February 27, 2023

Discussing disavow files @methode said it most likley isn't doing anything.

He clearly stated that the number of sites who shot themselves in their foot with these is higher than the number of sites he thought would of benefitted from a disavow file. #Pubcon

— Joe Youngblood (@YoungbloodJoe) February 27, 2023

"If you do not have a manual action then you do not need to submit a disavow!" @methode #Pubcon Great questions by @jimboykin

— Brian McDowell (@brian_mcdowell) November 8, 2017

And here is a great quote that always stood out to me. Gary even said that if it was up to him, he would remove the tool. Yes, a Googler saying he would remove the disavow tool…

Q: How often do site owner disavow links that hurt them.
A: It's often enough that if it were me I'd remove the disavow tool. If you don't know what you are doing you can shoot yourself in the foot.@methode @jenstar #Pubcon

— Marie Haynes (@Marie_Haynes) October 10, 2019

And here is a quote from Gary about not being afraid of sites you don’t know and how he trusts the Google filters:

Gary Illyes on using the disavow tool.

Moving on, here are several tweets from Google’s John Mueller (there are more, but I can’t provide all of them here):

To be honest, anyone who does not know, should *not* use it. That's why the tool is not a part of the search console UI. That's why our messaging has been consistently to not use it unless you know there's an actual issue. To paraphrase: When in doubt, leave disavow out.

— John Mueller (official) · #StaplerLife (@JohnMu) March 8, 2023

You don't need to disavow random spammy links like that.

— John Mueller (official) · #StaplerLife (@JohnMu) December 15, 2022

We already ignore links from sites like that, where there are unlikely to be natural links. No need to disavow :)

— John Mueller (official) · #StaplerLife (@JohnMu) December 2, 2019

That seems like a terrible idea. (also, none of those metrics are things Google cares about, as any SEO tool will tell you… hopefully)

— John Mueller (official) · #StaplerLife (@JohnMu) May 3, 2023

Even the disavow tool itself explains you should not use the tool unless you have a manual action, or you think you could get one. Google’s messaging there is pretty aggressive and scary. Let’s face it, they don’t want site owners using the tool for random, spammy links that show up in a link profile.

Disavow Tool messaging

But not everybody thinks that messaging is clear, including the site owner I helped. So, I asked the site owner if he wanted to provide a quote about the confusion he initially had about disavowing based on Google’s messaging in the tool and in the documentation. Like many site owners, he would like to see clearer messaging from Google about when to use the tool and when it’s necessary.

So here we go, in the site owner’s own words:

“Google documents about how, and when, to use the disavow tool are too open to interpretation, especially in situations after an (algorithm) update or when your rankings rapidly decline out of the blue. I hope others that read this case study will not interpret Google’s messaging like I did, and instead look for all other possible outlets before deciding to use the disavow tool. I very much hope this tool gets nuked!”

With that out of the way, I’ll cover the case study.

The Case: Down And Out, And Disavowing In Circles:
In the fall of 2022 I received a dire email from a site owner. The site changed domain names and it went very, very wrong. It really wasn’t their fault in my opinion… they were an edge case (which can happen with any migration). I’ve seen this a number of times over the years unfortunately, which is why I tell most site owners to NEVER change domain names unless it’s absolutely necessary. They were down about 70-80% and not coming back.

Here is trending when blending GSC click data from both domains via Looker Studio:

Google Search Console clicks trending after a botched domain name change.

After digging in a bit, I noticed they were disavowing links. Many of them. So I asked if they ever set up unnatural links, if they ever had a manual action in the past, if they participated in any link schemes, etc. The site owner emphatically explained they NEVER set up any unnatural links. They just noticed many random, junky links in their link profile and they were scared those links would negatively impact rankings.

So, I explained that most sites end up with random and junky links like that and there was really nothing to worry about on that front. But the site owner had read many posts explaining how dangerous those links were, how they could drag your site down over time, and that actively disavowing links was the way to go. When they contacted me, they had 15K+ domains in their disavow file.

A huge disavow file with 15K+ domains.

After sending the site owner quote after quote from Googlers about the disavow file, they started to come around. They believed me, but were deathly afraid to nuke their disavow file. So they removed it for a bit. Nothing changed at all, but again, they were down about 70-80% from the domain name change that went sideways. So when nothing changed, they added the disavow file back out of fear that removing it would keep them down.

See how that works? The fear of some “boogeyman” algorithmic action led them to continually include a disavow file filled with random, junky domains. And they continued to spend time analyzing their link profile on a regular basis, and adding more and more domains to the disavow file over time. It was a maddening spiral of disavowing links. And I was determined to get the site owner out of the death spiral.

By the way, checking a number of the domains and links revealed some weren’t even indexed. And if the pages aren’t indexed, the links can’t hurt you anyway. I sent this information to the site owner as well.

Unnatural link not even indexed.

Business-wise, the site owner had to lay off most of his employees based on the domain name situation. It was sad to see… especially since if they had contacted me before the domain name change, I would have told them NOT to change domain names unless absolutely needed. For their situation, it wasn’t absolutely needed. It was a nice-to-have thing they wanted to do. Bad move, and they were paying a heavy price.

After analyzing the situation, and having seen edge case migrations like this before, I truly believed they needed a major algorithm update to roll out, which could bring a site quality re-evaluation. For example, a broad core update or reviews update could possibly help them surge back from the dead.

The December 2022 helpful content update (HCU) and product reviews update (PRU) rolled out and the site didn’t surge back, but I still had hope. I told them to sit tight and let’s see if another update could help them out… Then the March 2023 broad core update rolled out, and they still didn’t recover… But again, as of that time, the disavow file with 15K+ domains was still being used…

Pulling The Band-Aid Off. FINALLY Nuking The Disavow File:
The site owner was strong throughout this entire situation. He listened to my guidance, continued to improve the site the best he could with reduced staff, and had faith things could come back at some point. And as a last-ditch effort, they decided to completely remove the disavow file in late March. I was thrilled they made this decision. It was a long time coming…

So the disavow file containing 15K+ domains was removed. Poof, it was gone. Now the site owner needed to continue to drive forward with the site, forget about disavowing, and just focus on improving the site as much as possible quality-wise.

Disavow file removed from Search Console.

And Along Came The April 2023 Reviews Update (With A BIG Tremor):
On April 12, 2023 Google rolled out the reviews update (RU), a major algorithm update that could impact any site with reviews or recommendations content. Google’s product reviews update (PRU) evolved into the reviews update, which now evaluates more than just product reviews. It made the earth shake for many sites… and it came packed with a powerful tremor on 4/19. With that tremor, it looked to me like Google refreshed a site-level quality algorithm (or several).

Well, a funny thing happened with the site I’m covering in this case study. It began to surge with that tremor. And I mean REALLY surge. Rankings started coming back big-time for the site. And it was their most powerful queries returning from the dead. The site jumped 5, 10, and 20+ rankings for key terms. It was amazing to see.

The site is now up 140% based on the April reviews update, and that’s without a single disavowed link. And that’s down from 15K+ disavowed domains in the past. It’s a great example of why I believe a disavow file is NOT necessary for 99.99% of sites. They clearly didn’t need to be disavowing those links…

Surge in clicks based on the April reviews update tremor.

Here is search visibility surging for the site based on that tremor (Sistrix data):

Surge in visibility based on the April reviews update tremor.

And here is a snapshot of the site’s rankings surging with the April reviews update tremor. Over 31K keywords increased in position that now rank in the top 10. Some of those keywords weren’t even ranking in the top 100 before the April reviews update tremor:

Surge in rankings based on the April reviews update tremor.

And again, I’m not saying they surged back due to removing the disavow file. I’m just making the point that the disavow file wasn’t doing anything (in my opinion). They surged back from the dead without the file in place.

The site owner has been through so much with the drop based on the domain name change debacle, weird volatility over time that never turned out well in the long-term, etc., that they are still fearful this won’t last. We are now almost two full months out from the surge, and they still don’t feel comfortable. And I get it. When you’re an edge case, the battle scars remain. That said, it’s great to see the site back doing so well. Let’s hope things continue that way.

Key Points About Disavowing Links For Site Owners:
I’ll end this post with some key points for site owners that are actively disavowing links or thinking about disavowing. This is based on my experience helping many companies over time:

  • In my opinion most sites, and I mean 99.99%, don’t ever need to use the disavow tool.
  • The disavow tool is buried in the GSC UI for a reason. As Google would explain, that’s by design.
  • Several Googlers have explained that most site owners never need to use the disavow file. And that it should only be used if you have a manual action, or think you could get one based on buying links, participating in link schemes, etc. It’s not for random, spammy links that show up in your link profile.
  • Do not just start disavowing random, junky, spammy links that show up. Most sites have those types of links and Google is very good at ignoring them.
  • Do not simply look at third-party tools that flag links as “toxic” and think you need to move quickly to disavow those links. Google has addressed that as well, and said it’s a terrible idea to disavow links purely based on what some third-party tool claims is “toxic”. Sure, you can go analyze those links, but if you haven’t set up unnatural links in the past, then there’s no reason to worry about that.
  • When to use the disavow file: Now, if you did buy links, or participate in some type of link scheme, then you should look to remove those links or nofollow them. And if you can’t for some reason, then it’s fine to disavow them. Again, do not just start disavowing random, junky, spammy links you see showing up in a link profile. If you truly weren’t involved in setting those up, just ignore them and move on.

Summary – For this site owner, nuking a giant disavow file and stopping the disavow madness was the way forward.
I hope this case study helped you learn more about the disavow tool, Google’s advice about using it, and why most site owners never need to use it. Google has gotten very, very good at simply ignoring random, spammy links on the web and there’s no reason to start disavowing those links. So, if you haven’t set up unnatural links, paid for links, or participated in some type of link scheme, then you should step away from the disavow tool. Just continue to improve your site the best you can and avoid over-analyzing links. For the site owner I covered in this case study, that was the path forward.

GG

Filed Under: google, seo, tools

Unraveling SERP Features – How to track and analyze urls ranking within Top Stories, People Also Ask, Image Packs, Short Videos, Recipe Packs, and more using Semrush’s New SERP Features reporting

April 17, 2023 By Glenn Gabe

Semrush SERP features upgrade.

Google’s search engine result pages are filled with an amazing combination of features to present many different types of content. From images to video to news to recipes, there’s no shortage of interesting features at your fingertips. And for site owners and SEOs, we want to track all of them, and at a granular level.

Although Google Search Console (GSC) provides a wealth of information, it can fall short when it comes to SERP feature data. Two important examples of that include the lack of tracking of Top Stories and Featured Snippets in GSC. They are two incredibly important search features that unfortunately do not have their own filtering options in GSC. And beyond just filtering of a specific feature, there are many search features that contain a carousel or list of urls, which would be incredible to unravel for analysis purposes.

But that’s where third-party tools come in handy. Their ability to track SERP features across many sites provides site owners and SEOs powerful information about how their content, and competitors’ content, are ranking in the search results (and across both desktop and mobile). That’s why I was super-excited to see the latest update from our friends at Semrush. They rolled out upgraded SERP features functionality that takes tracking the SERPs to a new level.

In this post, I’ll walk you through unraveling SERP feature rankings to reveal the urls and content ranking within them. We’ll pull apart People Also Ask (PAA), Top Stories, Video Packs, Image Packs, Recipe packs, and more. By the end, I’m confident you will be eager to analyze your own site (and your competitors).

Semrush now tracks a whopping 38 features, up from 24:
First, with the upgrade, Semrush now tracks a whopping 38 SERP features. That’s up from 24 previously tracked. Some of the new additions include powerful features like Top Stories, Short Videos, Twitter Carousels, Recipes, Popular Products, Web Stories, and more.

I just wanted to point this out since it underscores how dynamic the Google search results have become. Here is a quick visual from Semrush showing the SERP features tracked now:

Semrush tracks 38 SERP features

Understanding which SERP features are accessible per tab in Semrush:
With the SERP features addition, there seems to be some confusion about where to find the various SERP features within each tab in the Positions reporting (Organic versus SERP features). To help site owners and SEOs understand where to find each feature, Semrush published a helpful visual that breaks out each feature by tab.

The features highlighted in yellow can be found under the SERP features tab, the features highlighted in purple can be found under the Organic tab, and the features highlighted in green can be found under both tabs.

A SERP Features test drive – Traversing the reporting in Semrush:
I think the best way to show you the new SERP features reporting is to walk you through some examples. Let’s fire up Semrush and analyze espn.com and some specific queries.

In the overview tab of the Organic Research reporting, you can scroll down the main dashboard and you’ll see a new report titled “SERP Features Trend”. This is a global snapshot of the domain you are analyzing from a SERP feature standpoint.

You will see a stacked bar chart with trending by SERP feature. This can be done by day or month, depending on the date selection you have active. For example, the reporting launched on April 6, so you will see the SERP features the domain ranks for in the stacked bar chart (based on the number of keywords per feature). You can select or deselect specific SERP features, which are then reflected in the bar chart below. This enables you to view the changes for each SERP feature over time.

Here is the stacked bar chart for all SERP features for espn.com:

SERP features trending in Semrush

For example, espn.com ranks in People Also Ask for 108K+ queries:

A breakdown of SERP features ranking for a domain in Semrush

Moving to the Positions report – A new tab for SERP Features:
Next, you can select the Positions report to view all queries the site ranks for and their associated data. But you’ll notice two new tabs, Organic and SERP Features. If you click the SERP Features tab, you will filter the queries by keywords that rank in specific SERP features. That’s an awesome way to dig into SERP feature data.

The new SERP features tab in the positions report in Semrush

For each query, you will see icons for the SERP feature it ranks within. For example, below you can see queries that rank in People Also Ask (PAA), Top Stories, Video packs, Short Videos, and more. And remember, you can view this for both mobile and desktop (separately).

SERP features in the Positions reporting in Semrush

If you want to see the screenshot for the query where the site ranks in a SERP feature, click the SERP snapshot icon on the right side of the report. Boom, you can see what was captured.

The SERP snapshot icon in Semrush for viewing a captured search engine result page

If you click a specific SERP feature icon, then the Positions reporting will be filtered by just that feature. It’s yet another way to slice and dice SERP feature data. You can see below that the reporting is filtered by queries where espn.com ranks in Top Stories.

Filtering the Positions reporting by a specific SERP feature

Jumping to the Keyword Overview reporting and unraveling urls ranking within each feature:
This is probably my favorite new feature. Not only can you see that a site ranks within a specific SERP feature for a query, but you can reveal all of the urls that rank within that feature. For example, you might see a query where the site ranks in Top Stories, but wouldn’t it be great to see all of the urls ranking in that Top Stories module? You can do that now via Semrush via the Keyword Overview report.

For example, I clicked the query “New York Yankees” in the Positions reporting, which opens the Keyword Overview report. You can scroll down to the SERP Analysis module to see the urls ranking in the top 100 listings for that query. You will notice some new SERP features in the list ranking in the top ten. For example, you will see a Twitter carousel, Top Stories, and a Knowledge Panel for this query.

The Keyword Overview report in Semrush with SERP features listed.

That’s cool, but that’s not all you can do here. You can actually click the arrow icon next to each feature to reveal the urls ranking within the feature! Yes, you can unravel SERP features to quickly view which urls and content are ranking there. That’s awesome.

Unraveling SERP features in the Keyword Overview reporting in Semrush to reveal urls ranking within each feature

And here I’m unraveling the Twitter carousel to view the tweets and PAA to see the urls ranking in the default PAA for the query:

Viewing the urls ranking in a Twitter carousel and People Also Ask (PAA) in the Keyword Overview report in Semrush

As another example, I entered “Yankee Stadium” in the tool and checked the top ten. And wow, check out all of the SERP features ranking.

There’s a Twitter carousel, Knowledge Panel, Image pack, urls with FAQ snippets, and more. And again, you can unravel some of those SERP features to see the urls ranking within them.

The Keyword Overview report for the query Yankee Stadium revealing all of the SERP features ranking for the query

And if I switch to mobile, you can see a Short Videos feature in the list. Click the arrow to reveal the urls of each video ranking in the feature. Again, it’s awesome to be able to quickly do this via Semrush.

Viewing the urls ranking in the Short Videos feature in the Keyword Overview report in Semrush

Here is what the Short Videos feature looks like (typically containing four short videos). With Semrush, you can now see each of the videos ranking within the feature:

The Short Videos SERP feature in Google

A note about image packs:
One issue I have with the new functionality is that urls ranking in image packs aren’t collapsed behind the arrow like other SERP features are. Each image is listed separately versus being in one SERP feature that you can expand. I would much rather know that an image is contained in the pack and view them all together. Right now, you might see six images taking up the top ten, but in reality, they are part of one SERP feature. I’ll send that feedback to Semrush to see what they think.

For example, these urls are in one SERP feature, but listed separately:

Image packs ranking in the Keyword Overview report in Semrush

Here is what an image pack looks like for the query in the mobile SERPs:

An image pack in the Google mobile search results.

Exporting SERP Features
It’s important to note that you can also export the SERP features data per query. For example, exporting the Keyword Overview report for “New York Yankees” provides the urls ranking within each SERP feature. Below, you can see the urls ranking in Top Stories, the Knowledge Panel, and PAA.

Exporting SERP features in Semrush

Another example of unraveling SERP Features – Recipes
As another quick example, let’s unravel a recipes carousel to reveal the urls and content ranking in the feature. I entered epicurious.com in the Organic Research reporting, clicked the SERP features tab in the Positions reporting, and then clicked the query “stout cake recipe”, which I saw ranked in a recipes carousel.

Viewing recipe SERP feature data in Semrush

In the Keyword Overview report for the query, I can see a recipes carousel ranks first in the SERP. Clicking the arrow icon reveals the four recipes ranking there, including one url from epicurious.com.

Viewing urls ranking in a Recipes carousel in the Keyword Overview report in Semrush

Quickly checking the SERP snapshot or a live SERP yields the carousel and epicurious.com ranking in the feature. Again, it’s powerful to be able to view this data via a third-party tool (for any site and query):

A recipes carousel in the Google search results.

Summary – Identifying and Unraveling 38 SERP features via Semrush
I hope this post helped you learn more about the new SERP features functionality in Semrush. Again, the search results are filled with powerful features, so it’s important to understand when your content is ranking within those features. And now with Semrush, you can easily view the urls ranking within each feature (and across the competition). I recommend checking out the new SERP features reporting soon. I think you’ll dig it. Have fun.

GG

Filed Under: google, seo, tools

How to compare hourly sessions in Google Analytics 4 to track the impact from major Google algorithm updates (like broad core updates)

March 15, 2023 By Glenn Gabe

Hourly tracking in Google Analytics 4

I was just asked on Twitter if there was an easy way to compare Google organic traffic hourly like you can in Universal Analytics. That’s a great question, and that’s a super useful report to have as major algorithm updates roll out. You can typically start to see the separation over time as the update rolls out (if your site was heavily impacted by a major update like broad core updates, Product Review Updates, etc.)

So I fired up GA4 and created a quick exploration report for analyzing hourly traffic. Here is a short tutorial for creating the report (plus a rough API-based alternative after the steps):

1. Fire up GA4 and click the “Explore” tab in the left-side menu.

Explore tab in Google Analytics 4

2. Click the “Free Form” reporting option.

Free form exploration reporting in Google Analytics 4

3. Click the plus sign next to “Segments” to add a new session segment. Then create a segment for Google Organic by adding a new condition, selecting “Session source / medium” and then adding a filter for “google / organic”.

Creating a segment for Google Organic in Google Analytics 4
Selecting session source and medium and then filtering by Google Organic when creating a new segment in GA4

4. Add that segment to your reporting by dragging it to the “Segment Comparisons” section of the report.

Adding a segment to the reporting in Google Analytics 4

5. Set “Granularity” to Hour.

Selecting Hour as the granularity for the reporting in Google Analytics 4

6. Add a new metric and select “Sessions”. And then drag “Sessions” to “Values”.

Adding sessions as a metric in Google Analytics 4

7. Change the visualization to line chart by clicking the line chart icon.

Changing the visualization of the reporting to line graph in Google Analytics 4

8. For timeframe, select “Compare” and choose a day. Then choose the day to compare against. Note, GA4 isn’t letting me choose today (which is a common way to see how the current day compares to a previous day). So, you’ll have to just compare the previous day to another day. Sorry, I didn’t create GA4.

Comparing timeframes in Google Analytics 4

9. Name your report and enjoy comparing hourly sessions.

I hope you found this helpful, especially since the March 2023 broad core update is currently rolling out. Have fun. :)

GG

Filed Under: algorithm-updates, google, google-analytics, seo, tools, web-analytics
