The Internet Marketing Driver


Amazing Search Experiments and New SERP Features In Google Land (2022 Edition)

June 29, 2022 By Glenn Gabe

This post is based on my presentation at the Google SEO Meetup in New York City in June of 2022.

Google New York City Headquarters in June 2022

On Monday I had the opportunity to present at the Google SEO Meetup in New York City, which was set up by Google’s Daniel Waisberg. I had a great time speaking with a number of SEOs from various companies, as well as several Googlers, about the latest in Search. In addition, Google’s Danny Sullivan presented an “update on updates” and Barry Schwartz interviewed Lily Ray and Romain Damery from Amsive Digital about overcoming challenges with agency SEO. Both were great sessions.

And wedged in between those two sessions, I presented some of the latest Search experiments that Google is testing in the wild. I received a lot of positive feedback about the presentation, so I decided to write a post covering my slides. My presentation contains various tests and new SERP features that could signal what’s to come in the future for SEOs in Search and Discover.

Note, these aren’t the only tests we’ve seen run recently, but they are some of the most interesting (and recent) that I have come across! Let’s jump in.

If you want to jump to a certain feature or test, here’s a quick table of contents:

  • Multi-source featured snippets
  • Journeys in Chrome
  • Discover’s “More Recommendations”
  • Google’s Explore Wonderland
  • The Grid Treatment
  • “Things To Know”
  • Short Videos vs. Visual Stories
  • The Versatile Right-side Panel
  • Ad Label Tests

Multi-source Featured Snippets
Featured snippets can be very powerful for site owners. Any time Google provides a special SERP treatment that separates your listing from the pack, you can end up receiving a ton of clicks. Sure, some featured snippets can reveal the answer, but based on helping many different clients across verticals, I’ve found that featured snippets can drive massive amounts of traffic when the page, article, or post is covering a topic in detail.

But that’s when there’s only one listing featured. What if there were several sources featured?

That’s one of the tests we saw running recently. Google was replacing a single listing with multiple listings. There were up to five different sources in the featured snippet and the label of the SERP feature was “From the web”.

Multi-source featured snippets test in Google

On the one hand, this can be great for users who want to gain a more rounded answer. And it provides a greater opportunity for the other sites to gain more traffic from the query. But on the other hand, the site that used to be featured could get much less traffic. Time will tell if this rolls out, but it’s definitely a big heads-up if featured snippets are important for you.

Journeys In Chrome
This is not a test. It has rolled out already. Google rolled out Journeys in Chrome in early 2022, which can help you resume research of a given topic. While searching in Chrome, you might see “Resume your journey” show up in the omnibox. And when clicking that button, you are taken to the Journeys page in Chrome where you can see a number of helpful things.

You can find pages and sites you visited, you can view your previous searches, and Google provides related searches.

Journeys in Chrome

Journeys, meet the sidebar in Chrome
Even though Journeys has rolled out, Google is testing ways to get that feature more exposure. For example, Google is testing showing Journeys in the sidebar in Chrome. That would join your reading list and bookmarks. It seems Google is trying to give Journeys more visibility. You can see this test now in Chrome Canary.

Journeys in Chrome sidebar test

Discover’s “More Recommendations”
After seeing this next feature in action, I wrote an entire blog post about it. I think it’s a powerful feature that can be expanded in the future (beyond Discover). While browsing my Discover feed after researching products, I noticed a “More recommendations” button. After tapping that button, I was taken to a “Task Dashboard” in Discover that provided a ton of content related to the products I was researching.

For example, there were recently viewed pages I had visited, suggested articles, videos, similar products, and even the ability to compare those products. And when tapping multiple products using the compare feature, I was taken to a fresh search results page with information comparing the products I selected!

If you focus on e-commerce, or if you publish product reviews, “More recommendations” should be on your radar.

Discover's More Recommendations feature

I also explained in my post and presentation that Google is expanding the functionality there. I can now track products and even get notified when there are price drops. It’s like a shopping assistant baked into Discover.

Price Drop in Discover's More Recommendations feature

Combining “More Recommendations” with Journeys
Imagine if Google combined Journeys with “More recommendations” and introduced a new SERP feature that packs a punch. Maybe you could see that feature in the search results after searching for related queries about the product or category. And clicking it could take you to a “Task Dashboard” that contains a boatload of helpful content, price tracking, Journeys content letting you resume your research, and more. And maybe even the ability to BUY directly from the dashboard! Google already has “Buy from Google” in Google Shopping, so it’s totally possible.

Here’s a mockup of what that SERP feature could look like:

Combining Journeys in Chrome with More Recommendations in Discover

Google’s Explore Wonderland at the END of the SERP
The next test I saw was one of the most interesting I have seen in a long time. It was originally surfaced by Mordy Oberstein, who first said “Holy cannoli!”, and then asked “What is this sorcery?”

Holy cannoli!

Have y'all ever seen this "Explore" section of the SERP?

After scrolling down far enough seeing a whole subtopical parsing under a new section of the SERP entitled "Explore" – wild!

What is this sorcery?!

cc: @rustybrick pic.twitter.com/CY0yr2rL8F

— Mordy Oberstein 🇺🇦 (@MordyOberstein) June 13, 2022

And the next day, I triggered Explore as well (multiple times). And “sorcery” is a good word for it. After scrolling through multiple pages of listings, the Explore feature showed up. And it was super-visual and packed with content. Sometimes it showed up after multiple pages via infinite scroll, but other times it showed up after just the second page of results. And once the Explore feed ended, the SERP ended. You could not find any more listings (in Explore or Search).

Here is what it looks like. This was based on a search for Val Kilmer and Maverick:

OK, @MordyOberstein is right, this is some type of sorcery. :) I triggered the Explore section of the SERP, which Mordy tweeted about yesterday. And it's visual, packed with content, and is at the very end of the SERP for the query. Super-interesting… pic.twitter.com/1gzOQrWswx

— Glenn Gabe (@glenngabe) June 14, 2022

And then I searched for the near no-hitter thrown by the St. Louis Cardinals, which also triggered Explore (but only after the second page of results). So it could trigger after just two pages in the SERPs.

And then Lily Ray chimed in after seeing the examples and asked if this was the merging of Search and Discover. That’s a smart way to think about this.

The merging of Search & Discover?

— Lily Ray 😏 (@lilyraynyc) June 16, 2022

Google’s Grid Treatment – The Grid is the new Carousel
This is also live and not a test. Google is digging the grid format in the SERPs. I explained during my presentation that the grid is the new carousel. You can often see a grid of articles and images when searching for entities. It’s usually a grid of four listings.

The Grid Treatment in Google for articles and images

It’s also sometimes a hybrid grid with images and articles. And tapping the image sends you to Google Images, while tapping the title or description sends you to the article. There’s no label explaining this, and I find that awkward usability-wise. I’m sure others find that confusing as well.

Hybrid grid treatment for images and articles in Google Search

PAA Grid – Yes, there’s even a test of grids running in PAAs!
How much is Google digging the grid? Enough that it’s testing adding a grid in People Also Ask listings. Notice it’s not a carousel… it’s a grid. I think Google has seen enough data to know that people like the grid more than a carousel (at least for certain tasks).

Grid treatment test for People Also Ask (PAA)

“Things To Know” SERP Feature
This is a new SERP feature that resembles People Also Ask, but it’s based on Google’s understanding of how people search for a topic. It provides subtopics based on understanding what other people have been searching for (and what those people explore first). It expands to reveal a single listing per dropdown and it’s present on desktop and mobile. And Google has explained that it will eventually be enhanced by MUM to surface even deeper insights.

Google's Things to Know SERP feature

It’s also worth noting that the listings aren’t super visual based on my testing. The favicons use the generic globe icon versus the actual favicons from each site and I didn’t see any visuals in the listings (like images or videos). I’m not sure if that’s intended… or if it’s a bug.

Examples of Google's Things to Know SERP feature

Short Videos vs. Visual Stories (and Web Stories in General)
Ladies and Gentlemen, let’s get ready to RUMBLE… In the Google SERPs, Visual Stories and Short Videos are battling it out. Google started testing a Short Videos SERP feature that resembles the Visual Stories SERP feature. And I’m seeing Short Videos more and more lately versus seeing Visual Stories. And sometimes both are present in the same SERP, with Visual Stories hidden behind a dropdown. Oooh, burn. :)

Short videos versus Visual Stories in the Google SERPs

Short videos are clearly booming (cough, TikTok) and Google understands this. On that note, check out the visibility gains for TikTok based on the May 2022 broad core update. That’s a massive surge in search visibility, which makes complete sense. If you have kids, you know it’s all about TikTok.

TikTok surges during the May 2022 broad core update

This Could Very Well Mean Less Visibility for Web Stories…
It pains me to say that, but I do think that’s the case. Web Stories can be powerful for providing an immersive and rich user experience, but most aren’t doing that. Many are thin and lower quality… yet they still can gain a ton of impressions in Discover (due to the Web Stories carousel in Discover mixed with limited inventory).

Web Stories also take real work to develop (if you are publishing high-quality stories). There’s storyboarding, design, development, video editing, and more involved. I can see short videos gaining more and more visibility in the SERPs.

The Versatile Right-side Scrollable Panel in Search (desktop)
This test was surfaced by Brodie Clark and Khushal Bherwani on Twitter (and covered by Barry on Search Engine Roundtable). Google has been utilizing the right-side panel much more lately. For example, image packs and images within knowledge panels can surface in a scrollable right-side panel versus opening in Google Images. It’s great usability-wise and enables Google to surface more information in the same SERP.

Google testing a Versatile Right-side Panel in Search

To see it in action, here is Brodie’s tweet with an example:

Big changes could be coming to how images are viewed in Search. We've seen it for free product listings, Google Lens and image packs. Scrollable popup feeds are now being tested for images in Featured Snippets & Knowledge Panels. More info + examples: https://t.co/j6IUtkSa1z pic.twitter.com/qnAx9lY1ax

— Brodie Clark (@brodieseo) June 15, 2022

And it’s not just for images. Google is utilizing the right-side panel this way for free product listings and for Google Lens results. So keep an eye on the right-side panel. I expect more content to show up there.

Google testing showing product results in the right-side panel

Ad Label Tests
This next test impacts both SEO and Paid Search. Google is testing a new ad label that spells out “Advertisement” in full versus just “Ad”. Some people have also seen a “Sponsored” label being tested. I think the full “Advertisement” label is great and much clearer for users. “Sponsored”, on the other hand, can be confusing for people in my opinion…

This could impact the click-through rate of both ads and organic listings, since a clearer ad label could push more clicks to organic listings (if a person is not looking to engage with ads in the SERP). Time will tell what gets rolled out. I hope it does, though.

Google testing the full word Advertisement in the ad label in Search

Inception for SEO: Favicon Test with domain name and URL changes
While looking at the ad label test, I realized I was in another test. And this one impacted the favicon, domain name, and URL in the SERP listing. And I like this change a lot. The favicon is larger, with more white space, and the domain name is up top, with the URL below it. I think it’s very clean and works well usability-wise. I hope this gets rolled out too.

Favicon tests in Google search with more white space and domain name and url changes

Summary: What will the next experiment bring?
Only Google knows. :) I find SERP experiments fascinating, since they clearly let us know some of the thinking going on behind the scenes at Google. Some features may never get rolled out, while others might burst on the scene. And some features will cross Google surfaces, like Discover and Search.

Also, if you want to keep up on the latest experiments, I have some recommendations. First, read Search Engine Roundtable. Barry covers a lot of the tests being surfaced.

And second, you should follow SEOs that surface many tests:

  • Brodie Clark
  • Brian Freiesleben
  • Saad AK
  • Khushal Bherwani
  • Valentin Pletzer
  • And me! @glenngabe

It’s also worth mentioning that Brodie maintains a SERP feature timeline containing many tests and changes that are being picked up. And version 2 enables you to filter by SERP feature, date, etc.

I hope this post helped you understand some of the most recent experiments Google is running across Search and Discover (and some of the latest features that have rolled out already!). And keep your eyes peeled… new tests can show up at any time. And make sure to keep that window open if you get caught in a test. Have fun. :)

GG



Analysis of Google’s March 2022 Product Reviews Update (PRU) – Findings and observations from the affiliate front lines

May 2, 2022 By Glenn Gabe

Google's March 2022 Product Reviews Update (PRU)

Almost four months after the last Product Reviews Update (PRU) rolled out, Google released the third in the PRU series on March 23, 2022. PRUs can cause a lot of volatility for sites with reviews content, and the first two were core update-like for some. With each PRU, Google continues to refine how it surfaces the highest-quality and most insightful reviews content in the search results. And that means thinner, lower-quality posts should drop in rankings as more thorough content rises. More about that soon.

In this post, I’ll cover several important observations and findings based on the March 2022 Product Reviews Update. I am not going to cover the PRU overall, since I have done that heavily in my first two posts about the April 2021 and December 2021 Product Reviews Updates. Instead, I’ll cover some interesting findings based on analyzing sites impacted by the March PRU (both surges and drops). That includes the types of content potentially helping sites win during the PRU, some lower-quality reviews content slipping through the cracks, more about dueling machine learning algorithms (broad core updates and the PRU), the importance of review testing labs, the power of links (or not), and more. I’ll also revisit what I call the Wirecutter Standard with an interesting example of a site that employed that strategy but missed the latest PRU cutoff.

Here’s a quick table of contents for those that want to jump around:

  • Periodic refresh still necessary.
  • Linking to multiple sellers.
  • Multimedia (especially video) helping sites, even when not original?
  • Content slipping through the cracks. A potential loophole.
  • Interesting Case: Employing the Wirecutter Approach and missing the PRU cutoff.
  • Testing Labs: Follow the leader and how review testing labs will continue to expand.
  • First-hand testing by reviewers. Is it necessary?
  • Watch for intent shifts. It could be Google, and not your content.
  • Dueling machine learning algorithms (again), and surfing the gray area.
  • Ignore user feedback at your own peril.
  • The Power of Links: inconsistent findings (again).
  • Key takeaways for site owners and affiliate marketers.

Reminder: PRUs Still Require A Periodic Refresh:
Regarding seeing changes over time, the PRU still requires a periodic refresh (as you can see via the massive swings in visibility during each rollout). So, Google still needs to “push a button” and roll out the update. So far, that’s been separated by a number of months (eight months in between the April and December PRUs and then almost four months in between the December and March PRUs). Just keep this in mind while working on remediation. You will need another PRU to roll out to see significant improvement (if you have been negatively impacted by a previous Product Reviews Update). I’ll cover more about dueling machine learning algorithms and the future of the PRU later in this post.

For example, I asked Google’s Danny Sullivan about the type of rollout when the first PRU launched in April of 2021:

At the moment, there's a periodic refresh. Unlike with core updates, we might not always post when a refresh happens given the more limited nature of content involved here. So overall, sites should consider the advice & keep working to it (true of core updates as well!).

— Danny Sullivan (@dannysullivan) April 9, 2021

Linking to multiple sellers: Not included in the algorithm yet, but showing up more and more.
With the December Product Reviews Update, Google explained that sites should consider providing links to more than one retailer to purchase products. That surprised many affiliate marketers, since Amazon is the dominant e-commerce retailer benefiting from affiliate links (and it’s actually against Amazon’s TOS to link to other retailers when using data via its API).

Google explained it was just a recommendation and not being used algorithmically in the PRU (yet), but that definitely was a shot across the bow of Amazon. Well, the March PRU rolled out and I didn’t see any mention of that factor being enforced. So, I pinged Google’s Alan Kent to learn more. Alan explained that Google was still not enforcing that aspect at the moment.

Hi Glenn. The update is an improvement of current algorithms. There is no special support for multiple sellers in this update.

— Alan Kent (@akent99) March 23, 2022

That’s good to know. My recommendation is still to link to more than one seller, if possible (to future-proof your site), but it’s not a requirement as of now. While analyzing the March PRU, I noticed many more affiliate marketers are indeed linking to multiple sellers when possible. In the past, I saw many reviews linking to just Amazon. That has definitely changed based on the sites I’ve been analyzing, and I’m sure Amazon is watching closely. That type of change could dilute Amazon’s affiliate-driven revenue a bit (as affiliate sites start linking to other retailers from their reviews content). We’ll see how this plays out…

For example, a site linking to two sellers from reviews content:

Reviews sites linking to multiple sellers

Here is another review linking to multiple sellers (four in this example):

A review site linking to multiple sellers to buy products

Video: A picture is worth a thousand words. And video can be worth ten thousand.
As part of its best practices, Google explains that you should “provide evidence such as visuals, audio, or other links of your own experience with the product, to support your expertise and reinforce the authenticity of your review.” And in my post about the April Product Reviews Update, I explained how original images, video, and gifs could help readers get a much better feel for a product.

Google's best practices for reviews sites regarding video and images.

Well, I’ve noticed an interesting trend while analyzing sites impacted by the PRU. I’m seeing much more video embedded in the articles. I think that’s great, but the devil is in the details. And this could be a weird loophole.

For example, if you produced an original video based on reviewing a product, that’s outstanding. But what if you didn’t shoot a video and simply embedded a video of the product from another creator, manufacturer, etc.? I’m seeing that technique used often while analyzing reviews and I think that could be a short-lived benefit.

If you are an affiliate marketer using video in your review articles, I would take a hard look at those videos and determine if they are truly helpful and if they reinforce your first-hand use of those products. Also, and this is just my opinion, having original video is more powerful than leveraging someone else’s video. After all, any site can embed the exact same video in its own review articles.

I know high quality video is not easy to produce, but it can really set your reviews apart from the competition. And if Google can figure out what’s truly original and insightful from a video standpoint, then having your own videos could only help (as long as they are high quality, insightful, and valuable for readers).

For example, here is an original video embedded in a review:

Review site with original video content.


A PRU loophole? Low-quality lists of products ranking well for some queries.
With previous Product Reviews Updates, I noticed some loopholes. There were some sites ranking with a very basic format (no review content actually). Although that specific loophole seemed to be closed leading up to the March PRU, I came across other examples of sites ranking with thin or low-quality reviews content. Actually, they weren’t really reviews. Instead, there was basically just a list of “best products” with minimal content, and those pages are ranking well for various review queries.

I can’t imagine this will stay as-is. I’m sure Google will pick up on this, refine the Product Reviews algorithm, and handle those pages accordingly. Whether that requires another Product Reviews Update, or happens before then, I expect those pages to sink in the rankings over time. If I were running those sites, I would definitely look to improve the pages that are ranking well now. They are far from the Wirecutter Standard, which is what I recommended trying to achieve in my previous posts about the PRUs. That’s a good segue to an interesting case I’ve been working on, which I’ll cover next.

First, here are two examples of urls surging with the March PRU that contain low-quality review content. Actually, it’s not even review content, it’s more like a list of visuals and links. Notice how they surge out of nowhere during the March PRU.

Loophole with Google's Product Reviews Update.
Page surging during Google's Product Reviews Update with low-quality content.

Interesting Case: Employing a Wirecutter Approach but missing the cutoff:
Just like with broad core updates, sites should look to improve their reviews content significantly, and over the long-term. Google is using machine learning to evaluate sites and content over time, so quick and small changes will not suffice. Taking Google’s best practices to heart and implementing big changes across your reviews content is the way to get out of the gray area. That’s why I have recommended taking a Wirecutter approach to producing reviews content. You can read more about that in my previous posts, but publishing killer content, based on extensive first-hand testing and use, supported by original visuals and multimedia content, is a very strong approach to employ.

But… it’s not easy. It takes an enormous amount of time, energy, resources, money, etc.

Well, I’m helping a client that got hammered by the April PRU, saw a partial recovery with the December PRU, and took my comments about employing a Wirecutter approach to heart. After analyzing the site, the content, the user experience, etc., we spoke a lot about the Wirecutter Standard, and the site owner was all in. Over the past few months, they have mapped out their testing process, targeted certain categories for the Wirecutter approach, and have already published a number of review articles based on that process.

And those are killer pieces of content.

Although produced by a small team, the new content is outstanding. It provides a wealth of insightful and helpful information about the products being reviewed, includes their own rating system based on the areas being evaluated, is supported by original photos, gifs, and video, and more.

But for the March 2022 PRU, they missed the “cutoff”. The content was published right before the update rolled out. Therefore, those new killer articles weren’t going to help much when the March PRU rolled out.

On that note, Google is on record that recent changes aren’t reflected in major algorithm updates. Google needs to see significant changes over time, and over the long-term. The site just didn’t have the time…

Via @johnmu: Major impact from an algo update wouldn't be from *recent* changes. For larger sites, it can take Google's algorithms a longer time to adjust to site changes. It could take several months to recrawl, reindex, & reprocess the site changes: https://t.co/p0VbFtfOO7 pic.twitter.com/Nrpiety72k

— Glenn Gabe (@glenngabe) May 8, 2018

And here is Google’s John Mueller explaining a similar situation in a recent hangout video (when asked if a recent change could have led to a drop from a broad core update). John explained that the information used for broad core updates is collected over the long-term. And the same applies to an update like the Product Reviews Update (which is using machine learning when evaluating content and sites).

The timing was unfortunate for my client, but we are super-excited to see the next PRU roll out. I’ll post more information about how that goes after the next Product Reviews Update. If my client keeps on publishing Wirecutter-like content, then I would imagine they will see nice gains. We’ll see.  

Testing Labs: Follow the leader and how review testing labs will continue to expand.
Regarding “testing labs”, I’ve already covered Wirecutter heavily in my other posts about the PRU, but it’s worth mentioning that Good Housekeeping and Verywell also have their own testing labs. You can check out more information about those efforts by following the links below, but if you are producing reviews content, then I highly recommend trying to emulate what those companies are doing.

I know it’s not easy to do, but it can help future-proof your reviews content. The more you can map out a detailed review process, the more you will organically cover what Google’s algorithms are looking for. For example: mapping out a ratings scale, providing pros and cons, actually testing products (first-hand experience), producing visuals that support the testing (photos, videos, gifs, etc.), and so on.

Wirecutter: https://www.nytimes.com/wirecutter/blog/anatomy-of-a-guide/
Good Housekeeping Institute: https://www.goodhousekeeping.com/institute/about-the-institute/a19748212/good-housekeeping-institute-product-reviews/
Verywell Testing Lab: https://www.verywellfit.com/commerce-guidelines-and-mission-4158702

The Good Housekeeping Institute:

Good Housekeeping Institute

Wirecutter: The New York Times

Wirecutter Reviews by the New York Times

The Verywell Testing Lab:

The Verywell Testing Lab


Do you need to test each product you are reviewing? Is first-hand use and experience required?
Over the past several months, I’ve received questions from site owners about the importance of first-hand testing of products and how necessary that is moving forward (since some products are not easy to test or consume). For example, when the December 2021 Product Reviews Update rolled out, Google explained that “users have told us that they trust reviews with evidence of products actually being tested…” And they included a new best practice for site owners explaining just that.

Google's best practice about first-hand use and testing for product reviews.

But for some products or services, it’s not easy (or even possible sometimes) to actually test a product, consume a product, or use a service in order to gain first-hand knowledge of how they work. Given those challenges, what does Google say about the situation? Well, Google’s Alan Kent has provided more information via Twitter and I wanted to include that information below.

Alan explains that it’s not always necessary to test or consume a product in order to write a high quality review. But he does warn that site owners and affiliate marketers should not just spin a description from a manufacturer as the core review content.

He said don’t expect a big boost if you simply say you tested it yourself and basically paraphrase the manufacturer description. And in another tweet, Alan explained to think about how you can add to the current body of knowledge for a given product (while avoiding simply providing the specs for a product that’s supplied by the manufacturer).

Here are Alan’s tweets. The first was in response to a question about supplements (and if the people reviewing the supplements were required to have tried the actual products). Alan says no.

You can certainly create a useful review without eating the product. E.g. people know too much sugar is not good for you. But dont expect big boosts if the review only adds a few sentences saying "I tested it myself too" with the rest paraphrasing the original product description

— Alan Kent (@akent99) April 13, 2022

And the second tweet from Alan was in response to an observation that some sites are claiming to have a product testing lab, but a number of reviews don’t explain that the products were actually tested. Alan explained that contributing new information to the body of knowledge about a product would be smart, but just repeating specs from the manufacturer website doesn’t really add any value.

Another way to think about it is does the review contribute new information to the body of knowledge about the product? I could test a car tire using a machine instead of on my own car. But just repeating the specs from the tire website with different words adds nothing.

— Alan Kent (@akent99) May 5, 2022

My take on first-hand testing:
If you are going to thoroughly review a product, it’s a wise idea to actually test and use that product. Doing so can give you a much stronger understanding of how the actual product works, which can yield a much stronger review. It can also yield original photos and video of you testing the product, which can be extremely helpful for readers.

But for products or services you can’t easily test yourself, provide as much unique information as you can without simply spinning information that can be found elsewhere. Like Alan explained, see what you can add to the current body of knowledge for a product. Add as much value as you can for the reader.


A quick note about intent shifts. It’s not you, it’s Google.
In my post about the December Product Reviews Update, I mentioned that there were some intent shifts going on where e-commerce retailers started ranking for reviews content, and review sites dropped to page two or beyond. And on the flip side, sometimes when e-commerce retailers were ranking well, then an intent shift happened and reviews content started to rank higher (pushing the e-commerce retailers lower).

This was typically happening with head terms (so queries lacking “best”, “reviews” or “compare”). Well, we saw that again with the March PRU. The reason I bring this up is because sometimes it’s not your content that’s the problem. It could just be an intent shift, which you have no control over. I covered that in my post about the difference between relevancy adjustments, intent shifts, and overall site quality problems.

So, if you see a drop during the PRU, definitely run a delta report and determine the root cause of the drop. And if it’s an intent shift, you might not need to radically improve your content (if it’s already high quality, insightful, valuable, etc.)
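If it helps, here’s a minimal sketch of what a query-level delta report could look like, assuming two CSV exports of query data (for example, from the Performance report in Search Console) covering a period before and a period after the update. The filenames and column names below are placeholders, so adjust them to match your own export.

```python
# Minimal delta-report sketch (assumptions: two query-level CSV exports, one for
# the period before the update and one for after, each with "query", "clicks",
# and "position" columns -- rename these to whatever your export actually uses).
import pandas as pd

before = pd.read_csv("queries_before_update.csv")  # hypothetical filename
after = pd.read_csv("queries_after_update.csv")    # hypothetical filename

delta = before.merge(after, on="query", how="outer", suffixes=("_before", "_after"))
delta[["clicks_before", "clicks_after"]] = delta[["clicks_before", "clicks_after"]].fillna(0)
delta["click_delta"] = delta["clicks_after"] - delta["clicks_before"]
delta["position_delta"] = delta["position_after"] - delta["position_before"]

# Queries losing the most clicks are the ones to spot-check in the live SERPs.
biggest_losers = delta.sort_values("click_delta").head(50)
print(biggest_losers[["query", "clicks_before", "clicks_after", "click_delta", "position_delta"]])
```

From there, spot-checking the live SERPs for the biggest losers usually makes it clear whether your reviews content was replaced by e-commerce retailers (an intent shift) or by stronger reviews content (a content quality issue).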

Here is an example of an intent shift happening with the December Product Reviews Update and then reversing with the March PRU. The site had no control over this…

Intent shifts during Google's Product Reviews Update

Google’s dueling machine learning algorithms are… still dueling: And this needs to be addressed (IMO).
In my post about the December Product Reviews Update, I mentioned dueling machine learning algorithms and how that’s a problem for Google. That’s where sites either surged or dropped during broad core updates, and then saw the opposite movement with a Product Reviews Update.

Well, I saw more of that with the March Product Reviews Update. Sites that were impacted by the June, July, or November broad core updates saw the opposite movement with the March PRU.

With that happening, Google is sending serious mixed signals to site owners. For example, is the site’s content high quality, or not? Only Google’s machine learning systems know. Muahahaha. :)

Dueling machine learning algorithms with Google's broad core updates and Product Reviews Updates

It’s also a good time to reiterate that Google is using machine learning with both broad core updates and the Product Reviews Update, so it’s not like they are using 10, 20, or even 100 factors. Google could be sending many more signals to the machine learning system and then letting the system determine weighting (and ultimately rankings).

Again, welcome to SEO. Bing has explained more about that in the past. Here is Fabrice Canel on how Bing is using machine learning with its core ranking algorithm. They send “thousands of signals to the machine learning system and it determines the weighting”. This is ultra-important to understand. I linked to the video from my tweet below.

How much does a certain factor matter for SEO? Via Bing's @facan We simply don't know. Bing is heavily using machine learning. We don't set the weighting. It's about sending thousands of features to the ML system & the system figures it out: (at 35:02) https://t.co/EiTktEFqx7 pic.twitter.com/HTzu9wkA5m

— Glenn Gabe (@glenngabe) November 9, 2020

Also, I do believe the Product Reviews Update will be incorporated into Google’s core ranking algorithm at some point (and that will be a good thing). In my opinion, you can’t have a major algorithm update focused on quality impact a site one way and then another algorithm update focused on quality reviews impact the site in the opposite way. That’s maddening for site owners and makes no sense. But before that happens, Google needs to expand the PRU to other languages beyond English. That hasn’t happened yet, so I believe that will happen first and then maybe the PRU gets baked into Google’s core ranking algorithm. Again, we’ll see.

Google's Product Reviews Update and expanding to other languages.

Keep your eyes peeled. Ignore user feedback at your own peril.
I’ve covered the power of user studies before (especially with regard to Google’s broad core updates). It can be incredibly powerful to hear directly from objective users as they browse your site, consume your content, etc. But sometimes you can gain some of that feedback without even running a user study.

For example, I was analyzing one site negatively impacted during the March PRU that had user comments on each review page. Well, the comments can be telling… I found several comments on articles hammering the quality of the reviews or questioning the expertise of the authors.

For example, “the reviewer clearly doesn’t know what they are talking about”, “how about updating the article”, and more.

Here is an example of what that looked like. The image has been slightly edited to protect the innocent. :)

User comments as feedback for review site owners.

That is incredible feedback for the site owner. Most users will not spend the time to post a comment like that, so it must be really bad if they are leaving those comments. And I’m not saying Google is using those comments directly when evaluating reviews (although they could absolutely be among the many signals being sent to the machine learning system). But if I were the site owner, I would take that feedback to heart, figure out what needs to be updated, and then move as quickly as possible to improve the content. And maybe running a full-blown user study would be a smart next step.
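For larger sites with thousands of comments, you don’t have to read every one by hand to find this type of feedback. Here’s a rough sketch of how you could surface complaint-style comments at scale, assuming a hypothetical export of on-site comments and a hand-built list of complaint phrases (the filename, columns, and phrases are all placeholders, not any specific platform’s format).

```python
# Sketch: flag complaint-style comments from a hypothetical comments export with
# "url" and "comment" columns. The phrase list is hand-built and should be tuned
# to the complaints you actually see on your own site.
import pandas as pd

COMPLAINT_PHRASES = [
    "doesn't know what they are talking about",
    "out of date",
    "update the article",
    "update this article",
    "never actually tested",
    "this is wrong",
]

comments = pd.read_csv("review_comments_export.csv")  # hypothetical export

def flag_complaints(text: str) -> list[str]:
    """Return the complaint phrases found in a single comment."""
    lowered = str(text).lower()
    return [phrase for phrase in COMPLAINT_PHRASES if phrase in lowered]

comments["flags"] = comments["comment"].apply(flag_complaints)
flagged = comments[comments["flags"].str.len() > 0]

# Pages with the most complaint-style comments are good candidates for a refresh
# (and for inclusion in a full-blown user study).
print(flagged.groupby("url").size().sort_values(ascending=False).head(20))
```

The output is just a prioritized list of pages worth a human review, not a quality score.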

Links. Still not the end-all for the Product Reviews Update.
In my posts about the April and December 2021 Product Reviews Updates, I explained how links were not the end-all. For example, some sites surging had weaker link profiles overall and some sites dropping had stronger link profiles. Basically, there was not a clear connection between the strength of the link profile and how the site performed with the Product Reviews Update. Again, that could be the impact of a machine learning system that takes many signals into account and determines weighting.

So has that changed with the March PRU?

Not really. I’m still not seeing a major connection between link profile strength and how review sites are performing during PRUs. Sure, some powerful sites are surging, but is it because of their link profile? There are plenty of examples of the opposite… For example, sites with much weaker link profiles surging as well. Anyway, it’s just worth noting since it’s clear that Google’s machine learning-based PRU algorithm is using many signals (and many of those signals seem more focused on the quality of content).
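If you want to pressure-test that observation on your own data, here’s a tiny sketch, assuming a hypothetical hand-built tracking sheet with one row per site, a link metric from whatever link tool you use, and each site’s visibility change during the PRU (the filename and both column names are placeholders).

```python
# Sketch: check whether link-profile strength correlates with PRU performance,
# using a hypothetical tracking sheet with "domain_rating" and
# "visibility_change_pct" columns (one row per site tracked through the update).
import pandas as pd

sites = pd.read_csv("pru_tracked_sites.csv")  # hypothetical tracking sheet

# A correlation near zero would line up with the observation that link-profile
# strength alone doesn't explain who surged or dropped during the update.
print(sites["domain_rating"].corr(sites["visibility_change_pct"], method="spearman"))
```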

Here are two examples of sites surging during the March PRU with weaker link profiles:

Site with weak link profile surging during the March 2022 Product Reviews Update.
Site with weaker link profile surging during the March 2022 Product Reviews Update.

And here are two sites dropping during the March PRU with stronger link profiles:

Site with strong link profile dropping during the March 2022 Product Reviews Update.
Site with strong link profile dropping during the March 2022 Product Reviews Update.

Key takeaways and tips for affiliate marketers and site owners:

  • Internalize Google’s best practices: Read Google’s best practices and take them to heart. Internalize them and then form a plan of attack for improving your reviews content.
  • Run a user study: User studies are absolute gold for SEO. Leverage Google’s best practices for product reviews and craft tasks and questions. Then use a strong platform for user testing (like usertesting.com). Gain feedback, watch video, and listen to users. The results can be enlightening.
  • Strive to be the Wirecutter of your niche: As I mentioned in my previous posts about the Product Reviews Update, work to become the Wirecutter or Good Housekeeping Institute for your niche. Yes, it’s challenging to do that, but it can pay huge dividends down the line.
  • Give readers multiple buying options: Link to multiple sellers for purchasing products (beyond just Amazon). It’s a best practice from Google… even if they say it’s not being enforced (yet). It’s a smart way to future-proof your reviews content (and protect from subsequent negative PRU impact).
  • Invest in visuals: Provide original photography, video, and gifs supporting your reviews content. It’s a great way to provide users with a killer view of the products you are covering while also showing users how you actually tested the products. Google has explained it’s looking for these things (it’s a best practice), and it can set you apart from the crowd. You can also repurpose that multimedia content for use on social media (like YouTube, TikTok, Instagram, etc.). It’s a win-win.

Summary: The PRU continues to evolve.
Google’s March 2022 Product Reviews Update was another powerful update for affiliate marketers. It was the third in the series, and we can expect more as the PRU continues to evolve. Like broad core updates, Product Reviews Updates roll out just a few times per year. Therefore, if you have been negatively impacted by the latest PRU, then I highly recommend forming a strong plan of attack. The more you can significantly improve your reviews content, and over the long-term, the better position you can be in when the next PRU rolls out. Good luck.

GG



How NewsGuard’s nutritional labels can help publishers avoid manual actions for medical content violations (Google News and Discover)

April 15, 2022 By Glenn Gabe

In July of 2021, Google issued a number of warnings for sites publishing medical content that went against its guidelines (for Google News and Discover). The potential for a manual action was clear and some publishers scrambled to figure out what to do.

I mentioned this on Twitter in September:

Google adds information to help docs about displaying Discover Manual Actions in GSC

I've seen several examples of Discover policy violation warnings since early July. Will manual actions follow soon? Time will tell. :) https://t.co/huFckYCTr8 via @tldrMarketing pic.twitter.com/kCKImHjnhC

— Glenn Gabe (@glenngabe) September 2, 2021

And six months after the warnings, manual actions arrived for sites that hadn’t cleaned up the problem. Here is my tweet from January, when Google issued the manual actions:

Heads-up. Don't ignore Discover & Google News policy warnings in GSC. It might take 6 months or a year, but a manual action could follow. Had multiple publishers reach out this weekend about manual actions for Discover/Google News. E.g. misleading content, medical content, etc. pic.twitter.com/JutaP82HQL

— Glenn Gabe (@glenngabe) January 30, 2022

To clarify, these were manual actions for Google News and Discover, and not Search. And for the publishers receiving manual actions for medical content, the medical policy for News and Discover states that Google “doesn’t allow publishing medical content that contradicts or runs contrary to scientific or medical consensus and evidence-based best practices.”

And the manual actions in Google Search Console explained the following:

“Your site appears to violate our medical content policy and contains content primarily aimed at providing medical advice, diagnosis, or treatment for commercial purposes. Nor do we allow content from any site that contradicts or runs contrary to scientific or medical consensus and evidence-based best practices.”

So, if you are publishing medical content, and receive a manual action for violating that policy, News and Discover visibility can be negatively impacted. Again, Search should not be impacted by the manual action, but Google News and Discover visibility could decline.

For example, here is the Discover performance for one of the flagged articles for a publisher that received a manual action:

Google Discover performance for a page impacted by a manual action for medical content.

When digging into the articles being flagged by Google, it was super-interesting to see the connection between NewsGuard ratings and the organizations that were covered heavily in the articles. Below, I’ll cover more about NewsGuard and how it could be helpful for sites publishing health and medical content.

Interesting cases and the connection between flagged content and NewsGuard ratings:
In 2018, I wrote a post covering NewsGuard, which I called a proxy for Google’s quality raters. NewsGuard has a team of analysts (trained journalists) that review websites based on nine journalistic criteria, including credibility, transparency, and trust. They originally started by focusing on news organizations, but they have expanded to health and medical as well. For example, there is now a HealthGuard service that, “helps patients, healthcare workers, and anyone involved in the medical field identify trustworthy sources of health information — and avoid dangerous misinformation.”

Once a site is reviewed, NewsGuard produces a “nutritional label” rating the site, which can also appear in the search results if you are using its Chrome plugin. In addition, NewsGuard has relationships with a number of organizations (in several capacities). For example, Bing, Facebook, the American Federation of Teachers (AFT), the World Health Organization (WHO), and others have partnered with NewsGuard to fight disinformation. You can read more about their various partnerships on the site.

Although NewsGuard does have partnerships with several organizations for helping fight misinformation and disinformation, I want to be clear that Google does not use NewsGuard data in its algorithms. But like I explained in my first post, those ratings sometimes line up with how the sites perform in organic search (since Google is also trying to algorithmically surface the highest quality and most authoritative content on the web).

It’s important to understand that Google is on record explaining that its algorithms can be more critical when it comes to health and medical content. Here is a Twitter thread of mine that expands on that point. Again, this is super-important to understand for anyone delving into health and medical content.

Run a health/medical e-commerce site? Via @johnmu: Our algorithms are more critical for health/medical topics, so def. keep E-A-T in mind. Make sure the site represents a very high standard. i.e. High-quality content created by actual medical professionals https://t.co/aiMrdN9Hl7 pic.twitter.com/Nuz3K7Pi6o

— Glenn Gabe (@glenngabe) March 27, 2021

For example, here is a health site with a horrible nutritional label from NewsGuard. The site has gotten hammered during broad core updates over time. Again, it’s not because of NewsGuard… it’s just interesting how they line up:

Health and medical site that dropped over time during Google's broad core updates.

Cross-referencing organizations via NewsGuard based on manual actions for medical content:
For organizations receiving manual actions for medical content (News and Discover), I was interested in cross-referencing NewsGuard to see what the nutritional labels looked like for the organizations being covered (and promoted) in those flagged articles.

And to clarify, it’s not simply mentioning sketchy organizations that would get content flagged. It’s more about the core of the article being about those organizations (including promoting their views). That’s exactly what the flagged articles were doing.

So what did the nutritional labels look like for those organizations being covered? They weren’t good. Not good at all… Here are two examples based on content getting flagged.

Here’s the first site’s label:

NewsGuard nutritional label with extremely poor ratings for a health and medical site.

And here’s the second site’s label:

NewsGuard nutritional label with poor ratings for a health and medical site.

And here is what one of the sites looks like in the search results (when you are using the NewsGuard Chrome extension):

NewsGuard rating in the search results for a site with poor ratings.

When you hover over the NewsGuard icon (the red shield), you can view an overlay with more details. And that overlay contains a link to the full nutritional label on the NewsGuard website.

NewsGuard overlay with more information from a site's nutritional label.

When you visit the nutritional label on NewsGuard’s website, you can find all of the details about why the site received those ratings (and by category). And that includes all of the sources that were cited and referenced in their findings. For example, you can view CNN’s nutritional label here (just to get a feel for what one looks like, review the ratings by category, the sources section at the end, etc.)

Note, the site I mentioned that received the manual action is a large-scale publisher with millions of pages indexed, so most of the content would not fall into this category (covering organizations and views that go against Google’s guidelines). But, they do have some… and they were flagged by Google.

When discussing this situation with the site’s leadership, I explained that having some checks in place would be smart for understanding the risks involved with publishing certain pieces of content. And in my opinion, NewsGuard could be one of those checks.

Utilizing NewsGuard as a check during the publishing process:
So, if you are a site publishing health and medical content, then I would definitely put some checks in place to ensure you don’t receive a manual action for medical content. One solid approach could be adding checks using the NewsGuard plugin (which links to the nutritional labels). If you see red all over the label, you might want to be more cautious (or at least dig in further to learn more about that organization’s views).
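To make that concrete, here’s a minimal sketch of what such a pre-publish check could look like, assuming editors keep a small internal watchlist of organization domains they have manually flagged after reviewing the NewsGuard labels. The watchlist file, its columns, and the draft filename are all hypothetical, and this doesn’t use any NewsGuard API, just an editor-maintained list.

```python
# Sketch: pre-publish check against a hypothetical, editor-maintained watchlist of
# organization domains flagged after manually reviewing their NewsGuard labels.
# Not a NewsGuard integration -- just a simple internal editorial safeguard.
import csv
import re

def load_watchlist(path: str = "editorial_watchlist.csv") -> set[str]:
    """Load flagged organization domains from the editor-maintained CSV."""
    with open(path, newline="") as handle:
        return {row["domain"].lower() for row in csv.DictReader(handle)}

def check_draft(draft_html: str, watchlist: set[str]) -> set[str]:
    """Return watchlisted domains that are linked in the draft."""
    mentioned = set(re.findall(r"https?://(?:www\.)?([a-z0-9.-]+)", draft_html.lower()))
    return {domain for domain in mentioned if domain in watchlist}

if __name__ == "__main__":
    watchlist = load_watchlist()
    with open("draft_article.html") as handle:  # hypothetical draft file
        hits = check_draft(handle.read(), watchlist)
    if hits:
        print("Flag for editorial review (check the NewsGuard labels):", sorted(hits))
```

Any hit would simply route the draft to an editor for a closer look at the nutritional label (and Google’s policies) before publishing.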

For example, if the publisher I’m covering in this post (the one that received the manual action) had checked NewsGuard before publishing that content, they probably wouldn’t have published it at all (as long as they understood Google’s policies around medical content for News and Discover). Again, it’s a large-scale publisher with millions of pages indexed. A NewsGuard check could have raised red flags during the editing process.

Note, NewsGuard obviously doesn’t have labels for every site on the web, but understanding the ratings based on the organizations that have been reviewed is a good idea. Again, it was interesting to see the connection between some manual actions for medical content and the sketchy nutritional labels for those organizations being promoted in those articles. Like I explained in my original post about NewsGuard, it’s like a proxy for Google’s quality raters. So in my opinion, it’s smart to check those nutritional labels before publishing.

GG


What Discover’s “More Recommendations”, Journeys in Chrome, and MUM mean for the future of Google Search

March 18, 2022 By Glenn Gabe

How Google is building a true Search assistant that can provide a wealth of recommendations based on the topics users are searching for. And it’s partially live now in Google Discover with a focus on products.

Google providing shopping recommendations in a new Task Dashboard.

I’m a heavy Google Discover user and have covered many of the features rolling out, or being tested by Google there, over the years. Discover is Google’s feed of content that’s tailored for each user based on their activity across Google’s ecosystem. Over 800M people use Discover each month (and that’s an old statistic from 2018, so the actual number of users is probably much higher at this point).

As I browse Discover, I often come across new features that Google is testing, which can sometimes show you the direction Google wants to move functionality-wise… Over the past few weeks, I’ve seen a very interesting feature that, when combined with some of the other advancements going on in Search, made me think Google could be foreshadowing the future of Search.

That’s a bold statement, but hear me out… In this post, I’ll combine a few different developments in Search to support what I’m saying. And if you’re a site owner publishing content to help people that are researching topics, you should definitely pay attention to what Google is doing here. These advancements could very well impact what you do and the results you see.

“More Recommendations” in Discover:
Like many people, I’m often researching new products or services on Google. You might start with a broader query, dig in further, refine your query to be more specific, visit multiple sites, check out YouTube videos covering the topic, and more. Other than bookmarking certain pages, you really don’t have a good trail left behind based on your research…

So, you might try and find those articles again, visit Google and YouTube multiple times again, and retrace your path. That’s not the most efficient or powerful way to research topics or products. And I know Google understands this (based on announcements about MUM and Journeys, which I’ll cover later in this post).

Well, I was checking my Discover feed a few weeks ago and saw an interesting call to action under one card in my feed. It said, “More recommendations” and that’s not something I had seen before. When tapping that link, I was whisked to an immersive interface labeled “Task Dashboard” which contained a boatload of information, links, recommendations, videos, and comparison functionality based on the product or service I was searching for.

My immediate reaction was holy cow, is this the future of Search?? And by “future”, I mean the next few years. The real future of Search is much more immersive, intimate, and ambient in my opinion… but I’ll cover that at a later time.

The best way to explain what I’m seeing consistently now is to just show you. Here is what the process and screenshots look like for my research of Costa sunglasses (my favorite brand of sunglasses).

First, here is a card in my Discover feed with a “More recommendations” call to action under an article about Costa sunglasses:

More recommendations in Google Discover for products.

Tapping that button takes me to a “Task Dashboard” with a boatload of information, videos, recommendations, comparison functionality, etc.

Discover's Task Dashboard containing articles, videos, and comparison functionality for products recently researched.

As you can see, Google is becoming like a true shopping assistant with everything it’s providing on the page. I have articles I’ve visited, other articles that might be helpful, videos that might be helpful, a People Also Ask module, comparison features that take me to a new SERP comparing the products I selected, and more. Seriously, try and find this on your own and spend some time playing around. I think you’ll be blown away.

Here is the task dashboard containing a “continue browsing” section:

Continue browsing and similar products in Discover's task dashboard.

And beyond that, I found a comparison feature which lets me tap several types of Costa sunglasses and then visit a Google SERP that compares them. And yes, read that again, A GOOGLE SERP that compares them (and not an article comparing them). If you’re an affiliate marketer comparing the top products in a niche, that should definitely catch your attention.

First, here is the comparison functionality in my “Task Dashboard”:

Compare product functionality in Discover's task dashboard.

And once I select two products, I’m taken to a fresh SERP comparing the two (with data from Google’s Shopping Graph).

Google search results comparing products based on Google's Shopping Graph.

The Connection To Collections in the Google App:
If some of this looks familiar, and you are one of the people using Collections in the Google Search app, then that’s because Collections provide a similar experience. The title label is different, but the various parts of the collection are similar.

Google Collections with similar functionality to the task dashboard in Discover.

So, Google is expanding on collections functionality and tying it more to helping users with their research journey. It’s a smart approach and I did find it helpful. Regarding helping with a user’s “journey”, that’s a good segue to Journeys in Chrome.

Journeys in Google Chrome:
Last year, Google started testing Journeys in Chrome Canary. It’s a way to help you resume your research of a given topic. By visiting your history in Chrome, you can find a new tab for “Journeys”, which lists the sites and articles you visited based on searching for a certain topic. It doesn’t provide a ton of functionality but is helpful for picking back up where you left off.

Here is a screenshot of Journeys (provided by Google in their blog post about the feature):

Google Journeys in Chrome.

In addition, Google recently announced that Journeys is rolling out to the stable version of Chrome for desktop. Note, I have not seen this officially roll out yet… so we are still waiting. Regarding how it will work in the stable version of Chrome, Google explained you might see a prompt when searching on desktop to “Resume your research” which will then connect with Journeys to help you continue researching a topic.

Connecting Discover’s “More recommendations” with Journeys:
By this point in the post, you might see where I’m going with this… The Discover functionality for “More recommendations”, which takes you to a killer page where Google assists you with your research, could be how Journeys eventually works in Search. Imagine searching Google and seeing a call to action in the search results to “Resume your research”, which then takes you to a page like the one I showed you from Discover containing a boatload of helpful information, links, videos, comparison functionality, and more.

Yes, I think this is where we might be headed (and soon). And beyond what I covered, what if Google added a powerful new technology that could understand and recommend even more content related to your research? That’s where MUM could come in handy.

Adding MUM to the equation:
At ‘Search On 2021’, Google covered its new technology called MUM (or Multitask Unified Model), which Google explained is 1,000 times more powerful than BERT (another algorithm that helps Google understand content and queries). Using MUM, Google can understand your query much more deeply and return different types of content based on that query. It’s also trained across 75 different languages and can generate content versus just understanding it.

Google MUM can fuel shopping recommendations.

As an example of how MUM might be used for what I’m covering in this post, if you were asking “how to prepare for deep sea fishing”, Google could provide information about the best ways to prepare for the trip, but also provide product recommendations based on the conditions. For example, maybe the recommendations page contains information about gear (like sunglasses). And Google might provide the best Costa sunglasses for deep sea fishing (taking various things into account like strong wind, salty air, and the need for sunglasses to remain secure while fishing). I’m just riffing here, but you get the picture. It would be like a deep sea fishing guide was helping you…

Google has explained recently that MUM is not being used in Search for rankings yet, but that it looks forward to providing more intuitive ways to Search in the near future (especially with Lens). We are seeing more features roll out based on ‘Search On 2021’ like “Broaden this search” and “Refine this search”, so I’m sure MUM is not far behind (with regard to helping with scenarios like this).

I won’t go too deep about MUM in this post, but I think it could be extremely powerful for providing a wealth of information for users as they research topics (and what could show up in the “More recommendations” results I covered earlier). And MUM could also help with Journeys as it provides related content (across formats) based on your previous research.

What this means for site owners and SEOs:
I think this could be both exciting, and scary, for site owners and SEOs. On the one hand, there might be more opportunities for you to rank across surfaces. This new “recommendations flow” that Google is presenting provides many different types of content organized in the “More recommendations” page. So you could have your articles, review pages, videos, web stories, and more show up there. That’s great, but it can also detract from the core search results (where you might be ranking well now).

For example, let’s say you ranked #3 for “how to prepare for deep sea fishing” but Google provided some type of call to action to “research more recommendations” or something like that. And that call to action sends users to an immersive “More recommendations” page like what I’m seeing in Discover. And that page has many links to different pieces of content, including articles, videos, People Also Ask, related searches, and even comparison functionality that triggers a fresh SERP comparing the products you selected. It can become a crowded field for sure… providing many more options for users than what they see now in a standard SERP.

A mockup of what a “More recommendations” SERP feature could look like in the Google search results:

Mockup of what the recommendations feature could look like in the Google Search Results.

Moving forward: Google is becoming more like a true shopping assistant.
I’m sure we’ll see Google test this type of functionality heavily before officially rolling out anything in the search results, but in my opinion it’s super important for site owners to understand this is going on now in Google Discover. And Journeys is rolling out soon in Chrome for desktop (with a prompt from the SERPs). In addition, MUM will likely be used more broadly in Search soon (and in several ways). And… the combination of all of that could be a powerful new experience that tries to help users research topics more thoroughly.

My recommendation, pun intended, is to start researching how this is working now, figure out if you have content (across types) that can be presented in this new format, and identify gaps. Then fill those gaps by creating a content plan addressing your site’s vulnerabilities.

I’ll keep a close eye on this and share what I’m seeing on Twitter (or maybe even in additional blog posts). Until then, I would visit Discover and pay close attention to the details there… You might just see the future of Search. :)

GG

Filed Under: google, seo

How to extend a multi-site indexing monitoring system to compare Google-selected and user-selected canonical urls (via the URL Inspection API and Analytics Edge)

March 16, 2022 By Glenn Gabe Leave a Comment

Last month I published an article on Search Engine Land explaining how to use the new URL inspection API to build a multi-site indexing monitoring system. By using Analytics Edge in Excel with the new URL Inspection API from Google, you can check the indexing status for the most important urls across multiple sites on a regular basis (and all by just clicking a button in Excel). It’s a great approach and can help you nip indexing problems in the bud. Remember, if your pages aren’t indexed, they clearly can’t rank. So monitoring indexing is super important for site owners and SEOs.

After I published the article, it was great to see people in the industry test out this approach, and I’ve heard from quite a few that they use it on a regular basis. That’s outstanding, but I think systems like what I originally built can always be enhanced… As I was using the system to check indexing levels across various client sites, I came up with a simple, but powerful, idea for extending the system. And it relates to canonicalization.

First, it’s important to understand that rel canonical is just a hint for Google. I’ve covered that before in case studies, other blog posts, and heavily on Twitter over the years. Google can definitely ignore what site owners include as the canonical url and choose a different url instead (based on a number of factors). And when Google selects a different url as the canonical, you definitely want to know about that. That’s because the url being canonicalized will not be indexed (and won’t rank in the search results). This can be fine, or not fine, depending on the situation. But you definitely want to dig in to see why Google is choosing a different canonical than what you selected.

Luckily, the URL Inspection API returns both the user-selected canonical and the Google-selected canonical when inspecting urls. So, via some Analytics Edge magic, we can compare the two columns returned by the API as the urls are being processed, and flag that in our worksheets. It’s just another level of insight that can help you address indexing problems across the sites you are monitoring.
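
To make that comparison concrete, here is a minimal Python sketch (separate from the Analytics Edge workflow covered in this tutorial) showing the two canonical fields the URL Inspection API returns, along with the same “Yes/No” check we’ll add to the worksheet. The service account key file and the urls are placeholders, and the service account would need to be granted access to the GSC property:

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder key file; the service account must be added as a user on the GSC property.
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)
service = build("searchconsole", "v1", credentials=creds)

body = {
    "inspectionUrl": "https://www.example.com/some-page/",  # url to inspect (placeholder)
    "siteUrl": "https://www.example.com/",                  # the GSC property (placeholder)
}
result = service.urlInspection().index().inspect(body=body).execute()

# googleCanonical and userCanonical live under indexStatusResult
# (they may be missing if Google doesn't know the url yet).
status = result["inspectionResult"]["indexStatusResult"]
google_canonical = status.get("googleCanonical")
user_canonical = status.get("userCanonical")

# The same check the Analytics Edge formula will perform for each row:
different_canonical = "No" if google_canonical == user_canonical else "Yes"
print(google_canonical, user_canonical, different_canonical)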

What we are going to achieve: Comparing canonicals via the URL Inspection API.
As I explained above, we are going to add another step in the indexing monitoring system to compare the user-selected canonical with the Google-selected canonical. And we are going to dynamically create a new column in each worksheet that lets us know if there is a difference between the two.

And as a quick reminder, we will be doing this across all sites that are included in our indexing monitoring system (which can span as many GSC properties as you want). If you followed my original tutorial, then you can easily add this additional step in your system to check canonicalization across your top urls. And if you didn’t already set up an indexing monitoring system, then I would do that first and then come back to add this step.

With that out of the way, let’s enhance our system!

How to extend an indexing monitoring system by comparing Google-selected and user-selected canonicals:

1. Set up the foundational indexing monitoring system:
First, follow my original tutorial for setting up the indexing monitoring system. Once you have that up and running, we are going to add an additional step for comparing the user-selected and Google-selected canonical urls. And then we’ll dynamically create a new column in each worksheet called “Different Canonical” that flags if they are different.

2. Add a step to the macro in Analytics Edge:
In order to add another step to our macro in Analytics Edge, you simply run the macro to the point where the new instruction will be added and then add the new functionality. You can accomplish that via the “Step” button in the task pane. First, open your spreadsheet, click the Analytics Edge tab, and open the task pane (which holds your macros).

3. “Step” to your desired location in the macro:
Click the instruction in the task pane BEFORE where you want to add the new function. Since we are going to compare data after the API returns results, we will add our new function after the “Index Inspection” step in our macro. So click “Index Inspection” in the task pane and then click the step button (which is located next to the run button). After the macro executes to that point, you can add additional functionality to the macro. For our purposes, we are going to add a Formula function that will compare columns after the API returns results for each url.

Note, this will only run the macro that’s showing in the task pane. It will not refresh ALL macros in the spreadsheet. So if you are monitoring several sites, and each site has its own macro, then those will need to be updated separately. I’ll cover how to do that later in the tutorial.

4. Add a new formula for comparing canonicals:
Once the macro runs to the point we indicated in the previous step, Analytics Edge will stop running the macro. And then you can add the new function for comparing the Google-selected and user-selected canonical urls. To do that, click the Analytics Edge tab, and then click the Column dropdown, and select “Formula” from the dropdown list.

5. Add the conditional statement in the formula dialog box:
In the formula window, enter a name for the new column you want to add based on the formula we will create. You can use “Different Canonical” for this tutorial. Next, select where the column should be added in the worksheet. I want to put the new column right after the userCanonical column (which makes the most sense in my opinion). And finally, we are going to add a conditional statement that checks whether the Google-selected canonical equals the user-selected canonical. If it does, we’ll add “No” to the “Different Canonical” column, and if it’s different we’ll add “Yes”. Here is the formula that accomplishes this task. Simply copy and paste it into the “Enter Formula” text box:

=if([indexStatusResult/googleCanonical]=[indexStatusResult/userCanonical],"No","Yes")

Then click OK to apply the formula to the data that the API returned in the previous step. And then click the step button in the Analytics Edge task pane to execute the final step in our macro, which is to write the results to a worksheet.

6. Check Your Results!
You can check the worksheet with the results to see the data. You should have a new column named “Different Canonical” that contains a “Yes” or “No” based on whether the Google-selected canonical is different from the user-selected canonical.
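
And if you ever want to double-check the results outside of Excel, here is a quick pandas sketch (purely optional and not part of the Analytics Edge macro) that filters an exported worksheet down to just the urls where the canonicals differ. The csv file name is a placeholder, and the column headers assume the field names returned by the API plus the “Different Canonical” column we just created:

import pandas as pd

# Placeholder file name for a worksheet exported to csv.
df = pd.read_csv("site1-index-monitoring.csv")

# Keep only the rows where Google chose a different canonical.
mismatches = df[df["Different Canonical"] == "Yes"]
print(f"{len(mismatches)} of {len(df)} urls have a different Google-selected canonical")
print(mismatches[["indexStatusResult/userCanonical",
                  "indexStatusResult/googleCanonical"]].to_string(index=False))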

7. Copy and paste the new formula to each macro in your spreadsheet.
Congratulations, you just extended your multi-site indexing monitoring system to check for canonical differences! Now apply the same formula to all of the worksheets you created in your spreadsheet (if you are checking more than one website or GSC property). The great news is that Analytics Edge has copy and paste functionality for macros (and for specific steps in your macros).

Just highlight the new formula you created in the task pane, click the copy button, select the macro you want to copy the formula to, click the step before where you want to add the formula, and then click paste in the task pane. Boom, you just copied the formula to another macro.

8. Check indexing and canonicalization all in one shot.
And that’s it. Your monitoring system will now check the indexing status of each url, while also detecting if the Google-selected canonical is different than the user-selected canonical. And as a reminder, all you have to do is click “Refresh All” in Analytics Edge to run all macros (which will check all of the GSC properties you are monitoring).

Important Reminder: The system is only as good (and accurate) as Google’s URL inspection system…
One thing I wanted to point out is that the indexing monitoring system is only as good as the data from Google’s URL inspection tool. And unfortunately, I’ve seen that be off sometimes during my testing. For example, it might say a url is indexed, when it’s not (or vice versa). So just keep in mind that the system isn’t foolproof… it can be inaccurate sometimes.

Summary – Continuing to improve the indexing monitoring system.
With this latest addition to the multi-site indexing monitoring system, we can now automatically check whether the Google-selected canonical is different than the user-selected canonical (which is a situation you definitely would want to dig into for urls not being indexed). Moving forward, I’ll continue to look for ways to improve the indexing monitoring system. If you decide to follow my set of tutorials for setting this up, definitely let me know if you have any questions or if you run into any issues. You can ping me on Twitter as you set up the system.

GG

Filed Under: google, seo, tools
