
Analysis of Google’s March 2022 Product Reviews Update (PRU) – Findings and observations from the affiliate front lines

May 2, 2022 By Glenn Gabe

Google's March 2022 Product Reviews Update (PRU)

Almost four months after the last Product Reviews Update (PRU) rolled out, Google released the third in the PRU series on March 23, 2022. PRUs can cause a lot of volatility for sites with reviews content, and the first two were core update-like for some sites. With each PRU, Google continues refining how it surfaces the highest-quality and most insightful reviews content in the search results. That means thinner, lower-quality posts should drop in rankings as more thorough content rises. More about that soon.

In this post, I’ll cover several important observations and findings based on the March 2022 Product Reviews Update. I am not going to cover the PRU overall, since I have done that heavily in my first two posts about the April 2021 and December 2021 Product Review updates. Instead, I’ll cover some interesting findings based on analyzing sites impacted by the March PRU (both surges and drops). That includes the types of content potentially helping sites win during the PRU, some lower-quality reviews content slipping through the cracks, more about dueling machine learning algorithms (broad core updates and PRU), the importance of review testing labs, the power of links (or not), and more. I’ll also revisit what I call the Wirecutter Standard with an interesting example of a site employing that strategy that missed the latest PRU cutoff.

Here’s a quick table of contents for those who want to jump around:

  • Periodic refresh still necessary.
  • Linking to multiple sellers.
  • Multimedia (especially video) helping sites, even when not original?
  • Content slipping through the cracks. A potential loophole.
  • Interesting Case: Employing the Wirecutter Approach and missing the PRU cutoff.
  • Watch for intent shifts. It could be Google, and not your content.
  • Dueling machine learning algorithms (again), and surfing the gray area.
  • Ignore user feedback at your own peril.
  • Testing Labs: Follow the leader and how review testing labs will continue to expand.
  • First-hand testing by reviewers. Is it necessary?
  • The Power of Links: inconsistent findings (again).
  • Key takeaways for site owners and affiliate marketers.

Reminder: PRUs Still Require A Periodic Refresh:
The PRU still requires a periodic refresh, as you can see from the massive swings in visibility during each rollout. In other words, Google still needs to “push a button” and roll out the update. So far, each rollout has been separated by a number of months (eight months between the April and December PRUs, and then almost four months between the December and March PRUs). Keep this in mind while working on remediation: you will need another PRU to roll out to see significant improvement (if you have been negatively impacted by a previous Product Reviews Update). I’ll cover more about dueling machine learning algorithms and the future of the PRU later in this post.

For example, I asked Google’s Danny Sullivan about the type of rollout when the first PRU launched in April of 2021:

At the moment, there's a periodic refresh. Unlike with core updates, we might not always post when a refresh happens given the more limited nature of content involved here. So overall, sites should consider the advice & keep working to it (true of core updates as well!).

— Danny Sullivan (@dannysullivan) April 9, 2021

Linking to multiple sellers: Not included in the algorithm yet, but showing up more and more.
With the December Product Reviews update, Google explained that sites should consider providing links to more than one retailer to purchase products. That surprised many affiliate marketers since Amazon is the dominant e-commerce retailer benefiting from affiliate links (and it’s actually against Amazon’s TOS to link to other retailers when using data via its API).

Google explained it was just a recommendation and not being used algorithmically in the PRU (yet), but that definitely was a shot across the bow of Amazon. Well, the March PRU rolled out and I didn’t see any mention of that factor being enforced. So, I pinged Google’s Alan Kent to learn more. Alan explained that Google was still not enforcing that aspect at the moment.

Hi Glenn. The update is an improvement of current algorithms. There is no special support for multiple sellers in this update.

— Alan Kent (@akent99) March 23, 2022

That’s good to know. My recommendation is still to link to more than one seller, if possible, to future-proof your site, but it’s not a requirement as of now. While analyzing the March PRU, I noticed many more affiliate marketers are indeed linking to multiple sellers, when possible. In the past, I saw many reviews linking to just Amazon. That has definitely changed based on the sites I’ve been analyzing, and I’m sure Amazon is watching closely. That type of change could dilute its affiliate revenue a bit (as affiliate sites start linking to other retailers from their reviews content). We’ll see how this plays out…

For example, a site linking to two sellers from reviews content:

Reviews sites linking to multiple sellers

Here is another review linking to multiple sellers (four in this example):

A review site linking to multiple sellers to buy products

Video: A picture is worth a thousand words. And video can be worth ten thousand.
As part of Google’s best practices, site owners are told to “provide evidence such as visuals, audio, or other links of your own experience with the product, to support your expertise and reinforce the authenticity of your review.” And in my post about the April Product Reviews Update, I explained how original images, video, and gifs could help readers get a much better feel for a product.

Google's best practices for reviews sites regarding video and images.

Well, I’ve noticed an interesting trend while analyzing sites impacted by the PRU. I’m seeing much more video embedded in the articles. I think that’s great, but the devil is in the details. And this could be a weird loophole.

For example, if you produced an original video based on reviewing a product, that’s outstanding. But what if you didn’t shoot a video and simply embedded a video of the product from another creator, manufacturer, etc.? I’m seeing that technique used often while analyzing reviews and I think that could be a short-lived benefit.

If you are an affiliate marketer using video in your review articles, I would take a hard look at those videos and determine if they are truly helpful and if they reinforce your first-hand use of those products. Also, and this is just my opinion, having original video is more powerful than leveraging someone else’s video. After all, any site can embed the exact same video in its own review articles.

I know high quality video is not easy to produce, but it can really set your reviews apart from the competition. And if Google can figure out what’s truly original and insightful from a video standpoint, then having your own videos could only help (as long as they are high quality, insightful, and valuable for readers).

For example, here is an original video embedded in a review:

Review site with original video content.


A PRU loophole? Low-quality lists of products ranking well for some queries.
With previous Product Reviews Updates, I noticed some loopholes. There were some sites ranking with a very basic format (no review content actually). Although that specific loophole seemed to be closed leading up to the March PRU, I came across other examples of sites ranking with thin or low-quality reviews content. Actually, they weren’t really reviews. Instead, there was basically just a list of “best products” with minimal content, and those pages are ranking well for various review queries.

I can’t imagine this will stay as-is. I’m sure Google will pick up on this, refine the Product Reviews algorithm, and handle those pages accordingly. Whether that requires another Product Reviews Update, or it happens before then, I expect those pages to sink in the rankings over time. If I were running those sites, I would definitely look to improve the pages that are ranking well now. They are far from the Wirecutter Standard, which is what I recommended trying to achieve in my previous posts about the PRUs. That’s a good segue to an interesting case I’ve been working on, which I’ll cover next.

First, here are two examples of URLs surging with the March PRU that contain low-quality review content. Actually, it’s not even review content, it’s more like a list of visuals and links. Notice how they surge out of nowhere during the March PRU.

Loophole with Google's Product Reviews Update.
Page surging with Google's Product Reviews Update with low-quality content.

Interesting Case: Employing a Wirecutter Approach but missing the cutoff:
Just like with broad core updates, sites should look to improve their reviews content significantly, and over the long-term. Google is using machine learning to evaluate sites and content over time, so quick and small changes will not suffice. Taking Google’s best practices to heart and implementing big changes across your reviews content is the way to get out of the gray area. That’s why I have recommended taking a Wirecutter approach to producing reviews content. You can read more about that in my previous posts, but publishing killer content, based on extensive first-hand testing and use, supported by original visuals and multimedia content, is a very strong approach to employ.

But… it’s not easy. It takes an enormous amount of time, energy, resources, money, etc.

Well, I’m helping a client that got hammered by the April PRU and then saw a partial recovery with the December PRU, and they took my comments about employing a Wirecutter approach to heart. After analyzing the site, the content, user experience, etc., we spoke a lot about the Wirecutter Standard, and the site owner was all in. Over the past few months, they have mapped out their testing process, targeted certain categories for the Wirecutter approach, and have already published a number of review articles based on that process.

And those are killer pieces of content.

Although produced by a small team, the new content is outstanding. It provides a wealth of insightful and helpful information about the products being reviewed, includes their own rating system based on the areas being evaluated, and is supported by original photos, gifs, and video.

But for the March 2022 PRU, they missed the “cutoff”. The content was published right before the March Product Reviews Update rolled out. Therefore, those new killer articles weren’t going to help much when the update rolled out.

On that note, Google is on record that recent changes aren’t reflected in major algorithm updates. Google needs to see significant changes over time, and over the long-term. The site just didn’t have the time…

Via @johnmu: Major impact from an algo update wouldn't be from *recent* changes. For larger sites, it can take Google's algorithms a longer time to adjust to site changes. It could take several months to recrawl, reindex, & reprocess the site changes: https://t.co/p0VbFtfOO7 pic.twitter.com/Nrpiety72k

— Glenn Gabe (@glenngabe) May 8, 2018

And here is Google’s John Mueller explaining a similar situation in a recent hangout video (when asked if a recent change could have led to a drop from a broad core update). John explained that the information used for broad core updates is collected over the long-term. And the same applies to an update like the Product Reviews Update (which is using machine learning when evaluating content and sites).

The timing was unfortunate for my client, but we are super-excited to see the next PRU roll out. I’ll post more information about how that goes after the next Product Reviews Update. If my client keeps on publishing Wirecutter-like content, then I would imagine they will see nice gains. We’ll see.  

Testing Labs: Follow the leader and how review testing labs will continue to expand.
Regarding “testing labs”, I’ve already covered Wirecutter heavily in my other posts about the PRU, but it’s worth mentioning that Good Housekeeping and Verywell also have their own testing labs. You can check out more information about those efforts by following the links below, but if you are producing reviews content, then I highly recommend trying to emulate what those companies are doing.

I know it’s not easy to do, but it can help future-proof your reviews content. The more you can map out a detailed review process, the more you will organically cover what Google’s algorithms are looking for. For example, map out a ratings scale, provide pros and cons, actually test products (first-hand experience), produce visuals that support the testing (photos, videos, gifs, etc.), and so on.

Wirecutter: https://www.nytimes.com/wirecutter/blog/anatomy-of-a-guide/
Good Housekeeping Institute: https://www.goodhousekeeping.com/institute/about-the-institute/a19748212/good-housekeeping-institute-product-reviews/
Verywell Testing Lab: https://www.verywellfit.com/commerce-guidelines-and-mission-4158702

The Good Housekeeping Institute:

Good Housekeeping Institute

Wirecutter: The New York Times

Wirecutter Reviews by the New York Times

The Verywell Testing Lab:

The Verywell Testing Lab


Do you need to test each product you are reviewing? Is first-hand use and experience required?
Over the past several months, I’ve received questions from site owners about the importance of first-hand testing of products and how necessary that is moving forward (since some products are not easy to test or consume). For example, when the December 2021 Product Reviews Update rolled out, Google explained that “users have told us that they trust reviews with evidence of products actually being tested…” And they included a new best practice for site owners explaining just that.

Google's best practice about first-hand use and testing for product reviews.

But for some products or services, it’s not easy (or even possible sometimes) to actually test a product, consume a product, or use a service in order to gain first-hand knowledge of how they work. Given those challenges, what does Google say about the situation? Well, Google’s Alan Kent has provided more information via Twitter and I wanted to include that information below.

Alan explains that it’s not always necessary to test or consume a product in order to write a high quality review. But he does warn that site owners and affiliate marketers should not just spin a description from a manufacturer as the core review content.

He said don’t expect a big boost if you simply say you tested it yourself and basically paraphrase the manufacturer description. And in another tweet, Alan explained to think about how you can add to the current body of knowledge for a given product (while avoiding simply providing the specs for a product that’s supplied by the manufacturer).

Here are Alan’s tweets. The first was in response to a question about supplements (and if the people reviewing the supplements were required to have tried the actual products). Alan says no.

You can certainly create a useful review without eating the product. E.g. people know too much sugar is not good for you. But dont expect big boosts if the review only adds a few sentences saying "I tested it myself too" with the rest paraphrasing the original product description

— Alan Kent (@akent99) April 13, 2022

And the second tweet from Alan was in response to an observation that some sites are claiming to have a product testing lab, but a number of reviews don’t explain that the products were actually tested. Alan explained that contributing new information to the body of knowledge about a product would be smart, but just repeating specs from the manufacturer website doesn’t really add any value.

Another way to think about it is does the review contribute new information to the body of knowledge about the product? I could test a car tire using a machine instead of on my own car. But just repeating the specs from the tire website with different words adds nothing.

— Alan Kent (@akent99) May 5, 2022

My take on first-hand testing:
If you are going to thoroughly review a product, it’s a wise idea to actually test and use that product. Doing so can give you a much stronger understanding of how the actual product works, which can yield a much stronger review. It can also yield original photos and video of you testing the product, which can be extremely helpful for readers.

But for products or services you can’t easily test out yourself, then provide as much unique information as you can without simply spinning information that can be found elsewhere. Like Alan explained, see what you can add to the current body of knowledge for a product. Add as much value as you can for the reader.


A quick note about intent shifts. It’s not you, it’s Google.
In my post about the December Product Reviews Update, I mentioned that there were some intent shifts going on where e-commerce retailers started ranking for reviews content, and review sites dropped to page two or beyond. And on the flip side, sometimes when e-commerce retailers were ranking well, then an intent shift happened and reviews content started to rank higher (pushing the e-commerce retailers lower).

This was typically happening with head terms (so queries lacking “best”, “reviews” or “compare”). Well, we saw that again with the March PRU. The reason I bring this up is because sometimes it’s not your content that’s the problem. It could just be an intent shift, which you have no control over. I covered that in my post about the difference between relevancy adjustments, intent shifts, and overall site quality problems.

So, if you see a drop during the PRU, definitely run a delta report and determine the root cause of the drop. And if it’s an intent shift, you might not need to radically improve your content (if it’s already high quality, insightful, valuable, etc.)

Here is an example of an intent shift happening with the December Product Reviews Update and then reversing with the March PRU. The site had no control over this…

Intent shifts during Google's Product Reviews Update

Google’s dueling machine learning algorithms are… still dueling: And this needs to be addressed (IMO).
In my post about the December Product Reviews Update, I mentioned dueling machine learning algorithms and how that’s a problem for Google. That’s where sites either surged or dropped during broad core updates, and then saw the opposite movement with a Product Reviews Update.

Well, I saw more of that with the March Product Reviews Update. Sites that were impacted in June, July or November with broad core updates saw the opposite movement with the March PRU.

With that happening, Google is sending serious mixed signals to site owners. For example, is the site’s content high quality, or not? Only Google’s machine learning systems know. Muahahaha. :)

Dueling machine learning algorithms with Google's broad core updates and Product Reviews Updates

It’s also a good time to reiterate that Google is using machine learning with both broad core updates and the Product Reviews Update, so it’s not like they are using 10, 20, or even 100 factors. Google could be sending many more signals to the machine learning system and then letting the system determine weighting (and ultimately rankings).

Again, welcome to SEO. Bing has explained more about that in the past. Here is Fabrice Canel on how Bing is using machine learning with its core ranking algorithm. They send “thousands of signals to the machine learning system and it determines the weighting”. This is ultra-important to understand. I linked to the video from my tweet below.

How much does a certain factor matter for SEO? Via Bing's @facan We simply don't know. Bing is heavily using machine learning. We don't set the weighting. It's about sending thousands of features to the ML system & the system figures it out: (at 35:02) https://t.co/EiTktEFqx7 pic.twitter.com/HTzu9wkA5m

— Glenn Gabe (@glenngabe) November 9, 2020

Also, I do believe the Product Reviews Update will be incorporated into Google’s core ranking algorithm at some point (and that will be a good thing). In my opinion, you can’t have a major algorithm update focused on quality impact a site one way and then another algorithm update focused on quality reviews impact the site in the opposite way. That’s maddening for site owners and makes no sense. But before that happens, Google needs to expand the PRU to other languages beyond English. That hasn’t happened yet, so I believe that will happen first and then maybe the PRU gets baked into Google’s core ranking algorithm. Again, we’ll see.

Google's Product Reviews Update and expanding to other languages.

Keep your eyes peeled. Ignore user feedback at your own peril.
I’ve covered the power of user studies before (especially with regard to Google’s broad core updates). It can be incredibly powerful to hear directly from objective users as they browse your site, consume your content, etc. But sometimes you can gain some of that feedback without even running a user study.

For example, I was analyzing one site that was negatively impacted during the March PRU that had user comments on each review page. Well, the comments can be telling… I found several comments on articles hammering the quality of reviews or questioning the expertise of the authors.

For example, “the reviewer clearly doesn’t know what they are talking about”, “how about updating the article”, and more.

Here is an example of what that looked like. The image has been slightly edited to protect the innocent. :)

User comments as feedback for review site owners.

That is incredible feedback for the site owner and they should take it to heart. Most users will not spend the time to post a comment like that, so it must be really bad if they are leaving those comments. And I’m not saying Google is using those comments directly when evaluating reviews (although it could absolutely be one of the many signals being sent to the machine learning system). But if I were the site owner, I would act on that feedback and figure out what needs to be updated. Then move as quickly as possible to improve the content. And maybe running a full-blown user study would be a smart next step.

Links. Still not the end-all for the Product Reviews Update.
In my posts about the April and December 2021 Product Reviews Updates, I explained how links were not the end-all. For example, some sites surging had weaker link profiles overall and some sites dropping had stronger link profiles. Basically, there was not a clear connection between the strength of a site’s link profile and how it performed with the Product Reviews Update. Again, that could be the impact of a machine learning system that takes many signals into account and determines weighting.

So has that changed with the March PRU?

Not really. I’m still not seeing a major connection between link profile strength and how review sites are performing during PRUs. Sure, some powerful sites are surging, but is it because of their link profile? There are plenty of examples of the opposite… For example, sites with much weaker link profiles surging as well. Anyway, it’s just worth noting since it’s clear that Google’s machine learning-based PRU algorithm is using many signals (and many of those signals seem more focused on the quality of content).

Here are two examples of sites surging during the March PRU with weaker link profiles:

Site with weak link profile surging during the March 2022 Product Reviews Update.
Site with weaker link profile surging during the March 2022 Product Reviews Update.

And here are two sites dropping during the March PRU with stronger link profiles:

Site with strong link profile dropping during the March 2022 Product Reviews Update.
Site with strong link profile dropping during the March 2022 Product Reviews Update.

Key takeaways and tips for affiliate marketers and site owners:

  • Internalize Google’s best practices: Read Google’s best practices and take them to heart. Internalize them and then form a plan of attack for improving your reviews content.
  • Run a user study: User studies are absolute gold for SEO. Leverage Google’s best practices for product reviews and craft tasks and questions. Then use a strong platform for user testing (like usertesting.com). Gain feedback, watch video, and listen to users. The results can be enlightening.
  • Strive to be the Wirecutter of your niche: As I mentioned in my previous posts about the Product Reviews Update, work to become the Wirecutter or Good Housekeeping Institute for your niche. Yes, it’s challenging to do that, but it can pay huge dividends down the line.
  • Give readers multiple buying options: Link to multiple sellers for purchasing products (beyond just Amazon). It’s a best practice from Google… even if they say it’s not being enforced (yet). It’s a smart way to future-proof your reviews content (and protect from subsequent negative PRU impact).
  • Invest in visuals: Provide original photography, video, and gifs supporting your reviews content. It’s a great way to provide users with a killer view of the products you are covering while also showing users how you actually tested the products. Google has explained it’s looking for these things (it’s a best practice), and it can set you apart from the crowd. You can also repurpose that multimedia content for use on social media (like YouTube, TikTok, Instagram, etc.) It’s a win-win.

Summary: The PRU continues to evolve.
Google’s March 2022 Product Reviews Update was another powerful update for affiliate marketers. It was the third in the series, and we can expect more as the PRU continues to evolve. Like broad core updates, Product Reviews Updates roll out just a few times per year. Therefore, if you have been negatively impacted by the latest PRU, then I highly recommend forming a strong plan of attack. The more you can significantly improve your reviews content, and over the long-term, the better position you can be in when the next PRU rolls out. Good luck.

GG



How NewsGuard’s nutritional labels can help publishers avoid manual actions for medical content violations (Google News and Discover)

April 15, 2022 By Glenn Gabe

In July of 2021, Google issued a number of warnings for sites publishing medical content that went against its guidelines (for Google News and Discover). The potential for a manual action was clear and some publishers scrambled to figure out what to do.

I mentioned this on Twitter in September:

Google adds information to help docs about displaying Discover Manual Actions in GSC

I've seen several examples of Discover policy violation warnings since early July. Will manual actions follow soon? Time will tell. :) https://t.co/huFckYCTr8 via @tldrMarketing pic.twitter.com/kCKImHjnhC

— Glenn Gabe (@glenngabe) September 2, 2021

And six months after the warnings, manual actions arrived for sites that hadn’t cleaned up the problem. Here is my tweet from January when Google issued the manual actions:

Heads-up. Don't ignore Discover & Google News policy warnings in GSC. It might take 6 months or a year, but a manual action could follow. Had multiple publishers reach out this weekend about manual actions for Discover/Google News. E.g. misleading content, medical content, etc. pic.twitter.com/JutaP82HQL

— Glenn Gabe (@glenngabe) January 30, 2022

To clarify, these were manual actions for Google News and Discover, and not Search. And for the publishers receiving manual actions for medical content, the medical policy for News and Discover states that Google “doesn’t allow publishing medical content that contradicts or runs contrary to scientific or medical consensus and evidence-based best practices.”

And the manual actions in Google Search Console explained the following:

“Your site appears to violate our medical content policy and contains content primarily aimed at providing medical advice, diagnosis, or treatment for commercial purposes. Nor do we allow content from any site that contradicts or runs contrary to scientific or medical consensus and evidence-based best practices.”

So, if you are publishing medical content, and receive a manual action for violating that policy, News and Discover visibility can be negatively impacted. Again, Search should not be impacted by the manual action, but Google News and Discover visibility could decline.

For example, here is the Discover performance for one of the flagged articles for a publisher that received a manual action:

Google Discover performance for a page impacted by a manual action for medical content.

When digging into the articles being flagged by Google, it was super-interesting to see the connection between NewsGuard ratings and the organizations that were covered heavily in the articles. Below, I’ll cover more about NewsGuard and how it could be helpful for sites publishing health and medical content.

Interesting cases and the connection between flagged content and NewsGuard ratings:
In 2018, I wrote a post covering NewsGuard, which I called a proxy for Google’s quality raters. NewsGuard has a team of analysts (trained journalists) that review websites based on nine journalistic criteria, including credibility, transparency, and trust. They originally started by focusing on news organizations, but they have expanded to health and medical as well. For example, there is now a HealthGuard service that, “helps patients, healthcare workers, and anyone involved in the medical field identify trustworthy sources of health information — and avoid dangerous misinformation.”

Once a site is reviewed, NewsGuard produces a “nutritional label” rating the site, which can also appear in the search results if you are using its Chrome plugin. In addition, NewsGuard has relationships with a number of organizations (in several capacities). For example, Bing, Facebook, the American Federation of Teachers (AFT), the World Health Organization (WHO), and others have partnered with NewsGuard to fight disinformation. You can read more about their various partnerships on the site.

Although NewsGuard does have partnerships with several organizations for helping fight misinformation and disinformation, I want to be clear that Google does not use NewsGuard data in its algorithms. But like I explained in my first post, those ratings sometimes line up with how the sites perform in organic search (since Google is also trying to algorithmically surface the highest quality and most authoritative content on the web).

It’s important to understand that Google is on record explaining that its algorithms can be more critical when it comes to health and medical content. Here is a Twitter thread of mine that expands on that point. Again, this is super-important to understand for anyone delving into health and medical content.

Run a health/medical e-commerce site? Via @johnmu: Our algorithms are more critical for health/medical topics, so def. keep E-A-T in mind. Make sure the site represents a very high standard. i.e. High-quality content created by actual medical professionals https://t.co/aiMrdN9Hl7 pic.twitter.com/Nuz3K7Pi6o

— Glenn Gabe (@glenngabe) March 27, 2021

For example, here is a health site with a horrible nutritional label from NewsGuard. The site has gotten hammered during broad core updates over time. Again, it’s not because of NewsGuard… it’s just interesting how they line up:

Health and medical site that dropped over time during Google's broad core updates.

Cross-referencing organizations via NewsGuard based on manual actions for medical content:
For organizations receiving manual actions for medical content (News and Discover), I was interested in cross-referencing NewsGuard to see what the nutritional labels looked like for the organizations being covered (and promoted) in those flagged articles.

And to clarify, it’s not about simply mentioning sketchy organizations that would get content flagged. It’s more about the core of the article being about those organizations (including promoting their views). That’s exactly what the articles were doing that were flagged.

So what did the nutritional labels look like for those organizations being covered? They weren’t good. Not good at all… Here are two examples based on content getting flagged.

Here’s the first site’s label:

NewsGuard nutritional label with extremely poor ratings for a health and medical site.

And here’s the second site’s label:

NewsGuard nutritional label with poor ratings for a health and medical site.

And here is what one of the sites looks like in the search results (when you are using the NewsGuard Chrome extension):

NewsGuard rating in the search results for a site with poor ratings.

When you hover over the NewsGuard icon (the red shield), you can view an overlay with more details. And that overlay contains a link to the full nutritional label on the NewsGuard website.

NewsGuard overlay with more information from a site's nutritional label.

When you visit the nutritional label on NewsGuard’s website, you can find all of the details about why the site received those ratings (and by category). And that includes all of the sources that were cited and referenced in their findings. For example, you can view CNN’s nutritional label here (just to get a feel for what one looks like, review the ratings by category, the sources section at the end, etc.)

Note, the site I mentioned that received the manual action is a large-scale publisher with millions of pages indexed, so most of the content would not fall into this category (covering organizations and views that go against Google’s guidelines). But, they do have some… and they were flagged by Google.

When discussing this situation with the site’s leadership, I explained that having some checks in place would be smart for understanding the risks involved with publishing certain pieces of content. And in my opinion, NewsGuard could be one of those checks.

Utilizing NewsGuard as a check during the publishing process:
So, if you are a site publishing health and medical content, then I would definitely put some checks in place to ensure you don’t receive a manual action for medical content. One solid approach could be adding checks using the NewsGuard plugin (which links to the nutritional labels). If you see red all over the label, you might want to be more cautious (or at least dig in further to learn more about that organization’s views).

For example, if the publisher I’m covering in this post that received the manual action checked NewsGuard before publishing that content, then they probably wouldn’t have published it at all (as long as they understood Google’s policies around medical content for News and Discover). Again, it’s a large-scale publisher with millions of pages indexed. A NewsGuard check could have raised red flags during the editing process.

Note, NewsGuard obviously doesn’t have labels for every site on the web, but understanding the ratings based on the organizations that have been reviewed is a good idea. Again, it was interesting to see the connection between some manual actions for medical content and the sketchy nutritional labels for those organizations being promoted in those articles. Like I explained in my original post about NewsGuard, it’s like a proxy for Google’s quality raters. So in my opinion, it’s smart to check those nutritional labels before publishing.

GG

Filed Under: algorithm-updates, google, manual-actions, seo

What Discover’s “More Recommendations”, Journeys in Chrome, and MUM mean for the future of Google Search

March 18, 2022 By Glenn Gabe Leave a Comment

How Google is building a true Search assistant that can provide a wealth of recommendations based on the topics users are searching for. And it’s partially live now in Google Discover with a focus on products.

Google providing shopping recommendations in a new Task Dashboard.

I’m a heavy Google Discover user and have covered many of the features rolling out, or being tested by Google there, over the years. Discover is Google’s feed of content that’s tailored for each user based on their activity across Google’s ecosystem. Over 800M people use Discover each month (and that’s an old statistic from 2018, so the actual number of users is probably much higher at this point).

As I browse Discover, I often come across new features that Google is testing, which can sometimes show you the direction Google wants to move towards functionality-wise… Over the past few weeks, I’ve seen a very interesting feature that, when combined with some of the other advancements going on in Search, made me think Google could be foreshadowing the future of Search.

That’s a bold statement, but hear me out… In this post, I’ll combine a few different developments in Search to support what I’m saying. And if you’re a site owner publishing content to help people that are researching topics, you should definitely pay attention to what Google is doing here. These advancements could very well impact what you do and the results you see.

“More Recommendations” in Discover:
Like many people, I’m often researching new products or services on Google. You might start with a broader query, dig in further, refine your query to be more specific, visit multiple sites, check out YouTube videos covering the topic, and more. Other than bookmarking certain pages, you really don’t have a good trail left behind based on your research…

So, you might try and find those articles again, visit Google and YouTube multiple times again, and retrace your path. That’s not the most efficient or powerful way to research topics or products. And I know Google understands this (based on announcements about MUM and Journeys, which I’ll cover later in this post).

Well, I was checking my Discover feed a few weeks ago and saw an interesting call to action under one card in my feed. It said, “More recommendations” and that’s not something I had seen before. When tapping that link, I was whisked to an immersive interface labeled “Task Dashboard” which contained a boatload of information, links, recommendations, videos, and comparison functionality based on the product or service I was searching for.

My immediate reaction was holy cow, is this the future of Search?? And by “future”, I mean the next few years. The real future of Search is much more immersive, intimate, and ambient in my opinion… but I’ll cover that at a later time.

The best way to explain what I’m seeing consistently now is to just show you. Here is what the process and screenshots look like for my research of Costa sunglasses (my favorite brand of sunglasses).

First, here is a card in my Discover feed with a “More recommendations” call to action under an article about Costa sunglasses:

More recommendations in Google Discover for products.

When tapping that button, it takes me to a “Task Dashboard” with a boatload of information, videos, recommendations, comparison functionality, etc.

Discover's Task Dashboard containing articles, videos, and comparison functionality for products recently researched.

As you can see, Google is becoming like a true shopping assistant with everything it’s providing on the page. I have articles I’ve visited, other articles that might be helpful, videos that might be helpful, a People Also Ask module, comparison features that take me to a new SERP comparing the products I selected, and more. Seriously, try and find this on your own and spend some time playing around. I think you’ll be blown away.

Here is the task dashboard containing a “continue browsing” section:

Continue browsing and similar products in Discover's task dashboard.

And beyond that, I found a comparison feature which lets me tap several types of Costa sunglasses and then visit a Google SERP that compares them. And yes, read that again, A GOOGLE SERP that compares them (and not an article comparing them). If you’re an affiliate marketer comparing the top products in a niche, that should definitely catch your attention.

First, here is the comparison functionality in my “Task Dashboard”:

Compare product functionality in Discover's task dashboard.

And once I select two products, I’m taken to a fresh SERP comparing the two (with data from Google’s Shopping Graph).

Google search results comparing products based on Google's Shopping Graph.

The Connection To Collections in the Google App:
If some of this looks familiar, and you are one of the people using Collections in the Google Search app, then that’s because Collections provide a similar experience. The title label is different, but the various parts of the collection are similar.

Google Collections with similar functionality to the task dashboard in Discover.

So, Google is expanding on collections functionality and tying it more to helping users with their research journey. It’s a smart approach and I did find it helpful. Regarding helping with a user’s “journey”, that’s a good segue to Journeys in Chrome.

Journeys in Google Chrome:
Last year, Google started testing Journeys in Chrome Canary. It’s a way to help you resume your research of a given topic. By visiting your history in Chrome, you can find a new tab for “Journeys”, which lists the sites and articles you visited based on searching for a certain topic. It doesn’t provide a ton of functionality but is helpful for picking back up where you left off.

Here is a screenshot of Journeys (provided by Google in their blog post about the feature):

Google Journeys in Chrome.

In addition, Google recently announced that Journeys is rolling out to the stable version of Chrome for desktop. Note, I have not seen this officially roll out yet… so we are still waiting. Regarding how it will work in the stable version of Chrome, Google explained you might see a prompt when searching on desktop to “Resume your research” which will then connect with Journeys to help you continue researching a topic.

Connecting Discover’s “More recommendations” with Journeys:
By this point in the post, you might see where I’m going with this… The Discover functionality for “More recommendations”, which takes you to a killer page where Google assists you with your research, could be how Journeys potentially works in Search. Imagine searching Google and seeing a call to action in the search results to “Resume your research”, which then takes you to a page like the one I showed you from Discover containing a boatload of helpful information, links, videos, comparison functionality, and more.

Yes, I think this is where we might be headed (and soon). And beyond what I covered, what if Google added a powerful new technology that could understand and recommend even more content related to your research? That’s where MUM could come in handy.

Adding MUM to the equation:
At ‘Search On 2021’, Google covered its new technology called MUM (or Multitask Unified Model), which Google explained is 1000X more powerful than BERT (another algorithm that helps Google understand content and queries). Using MUM, Google can understand your query much deeper and return different types of content based on your query. It’s also trained on 75 different languages and can generate content versus just understanding it.

Google MUM can fuel shopping recommendations.

As an example of how MUM might be used for what I’m covering in this post, if you were asking “how to prepare for deep sea fishing”, Google could provide information about the best ways to prepare for the trip, but also provide product recommendations based on the conditions. For example, maybe the recommendations page contains information about gear (like sunglasses). And Google might provide the best Costa sunglasses for deep sea fishing (taking various things into account like strong wind, salty air, and the need for sunglasses to remain secure while fishing). I’m just riffing here, but you get the picture. It would be like a deep sea fishing guide was helping you…

Google has explained recently that MUM is not being used in Search for rankings yet, but that it looks forward to providing more intuitive ways to Search in the near future (especially with Lens). We are seeing more features roll out based on ‘Search On 2021’ like “Broaden this search” and “Refine this search”, so I’m sure MUM is not far behind (with regard to helping with scenarios like this).

I won’t go too deep about MUM in this post, but I think it could be extremely powerful for providing a wealth of information for users as they research topics (and what could show up in the “More recommendations” results I covered earlier). And MUM could also help with Journeys as it provides related content (across formats) based on your previous research.

What this means for site owners and SEOs:
I think this could be both exciting, and scary, for site owners and SEOs. On the one hand, there might be more opportunities for you to rank across surfaces. This new “recommendations flow” that Google is presenting provides many different types of content organized in the “More recommendations” page. So you could have your articles, review pages, videos, web stories, and more show up there. That’s great, but it can also detract from the core search results (where you might be ranking well now).

For example, let’s say you ranked #3 for “how to prepare for deep sea fishing” but Google provided some type of call to action to “research more recommendations” or something like that. And that call to action sends users to an immersive “More recommendations” page like what I’m seeing in Discover. And that page has many links to different pieces of content, including articles, videos, People Also Ask, related searches, and even comparison functionality that triggers a fresh SERP comparing the products you selected. It can become a crowded field for sure… providing many more options for users than what they see now in a standard SERP.

A mockup of what a “More recommendations” SERP feature could look like in the Google search results:

Mockup of what the recommendations feature could look like in the Google Search Results.

Moving forward: Google is becoming more like a true shopping assistant.
I’m sure we’ll see Google test this type of functionality heavily before officially rolling out anything in the search results, but in my opinion it’s super important for site owners to understand this is going on now in Google Discover. And Journeys is rolling out soon in Chrome for desktop (with a prompt from the SERPs). In addition, MUM will be used soon in Search for ranking (and in several ways). And… the combination of all of that could be a powerful new experience that tries to help users research topics more thoroughly.

My recommendation, pun intended, is to start researching how this is working now, figure out if you have content (across types) that can be presented in this new format, and identify gaps. Then fill those gaps by creating a content plan addressing your site’s vulnerabilities.

I’ll keep a close eye on this and share what I’m seeing on Twitter (or maybe even in additional blog posts). Until then, I would visit Discover and pay close attention to the details there… You might just see the future of Search. :)

GG

Filed Under: google, seo

How to extend a multi-site indexing monitoring system to compare Google-selected and user-selected canonical urls (via the URL Inspection API and Analytics Edge)

March 16, 2022 By Glenn Gabe Leave a Comment

Last month I published an article on Search Engine Land explaining how to use the new URL inspection API to build a multi-site indexing monitoring system. By using Analytics Edge in Excel with the new URL Inspection API from Google, you can check the indexing status for the most important urls across multiple sites on a regular basis (and all by just clicking a button in Excel). It’s a great approach and can help you nip indexing problems in the bud. Remember, if your pages aren’t indexed, they clearly can’t rank. So monitoring indexing is super important for site owners and SEOs.

After I published the article, it was great to see people in the industry test out this approach, and I’ve heard from quite a few that they use it on a regular basis. That’s outstanding, but I think systems like what I originally built can always be enhanced… As I was using the system to check indexing levels across various client sites, I came up with a simple, but powerful, idea for extending the system. And it relates to canonicalization.

First, it’s important to understand that rel canonical is just a hint for Google. I’ve covered that before in case studies, other blog posts, and heavily on Twitter over the years. Google can definitely ignore what site owners include as the canonical url and then choose a different url (based on a number of factors). And when Google selects a different url as the canonical, you definitely want to know about that. That’s because the url being canonicalized will not be indexed (and won’t rank in the search results). This can be fine, or not fine, depending on the situation. But you definitely want to dig in to see why Google is choosing a different canonical than what you selected.

Luckily, the URL Inspection API returns both the user-selected canonical and the Google-selected canonical when inspecting urls. So, via some Analytics Edge magic, we can compare the two columns returned by the API as the urls are being processed, and flag that in our worksheets. It’s just another level of insight that can help you address indexing problems across the sites you are monitoring.
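To make the comparison concrete, here’s a minimal Python sketch of the same logic the formula step performs, using the googleCanonical and userCanonical fields from the API’s indexStatusResult object (the sample URLs below are hypothetical, just to illustrate the flagging):

```python
# Sketch: flag urls where the Google-selected canonical differs from the
# user-selected canonical, based on the indexStatusResult fields that the
# URL Inspection API returns for each inspected url.

def flag_canonical_mismatch(index_status_result):
    """Return 'Yes' if Google chose a different canonical, else 'No'."""
    google_canonical = index_status_result.get("googleCanonical")
    user_canonical = index_status_result.get("userCanonical")
    return "No" if google_canonical == user_canonical else "Yes"

# Hypothetical rows, shaped like the API's indexStatusResult object:
rows = [
    {"googleCanonical": "https://example.com/a", "userCanonical": "https://example.com/a"},
    {"googleCanonical": "https://example.com/b", "userCanonical": "https://example.com/b?ref=1"},
]

for row in rows:
    print(flag_canonical_mismatch(row))  # prints "No", then "Yes"
```

This is exactly the “Different Canonical” column we’ll build in Analytics Edge later in the tutorial, just expressed as plain code.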

What we are going to achieve: Comparing canonicals via the URL Inspection API.
As I explained above, we are going to add another step in the indexing monitoring system to compare the user-selected canonical with the Google-selected canonical. And we are going to dynamically create a new column in each worksheet that lets us know if there is a difference between the two.

And as a quick reminder, we will be doing this across all sites that are included in our indexing monitoring system (which can span as many GSC properties as you want). If you followed my original tutorial, then you can easily add this additional step in your system to check canonicalization across your top urls. And if you didn’t already set up an indexing monitoring system, then I would do that first and then come back to add this step.

With that out of the way, let’s enhance our system!

How to extend an indexing monitoring system by comparing Google-selected and user-selected canonicals:

1. Set up the foundational indexing monitoring system:
First, follow my original tutorial for setting up the indexing monitoring system. Once you have that up and running, we are going to add an additional step for comparing the user-selected and Google-selected canonical urls. And then we’ll dynamically create a new column in each worksheet called “Different Canonical” that flags if they are different.

2. Add a step to the macro in Analytics Edge:
In order to add another step to our macro in Analytics Edge, you simply run the macro to the point where the new instruction will be added and then add the new functionality. You can accomplish that via the “Step” button in the task pane. First, open your spreadsheet, click the Analytics Edge tab, and open the task pane (which holds your macros).

3. “Step” to your desired location in the macro:
Click the instruction in the task pane BEFORE where you want to add the new function. Since we are going to compare data after the API returns results, we will add our new function after the “Index Inspection” step in our macro. So click “Index Inspection” in the task pane and then click the step button (which is located next to the run button). After the macro executes to that point, you can add additional functionality to the macro. For our purposes, we are going to add a Formula function that will compare columns after the API returns results for each url.

Note, this will only run the macro that’s showing in the task pane. It will not refresh ALL macros in the spreadsheet. So if you are monitoring several sites, and each site has its own macro, then those will need to be updated separately. I’ll cover how to do that later in the tutorial.

4. Add a new formula for comparing canonicals:
Once the macro runs to the point we indicated in the previous step, Analytics Edge will stop running the macro. And then you can add the new function for comparing the Google-selected and user-selected canonical urls. To do that, click the Analytics Edge tab, and then click the Column dropdown, and select “Formula” from the dropdown list.

5. Add the conditional statement in the formula dialog box:
In the formula window, enter a name for the new column you want to add based on the formula we will create. You can use “Different Canonical” for this tutorial. Next, select where the column should be added in our worksheet. I want to put the new column right after the userCanonical column in the worksheet (which makes the most sense in my opinion). And finally, we are going to add a conditional statement which checks to see if the Google-selected canonical equals the user-selected canonical. If it does, we’ll add “No” to the “Different Canonical” column, and if it’s different we’ll add “Yes”. Here is the formula you will include that accomplishes this task. Simply copy and paste this formula into the “Enter Formula” text box:

=if([indexStatusResult/googleCanonical]=[indexStatusResult/userCanonical],"No","Yes")

Then click OK to apply the formula to the data that the API returned in the previous step. And then click the step button in the Analytics Edge task pane to execute the final step in our macro, which is to write the results to a worksheet.

6. Check Your Results!
You can check the worksheet with the results to see the data. You should have a new column named “Different Canonical” that contains a “Yes” or “No” based on if the Google-selected canonical is different than the user-selected canonical.

7. Copy and paste the new formula to each macro in your spreadsheet.
Congratulations, you just extended your multi-site indexing monitoring system to check for canonical differences! Now apply the same formula to all of the worksheets you created in your spreadsheet (if you are checking more than one website or GSC property). The great news is that Analytics Edge has copy and paste functionality for macros (and for specific steps in your macros).

Just highlight the new formula you created in the task pane, click the copy button, select the macro you want to copy the formula to, click the step before where you want to add the formula, and then click paste in the task pane. Boom, you just copied the formula to another macro.

8. Check indexing and canonicalization all in one shot.
And that’s it. Your monitoring system will now check the indexing status of each url, while also detecting if the Google-selected canonical is different than the user-selected canonical. And as a reminder, all you have to do is click “Refresh All” in Analytics Edge to run all macros (which will check all of the GSC properties you are monitoring).

Important Reminder: The system is only as good (and accurate) as Google’s URL inspection system…
One thing I wanted to point out is that the indexing monitoring system is only as good as the data from Google’s URL inspection tool. And unfortunately, I’ve seen that be off sometimes during my testing. For example, it might say a url is indexed, when it’s not (or vice versa). So just keep in mind that the system isn’t foolproof… it can be inaccurate sometimes.

Summary – Continuing to improve the indexing monitoring system.
With this latest addition to the multi-site indexing monitoring system, we can now automatically check whether the Google-selected canonical is different than the user-selected canonical (which is a situation you definitely would want to dig into for urls not being indexed). Moving forward, I’ll continue to look for ways to improve the indexing monitoring system. If you decide to follow my set of tutorials for setting this up, definitely let me know if you have any questions or if you run into any issues. You can ping me on Twitter as you set up the system.

GG

Filed Under: google, seo, tools

Favi-gone: 5 Reasons Why Your Favicon Disappeared From The Google Search Results [Case Studies]

February 22, 2022 By Glenn Gabe Leave a Comment

favicons in Google Search

They say “a favicon is worth a thousand words”. OK… they really don’t say that, but favicons can definitely be important from a Search perspective. In 2019, Google started displaying favicons in the mobile search results as part of a mobile redesign, and it ends up that those little graphics in the SERPs can sure help on several levels. For example, a favicon can help reinforce your brand, it can attract eyeballs in a crowded SERP, and it can also help with click-through rate. So you definitely want to make sure your favicon game is strong.

Favicons in the Google search results.

Google published guidelines for defining a favicon in order to make sure they can be properly displayed in the SERPs. If you don’t adhere to those guidelines, Google can choose to ignore your favicon and provide a generic one for you. And there’s nothing more “meh” than the generic globe favicon Google provides. Let’s just say you won’t stand out in the SERPs with their generic favicon showing…

Generic favicon in Google search results.
Generic globe favicon in the Google search results.

In addition, you can end up with a blank favicon, which is super-awkward. The space for the favicon is reserved, but nothing shows up. It’s just a blank white space where a favicon should appear. So sad… and I’ll explain more about that later in the post.

Blank favicon in the Google search results.

Here is another example of a blank favicon (and not just the generic globe favicon):

Missing favicon in the Google search results.

Favicon Assistance: When site owners reach out about favicon problems.
Every now and then I have site owners reaching out in frustration when their favicons go missing from the search results. When that happens, it can be a very confusing situation for those site owners… Well, over the past few weeks I helped a few more site owners troubleshoot favicon problems in the search results. And based on what I found, I figured I would write a post explaining some of the top reasons I’ve seen that cause favicon problems in Google Search.

The problems are relatively easy to fix and changes can be picked up by Google pretty quickly for most sites. For example, one of the latest fixes I helped with was picked up in just a few hours and the SERPs were updated in less than a day (with the new favicon).

Favicons Disappearing and Questions About Quality:
When favicons go missing, some site owners immediately jump to thinking that Google somehow doesn’t trust their site anymore, or that there are quality problems causing Google to stop displaying their favicons (like how rich snippets can be impacted by broad core updates). That’s not the case. A favicon going missing in the SERPs has nothing to do with site quality. Instead, it has everything to do with technical problems with the favicon, or violating Google’s guidelines for providing favicons.

So if your favicon goes missing, it’s not that Google has suddenly reevaluated your site quality-wise. It’s probably due to technical issues or other guideline violations (which I’ll cover below). 

Where did your favicon go? Troubleshooting common favicon problems in Google Search.
Below, I’ll cover several common problems I have seen while helping site owners troubleshoot favicons that disappear from the search results (or favicons that just aren’t displayed properly by Google).

1. Wrong dimensions, no favicon for you…
This is the most common issue I have seen. Google has explained in detail that favicons must be a multiple of 48×48 pixels. So, make sure your favicon is at least 48×48 or a multiple of 48×48. For example, 96×96, 144×144, etc. Don’t have a favicon that’s smaller than 48×48.

Favicon dimension guidelines from Google.

For example, a site used the following image as its favicon (blurred to avoid calling out the site). It was 50×50 and not a multiple of 48×48 pixels. Google just used the generic globe favicon. Again, meh in the SERPs.

Favicon with the wrong dimensions.

Also, the aspect ratio is important. If it’s not a square, it’s not going to work well. I’ve seen favicons that looked out of whack from an aspect ratio standpoint, or they just didn’t show up in the SERPs. For example, a site used the favicon below, which didn’t have a square ratio. Google forced it to fit the required aspect ratio (and it looked totally warped in the SERPs). Beware.

Favicon with the wrong aspect ratio.
Example of favicon with the wrong aspect ratio in the Google search results.
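To sum up the rules covered above, here’s a quick sketch (in Python, just to illustrate the checks; the thresholds mirror Google’s documented guidelines) that flags both dimension and aspect ratio problems for a favicon of known pixel size:

```python
# Sketch: validate favicon pixel dimensions against Google's guidelines
# (square aspect ratio, at least 48x48, and a multiple of 48px).

def favicon_issues(width, height):
    """Return a list of guideline problems for the given pixel dimensions."""
    issues = []
    if width != height:
        issues.append("not square")
    if width < 48 or height < 48:
        issues.append("smaller than 48x48")
    if width % 48 != 0 or height % 48 != 0:
        issues.append("not a multiple of 48px")
    return issues

print(favicon_issues(96, 96))  # [] - a valid favicon size
print(favicon_issues(50, 50))  # ['not a multiple of 48px'] - the example above
```

An empty list means the dimensions pass; anything else is a candidate reason Google falls back to the generic globe (or warps the image to fit).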

A note about favicon format: You have plenty of options:
Your favicon doesn’t have to be in the .ico format. It can be in any supported format, such as jpg, gif, png, and svg. I’ll cover more about svgs later in the post.

Favicon image formats.

2. Robots.txt blocking the favicon:
Google’s documentation states that you should allow crawling of your favicon and your homepage in order for the favicon to be used in Search. If your homepage is blocked by robots.txt, you clearly have bigger issues to worry about than just the favicon. :) But the favicon location could cause problems and be confusing from a robots.txt perspective. For example, some directives in robots.txt can be “greedy” and block more than you think.

I recommend using the robots.txt Tester in Google Search Console to make sure your favicon and homepage can be crawled. It’s a quick test and can save you some frustration. For example, here is a site with a missing favicon and it’s blocking access to the favicon. It’s a bigger brand by the way, so yes, larger companies can make this mistake too.

Favicons and robots.txt problems.
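If you want a second sanity check outside of GSC, Python’s standard urllib.robotparser can simulate whether a given set of robots.txt rules would block your homepage or favicon (the robots.txt content and URLs below are hypothetical, showing how a “greedy” directory disallow can catch a favicon):

```python
# Sketch: test whether robots.txt rules would block crawling of the
# homepage and the favicon, using Python's built-in robots.txt parser.
from urllib import robotparser

# Hypothetical robots.txt where a directory disallow catches the favicon:
robots_txt = """
User-agent: *
Disallow: /assets/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/"))                    # True
print(rp.can_fetch("Googlebot", "https://example.com/assets/favicon.png"))  # False
```

It’s not a substitute for the robots.txt Tester in GSC, but it’s a quick way to batch-check the favicon paths across several sites.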

And here’s an interesting side note. Google has a specific crawler for favicons called Google Favicon. You can check the Googlebot documentation for the user-agent string. Google will use this crawler to check your favicon when you request indexing of your homepage via Google Search Console (GSC). And the crawler will ignore robots.txt directives when someone requests a recrawl of the homepage based on a favicon change.

For example, this is directly from the favicon documentation:
“The Google Favicon crawler ignores robots.txt rules when the crawl was requested by a user.”

And here is the crawler user-agent:

Google Favicon user-agent (crawler).

But again, that’s just for Google Favicon to check the new favicon. You should still allow crawling of your homepage and your favicon if you want it to be used in the search results.

3. Duplicate favicon references and one didn’t meet Google’s favicon guidelines:
This is similar to the first issue I covered, but it includes duplicate favicon references in the homepage code (and one didn’t meet the guidelines). I’ve seen situations where one, or more, of the favicon references are to files that don’t meet the requirements and Google just displayed the generic globe favicon instead in the SERPs. So just make sure to double-check all of the references to your favicon from your homepage and make sure they are ok.

For example, this site’s favicon wasn’t showing up correctly. It turned out the homepage had multiple rel="icon" references and one didn’t meet Google’s guidelines. Fixing that by having just one rel="icon" reference pointing at the proper file enabled the site to regain its favicon in the SERPs:

Multiple favicon references causing problems.
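The fix for a situation like that might look like this minimal sketch (the file names are hypothetical):

```html
<!-- Before: two rel="icon" references in the homepage head; one points
     at a file that doesn't meet Google's favicon guidelines -->
<link rel="icon" href="/old-favicon.png">
<link rel="icon" href="/favicon.ico">

<!-- After: a single rel="icon" reference pointing at the proper file -->
<link rel="icon" href="/favicon.ico">
```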

4. Uh, empty favicon code…
Yes, this seems obvious, but I’ve unfortunately seen it in action. If you literally leave the file reference out of the favicon code, then you will obviously have favicon problems in Search. :) So if you are experiencing favicon problems, definitely double-check your code. I also recommend using Google’s various testing tools to check both the static HTML and the rendered HTML to make sure your code is correct.

Empty favicon code.
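In other words, the homepage head contains something like this (a hypothetical example):

```html
<!-- Broken: the href is empty, so there's no favicon file to fetch -->
<link rel="icon" href="">

<!-- Fixed: point the reference at your actual favicon file -->
<link rel="icon" href="/favicon.ico">
```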

5. Your platform or CMS is botched favicon-wise.
Sites that run on a specific platform or CMS may not be able to easily set or customize their favicon. And when you don’t have much control, you are relying solely on the platform or CMS to get it right. As you can guess, that doesn’t always work out well.

And yes, that means all sites using that platform could have favicon problems. I surfaced this problem recently for a smaller e-commerce platform. Google isn’t just replacing the favicon with the generic globe, it’s literally leaving the favicon blank! That’s even worse than receiving the generic favicon, in my opinion…

CMS platform causing favicon problems.

And when performing a query that brings up many sites using the platform, you can see how widespread the problem is. Yep, those are all sites on the platform with missing favicons (not even the generic favicon). And look at the second listing in the SERP… the favicon’s aspect ratio is messed up. So we have a mix of blank favicons and one warped one. Not good.

All sites running the same platform having favicon problems in the Google search results.

Bonus 1: Don’t push the limits with your favicon.
In its documentation, Google explains that it won’t show any favicon it deems inappropriate (like pornography or hate symbols). In that case, Google will simply provide the default, generic favicon. Keep this in mind when crafting a favicon… it won’t impact most sites, but it can clearly prevent your favicon from displaying properly in the SERPs.

Here is what Google explains in their favicon documentation:

Google guidelines for favicons and inappropriate images.

Bonus 2: Create an adaptive favicon that works well in dark mode.
People love dark mode, and that includes dark mode in Google Search. But I find many site owners don’t test how their favicon displays in dark mode.

If you check your favicon in dark mode and it looks less than optimal, you can always create an adaptive favicon that looks great in both light and dark mode. For example, you can create an SVG that uses a media query to ensure your favicon adapts to the current environment (light mode versus dark mode).
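A minimal sketch of that approach (the colors and shape are hypothetical): embed a prefers-color-scheme media query in the SVG itself, then reference the SVG from the homepage head. Note that support for SVG favicons and their media queries varies by browser, so test before relying on it:

```html
<!-- favicon.svg: the fill flips automatically in dark mode -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
  <style>
    path { fill: #1a1a1a; }
    @media (prefers-color-scheme: dark) {
      path { fill: #f5f5f5; }
    }
  </style>
  <path d="M4 4h24v24H4z"/>
</svg>

<!-- In the homepage head -->
<link rel="icon" href="/favicon.svg" type="image/svg+xml">
```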

Adam Argyle wrote a post on web.dev explaining how to create an adaptive favicon, walking you through the process of building an SVG that changes based on light versus dark mode. I haven’t tested it out yet, but it’s an interesting technique that seems to work well in his demo. I might try it in the near future.

Adaptive favicons

Summary: Put your best favicon, I mean foot, forward in Search with one that actually shows up.
I hope this post helped you understand some of the most common favicon problems I’ve seen while helping site owners who reached out to me. With favicons displayed prominently in the mobile search results, you don’t want a less-than-optimal favicon staring users in the face. And you also don’t want the “meh” generic favicon Google can provide, or worse, a blank one. A few minutes of digging can usually surface the core favicon problem. And once it’s fixed, you can finally have a favicon that works for you instead of against you. Good luck.

GG

Filed Under: google, seo


Copyright © 2022 G-Squared Interactive LLC. All Rights Reserved. | Privacy Policy