NOT taking the (canonical) hint: How to estimate a low-quality indexing problem by page type using Google Analytics, Search Console, SEMrush, and Advanced Query Operators

August 5, 2019 By Glenn Gabe

During a recent crawl analysis and audit of a large-scale site that was negatively impacted by Google’s core updates, I surfaced an interesting SEO problem. I found many thinner and low-quality pages that were being canonicalized to other, stronger pages, but the pages didn’t contain equivalent content. As soon as I saw that, I had a feeling I knew what I would see next… since it’s something I have witnessed a number of times before.

Many of the lower-quality pages that were being canonicalized were actually being indexed. Google was simply ignoring rel canonical since the pages didn’t contain equivalent content. That can absolutely happen and I documented that in a case study a few years ago. And when that happens on a large-scale site, thousands of lower-quality pages can get indexed (without the site owner even knowing that’s happening).

For example, imagine a problematic page type that might account for 50K, 100K, or more pages indexed. And since Google takes every page indexed into account when evaluating quality, you can have a big problem on your hands.

In the screenshot below, you can see that Google was ignoring the user-declared canonical and selecting the inspected url instead. Not good:

Google ignoring rel canonical.

In addition to just getting indexed, those lower-quality pages might even be ranking in the search results for queries, and users could be visiting those pages by mistake. Imagine they are thin, lower-quality pages that can’t meet or exceed user expectations. Or maybe they are ranking instead of the pages you intend to rank for those queries. In a case like that, the problematic pages are the ones winning in the SERPs for some reason, which leads to a poor user experience. That’s a double whammy SEO-wise.

Quickly Estimating The Severity Of The Problem
After uncovering the problem I mentioned above, I wanted to quickly gauge how bad of a situation my client was facing. To do that, I wanted to estimate the number of problematic pages indexed, including how many were ranking and driving traffic from Google. This would help build a case for handling the issue sooner rather than later.

Unfortunately, the problematic pages weren’t all in one directory, so I needed to get creative in order to drill into that data (via filtering, regex, etc.). This can be the case when the urls contain certain parameters or patterns of characters, like numerical sequences or some other identifying pattern.

In this post, I’ll cover a process I used for roughly estimating how big of an indexing problem a site has with a problematic page type (even when it’s hard to isolate that page type by directory). The process will also reveal how many pages are currently ranking and driving traffic from Google organic. By the end, you’ll have enough data to tell an interesting SEO story, which can help make your case for prioritizing the problem.

A Note About Rel Canonical – IT’S A HINT, NOT A DIRECTIVE
If you’re a site owner that’s mass-canonicalizing lower-quality pages to non-equivalent pages, then this section of my post is extremely important. For example, if you think you have a similar situation to what I mentioned earlier and you’re saying, “we’re fine since we’re handling the lower-quality pages via rel canonical…”, then I’ve got some potentially bad news for you.

As mentioned earlier, rel canonical is just a hint, and not a directive. Google can, and will, ignore rel canonical if the pages don’t contain equivalent content (or extremely similar content). Again, I wrote a case study about that exact situation where Google was simply ignoring rel canonical and indexing many of those pages. Google’s John Mueller has explained this many times as well during webmaster hangout videos and on Twitter.

And if Google is ignoring rel canonical on a large-scale site, then you can absolutely run into a situation where many lower-quality or thin pages are getting indexed. And remember, Google takes all pages indexed into account when evaluating quality on a site. Therefore, don’t just blindly rely on rel canonical. It might not work out well for you.
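
One quick way to sanity-check whether Google is taking the hint is to cross-reference the urls you are canonicalizing elsewhere (from a crawl export) against the landing pages in your GSC export. Any canonicalized url that is still earning impressions or clicks was almost certainly indexed anyway. Here’s a minimal sketch in Python, assuming two hypothetical input files (canonicalized_urls.txt from your crawler and gsc_landing_pages.csv exported from Search Console, with the page url in the first column):

import csv

# urls that declare a canonical pointing to a different url (e.g., from a crawler export)
with open("canonicalized_urls.txt") as f:
    canonicalized = {line.strip() for line in f if line.strip()}

# landing pages exported from GSC (page url in the first column, header row skipped)
with open("gsc_landing_pages.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader, None)
    gsc_pages = {row[0].strip() for row in reader if row}

ignored_hint = canonicalized & gsc_pages
print(f"{len(ignored_hint)} canonicalized urls are still earning impressions or clicks")
for url in sorted(ignored_hint)[:25]:
    print(url)

If that overlap is more than a handful of urls, Google is not taking the hint, and it’s time to dig in.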

Walking through an example (based on a real-world situation I just dealt with):
To quickly summarize the situation I surfaced recently, there are tens of thousands of pages being canonicalized to other pages on the site that aren’t equivalent content-wise. Many were being indexed since Google was ignoring rel canonical. Unfortunately, the pages weren’t located in a specific directory, so it was hard to isolate them without getting creative. The urls did contain a pattern, which I’ll cover soon.

My goal was to estimate how many pages were indexed and how many were ranking and driving traffic from Google organic. Remember, just finding pages ranking and driving traffic isn’t enough, since there could be many pages indexed that aren’t ranking in the SERPs. Those are still problematic from an SEO standpoint.

The data would help build a case for prioritizing the situation (so my client could fix the problem sooner rather than later). It’s a large-scale site with many moving parts, so it’s not like you can just take action without making a strong case. Volatility-wise, the site was impacted by a recent core update and there could be thousands (or more) lower-quality or thin pages indexed that shouldn’t be.

With that out of the way, let’s dig in.

Gauging The Situation & The Limits Of GSC
To gauge the situation, it’s important to understand how big of a problem there is currently and then form a plan of attack for properly tackling the issue. In order to do this, we’ll need to rely on several tools and methods. If you have a smaller site, you can get away with just using Google Search Console (GSC) and Google Analytics (GA). But for larger-scale sites, you might need to get creative in order to surface the data. I’ll explain more about why in the following sections.

Index Coverage in GSC – The Diet Coke of indexing data.
The index coverage reporting in GSC is awesome and enables you to view a number of important reports directly from Google. You can view errors, warnings, pages indexed, and then a list of reports covering pages that are being excluded from indexing. You can often find glaring issues in those reports based on Google crawling your site.

Based on what we’re trying to surface, you might think you can go directly to the Valid (and Indexed) report, export all pages indexed, then filter by problematic page type, and that would do the trick. Well, if you have a site with fewer than 1,000 pages indexed, you’re in luck. You can do just that. But if you have more than 1,000 pages, then you’re out of luck.

GSC’s Index Coverage reporting only provides 1,000 urls per report and there’s no API (yet) for bulk exporting data. Needless to say, this is extremely limiting for large-scale sites. To quote Dr. Evil, it’s like the Diet Coke of indexing data. Just one calorie… not thorough enough.

Search Console API & Analytics Edge
Even though exporting urls from the Valid (and Indexed) category of index coverage isn’t sufficient for larger-scale sites, you can still tap into the Search Console API to bulk export Search Analytics data. That will enable you to export all landing pages from organic search over the past 16 months that have impressions or clicks (basically pages that were ranking and driving traffic from Google). That’s a good start since if a page is ranking in Google, it must be indexed. We still want data about pages indexed that aren’t ranking, but again, it’s a start.  

My favorite tool for bulk exporting data from GSC is Analytics Edge. I’ve written about Analytics Edge multiple times and you should definitely check out those posts to get familiar with the Excel plugin. It’s powerful, quick, reasonably priced, and works like a charm.

For our situation, it would be great to find out how many of those problematic pages are gaining impressions and clicks in Google organic. Since the pages are hard to isolate by directory or site section, we can export all landing pages from GSC and then use Excel to slice and dice the data via filtering. You can also use the Analytics Edge Core Add-in to use regex while you’re in the process of exporting data (all in one shot). More about that soon.

Exporting landing page data from GSC via Analytics Edge
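
And if you prefer scripting the export over Excel, you can hit the Search Console API directly. Here’s a rough sketch using google-api-python-client and google-auth, assuming you have already completed the OAuth flow and saved an authorized token (token.json, the property url, and the date range below are placeholders). The API returns up to 25K rows per request, so the sketch pages through results until it runs out:

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# authorized user token from a previous OAuth flow (setup not shown here)
creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/webmasters.readonly"]
)
service = build("webmasters", "v3", credentials=creds)

site_url = "https://www.example.com/"  # placeholder GSC property
all_rows, start_row = [], 0
while True:
    response = service.searchanalytics().query(
        siteUrl=site_url,
        body={
            "startDate": "2019-02-01",
            "endDate": "2019-07-31",
            "dimensions": ["page"],
            "rowLimit": 25000,      # maximum rows per request
            "startRow": start_row,  # paginate beyond the first 25K rows
        },
    ).execute()
    rows = response.get("rows", [])
    all_rows.extend(rows)
    if len(rows) < 25000:
        break
    start_row += 25000

print(f"{len(all_rows)} landing pages with impressions or clicks")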

A Note About Regex For Slicing And Dicing The Data
Since the pages aren’t in one directory, regular expressions (regex) are killer here. You can filter using expressions that target certain url patterns (like isolating parameters or a sequence of characters). To do this, you can use the Analytics Edge Core Add-in in conjunction with the Search Console connector so you can export the list of urls AND filter by a regular expression, all in one macro.

I won’t cover how to do that in this post, since that can be its own post… but I wanted to make sure you understood using regex was possible.

You can also use Data Studio and filter based on regular expressions (if you are exporting GSC data via Data Studio). The core point is that you want to export all pages from GSC that match the problematic page type. That will give you an understanding of any lower-quality pages ranking and driving traffic (that match the page type we are targeting).

Now let’s get more data about landing pages driving traffic from Google organic via Google Analytics.

Google Analytics with Native Regex
In order to find all problematic page types that are driving traffic from Google organic, fire up GA and head to Acquisition, All Traffic, and then Source/Medium. This will list all traffic sources driving traffic to the site in the timeframe you selected. Choose a timeframe that makes sense based on your specific situation. For this example, we’ll select the past six months.

Then click Google/Organic to view all traffic from Google organic search during the timeframe. Now we need to dimension the report by landing page to view all pages receiving traffic from Google organic. Under Primary Dimension, click Other, then Commonly Used, and then select Landing Page. Boom, you will see all landing pages from Google organic.

Dimension by landing page.

But remember, we’re trying to isolate problematic page types. This is where the power of regular expressions comes in handy (as mentioned earlier). Unlike GSC, Google Analytics natively supports regex in the advanced search box, so dust off those regex skills and go to town.

Let’s say all of the problematic page types have two sets of five-digit numbers in the url. They aren’t in a specific directory, but both five-digit sequences show up in all of the urls for the problematic page type, separated by a slash. By entering a regular expression that captures that pattern, you can filter your report to return just those pages.

For this example, you could use a regex like:
\d{5}.\d{5}

That will capture any url that contains five digits, any character after that (like a slash), and then five more digits. Now all I need to do is export the urls from GA (or just document the number of urls that were returned). Remember, we’re just trying to estimate how many of those pages are indexed, ranking and/or driving traffic from Google. The benefit of exporting is that you can send them through to your dev team so they can further investigate the urls that are getting indexed by mistake.

Filtering landing pages by regex in Google Analytics
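
If you’d rather not filter in the GA interface (or you want to reuse the same pattern against your GSC or SEMrush exports), the same regex takes a few lines of Python. A minimal sketch, assuming a hypothetical landing_pages.csv export with the url in the first column:

import csv
import re

pattern = re.compile(r"\d{5}.\d{5}")  # five digits, any character, then five more digits

with open("landing_pages.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader, None)  # skip the header row
    matches = [row[0] for row in reader if row and pattern.search(row[0])]

print(f"{len(matches)} urls match the problematic page type")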

Note, you can also use Analytics Edge to bulk export all of your landing pages from Google Analytics (if it’s a large-scale site with tens of thousands, or more, pages in the report). And again, you can combine the Analytics Edge Core Add-in with the GA connector to filter by regex while you are exporting (all in one shot).

Third-party tools like SEMrush
OK, now our case is taking shape. We have the number of pages ranking and driving traffic from Google organic via GSC and GA. Now let’s layer on even more data.

Third-party search visibility tools provide many of the queries and landing pages for each domain that are ranking in Google organic. It’s another great data source for finding pages indexed (since if the pages are ranking, they must be indexed).

You can also surface problematic pages that are ranking well, which can bolster your case. Imagine a thin and/or lower-quality page ranking at the top of the search results for a query, when another page on your site should be there instead. Examples like this can drive change quickly internally. And you can also see rankings over time to isolate when those pages started ranking, which can be helpful when conveying the situation to your dev team, marketing team, CMO, etc.

For example, here’s a query where several pages from the same site are competing in the SERPs. You would definitely want to know this, especially if some of those urls were lower-quality and shouldn’t be indexed. You can also view the change in position during specific times.

Viewing lower-quality pages ranking in the SERPs via SEMrush

For this example, we’ll use one of my favorite SEO tools, SEMrush. Once you fire up SEMrush, just type in the domain name and head to the Organic Research section. Once there, click the Pages report and you’ll see all of the pages that are ranking in Google from that domain (that SEMrush has picked up).

Note, you can only export a limited set of pages based on your account level unless you purchase a custom report. For example, I can export up to 30K urls per report. That may be sufficient for some sites, while other larger-scale sites might need more data. Regardless, you’ll be gaining additional data to play with, including the number of pages ranking in Google for documentation purposes (which is really what we want at this stage).

You can also filter urls directly in SEMrush to cut down the number of pages to export, but you can’t use regex in the tool itself. Once you export the landing pages, you can slice and dice in Excel or other tools to isolate the problematic page type.

Query Recipes – Hunting down rough indexing levels via advanced search operators
OK, so far we’ve estimated the number of pages indexed by understanding how many pages are ranking or receiving traffic from Google. But that doesn’t tell us the number of pages indexed that aren’t ranking or driving traffic. Remember, Google takes every page indexed into account when evaluating quality, so it’s important to understand that number.

Advanced query operators can be powerful for roughly surfacing the number of pages indexed that match certain criteria. Depending on your situation, you can use a number of advanced search query operators together to gauge the number of pages indexed. For example, you can create a “query recipe” that surfaces specific types of pages that are indexed.

It’s important to understand that site commands are not perfectly accurate… so you are just trying to get a rough number of the pages indexed by page type. I’ve found advanced search queries like this very helpful when digging into an indexing problem.

So, you might combine a site command with an inurl command to surface pages with a certain parameter or character sequences that are indexed. Or maybe you combine that with an intitle command to include only pages with a certain word or phrase in the title. And you can even combine all of that with text in quotes if you know a page type contains a heading or text segment in the page content. You can definitely get creative here.

If you repeat this process to surface more urls that match a problematic page type, then you can get a rough number of pages indexed. You can’t export the data, but you can get a rough number to add to your total. Again, you are building a case. You don’t need every bit of data.

Here are some examples of what you can do with advanced query operators:

Site command + inurl:
site:domain.com inurl:12/2017
site:domain.com inurl:pid=

Site command + inurl + intitle
site:domain.com inurl:a5000 intitle:archive
site:domain.com inurl:tbio intitle:author

Site command + inurl + intitle + text in quotes
site:domain.com inurl:c700 intitle:archive “celebrity news”

Using advanced query operators enables you to gain a rough estimate of the number of pages. You can jot down the number of pages returned for each query as you run multiple searches. Note, you might need to run several advanced queries to hunt down problematic page types across a site. It can be a bit time-consuming, and you might get flagged by Google a few times (by being put in a “search timeout”), but the queries can be helpful:

Using advanced query operators to hunt down low-quality pages that are indexed.
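
If you have a long list of patterns to check, it can help to generate the query recipes up front and then paste them into Google one at a time (don’t try to automate the searches themselves… that’s how you end up in the “search timeout” mentioned above). A quick sketch, using the example patterns from this post:

domain = "domain.com"
patterns = [
    "inurl:12/2017",
    "inurl:pid=",
    "inurl:a5000 intitle:archive",
    "inurl:tbio intitle:author",
    'inurl:c700 intitle:archive "celebrity news"',
]

# print one query recipe per pattern, ready to paste into Google
for p in patterns:
    print(f"site:{domain} {p}")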

Summary – Using The Data To Build Your Case
We started by surfacing a problematic page type that was supposed to be canonicalized to other pages, but was being indexed instead (since the pages didn’t contain equivalent content). Google just wasn’t taking the hint. So, we decided to hunt down that page type to estimate how many of those urls were indexed to make a case for prioritizing the problem.

Between GSC, GA, SEMrush, and advanced query operators, we can roughly understand the number of pages that are indexed, while also knowing if some are ranking well in Google and driving traffic. In the real-world case I just worked on, we found over 35K pages that were lower-quality and indexed. Now my client is addressing the situation.

By collecting the necessary data (even if some of it is rough), you can tell a compelling story about how a certain page type could be impacting a site quality-wise. Then it’s important to address that situation correctly over the long-term.

I’m sure there are several other ways and tools to help with understanding an indexing problem, but this process has worked well for me (especially when you want to quickly estimate the numbers). So, if you ever run into a similar situation, I hope you find this process helpful. Remember, rel canonical is just a hint… and Google can make its own decisions. And that can lead to some interesting situations SEO-wise. It’s important to keep that in mind.

GG

Filed Under: google, google-analytics, seo, tools, web-analytics

Google’s Core Algorithm Updates and The Power of User Studies: How Real Feedback From Real People Can Help Site Owners Surface Website Quality Problems (And More)

July 2, 2019 By Glenn Gabe

Google just rolled out another broad core algorithm update on June 3 (which was preannounced by Google’s Danny Sullivan). And once again, the core ranking update was big. It wasn’t long before you could see significant impact from the update across sites, categories, and countries. Some sites surged, while others dropped off a cliff. And that’s par for the course with Google’s core updates.

Here are three examples of drops from the June 2019 Google Core Update:

But I’m not here to specifically cover the June update. Instead, I’m here to cover an extremely important topic related to all broad core ranking updates – conducting user studies. It’s something I have mentioned in a number of my posts about major algorithm updates, and Googlers have mentioned it too by the way. More on that soon.

My post today will cover the power of user studies as they relate to core ranking updates, and provide feedback from an actual user study I just conducted for a site impacted by several major updates. By the end of the post, I think you’ll understand the value of a user study, and especially how it ties to Google’s core updates by gaining feedback from real people in your target audience.  

Google: Take A Step Back And Get Real Feedback From Real People:
After broad core updates roll out, like the December 2020 core update or May 2020 core update, Google’s John Mueller is typically pummeled with questions about how to recover, which factors should be addressed to turn things around, etc. And as I’ve documented many times in my posts about core updates, there’s never one smoking gun for sites negatively impacted. Instead, there’s typically a battery of smoking guns. John has explained this point many times over the years and it’s incredibly important to understand.

But beyond just taking a step back and surfacing all potential quality problems, John has explained another important point. He has explained that site owners should gain objective feedback from real users. And I’m not referring to your spouse, children, coworkers, top customers, etc. I’m talking about feedback from objective third parties, i.e., people who don’t know your site, your business, or you before visiting the site.

When you conduct a study like that, you can learn amazing things. Sure, some of the feedback will not make you happy and will be hard to take… but that’s the point. Figure out what real people think of your site, the user experience, the ad situation, the content, the writers, etc. and then form a plan of attack for improving the site. It’s tough love for SEO.

Here is one video of John explaining that site owners should gain feedback from objective third-parties (at 13:46 in the video). Note, it’s one of several where John explains this:

Conducting User Studies Through The Lens Of Google’s Core Updates:
When you decide to conduct a user study in order to truly understand how real people feel about a site, it’s important to cover your bases. But it can be a daunting task to sit back and try to craft questions and tasks for people that will capture how they feel about a number of core site aspects. As I explained above, you want to learn how people really feel about your content-quality, the writers, the user experience, the advertising situation, trust-levels with the site, and more. So, crafting the right questions is important.

But where do you even begin??

Well, what if Google itself actually crafted some questions for you? Wouldn’t that make the first user study a lot easier? Well, they have created a list of questions… 23 of them to be exact. And they did that in 2011 when medieval Panda roamed the web.

The list of questions crafted by Amit Singhal in the blog post titled More guidance on building high-quality sites provides a great foundation for your first user study related to Google’s core algorithm updates.

Google’s blog post from 2011 containing 23 Panda questions.

For example, the questions include:

  • Would you trust the information presented in this article?
  • Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
  • Would you be comfortable giving your credit card information to this site?
  • Does the article provide original content or information, original reporting, original research, or original analysis?
  • Does the page provide substantial value when compared to other pages in the search results?
  • How much quality control is done on content?
  • And more…

As you can see, these are incredibly important questions to review. The questions can absolutely help you better understand how real users are experiencing your site, how they feel about your site, and ultimately, the questions can help craft a remediation plan covering what you need to change or improve on your own site.

I have used these questions (or variations of them) to run both quick and dirty user studies, and formal studies. The feedback you can receive is absolutely gold. Not just gold, but SEO gold in the age of broad core ranking updates.

Let’s face it, this is exactly the type of information that Google is trying to evaluate algorithmically. So, although it’s not easy to run user studies, and it can be time-consuming and tedious, it’s one of the most important things you can do as a site owner.

Beyond The 23 Panda Questions, More Ideas From The Quality Rater Guidelines (QRG)
The Panda questions provide a great foundation, but you can absolutely run more user testing using Google’s Quality Rater Guidelines (QRG) as your foundation. And there are a boatload of topics, ideas, and questions sitting in the 166-page guide that Google uses with its own quality raters.

Google’s Quality Rater Guidelines (as of May 2019)

I won’t spend too much time on the QRG in this post, but some topics you might test include:

  • Page quality.
  • Needs-met ratings.
  • Usability.
  • Monetization and Advertising.
  • Authors, writers, review boards.
  • Site reputation.
  • E-A-T.
  • User intent.
  • And more…

Now, you can just trust me (and John) and think that user testing is important, or you might want more information. For example, you might want to see what you can really learn from a user study. Well, I’ve got you covered. I just conducted a user study for a site that was heavily impacted by the March core update (and that has seen major volatility during several core updates over the years). The feedback we received from the user study was awesome and I’m going to share some of it with you (without revealing the site). I think you’ll get the power of user studies pretty quickly.

User Testing Results: What You Can Learn From Real People: Health/Medical Case Study
Again, the site has seen big swings (up and down) during various core updates and I’ve been helping them identify all potential quality problems across the site (including content-quality, technical SEO, user experience, advertising situation, site reputation, UX barriers, and more).

After fully auditing the site, I used the Panda questions mentioned earlier as the foundation for the user study and tailored some of those questions for the niche and site. Below, I’ll provide some of the things we learned that I thought were extremely important for my client to understand. Remember, this is real feedback from real people.

Test-wise, I not only used multiple choice questions, but I also used open-ended questions to learn more about how each user felt about certain situations. In addition, I used a platform that provides session recordings of each user going through the study. For this study I used usertesting.com and I’ll explain more about testing platforms later in this post.

I can tell you that watching and listening to people experience a site is absolutely fascinating. There is so much you can learn from hearing the reaction of users, picking up things they say, and watching how they navigate a site or page.

So, the combination of quantitative feedback, qualitative feedback, and viewing recorded sessions provides the ultimate recipe for surfacing potential problems on a site. And that feedback can directly help site owners craft a remediation plan that goes beyond fixing minor issues. Instead, you can start to address deeper issues and problems. And that’s exactly what Google’s core updates are about… Google is evaluating a site overall and not just looking at one or two factors. Remember, there’s never one smoking gun.

The combination of quantitative and qualitative feedback,
plus recorded user sessions = SEO Gold.

First, some quick background information about the user study:
By the time I was setting up the test, I had already fully analyzed the site and provided many areas for improvement. But, we wanted to gain feedback from real users in the site’s target audience about a number of important topics. Also, I wanted to use the 23 Panda questions as a foundation for the test.

Audience Selection:
Since usertesting.com has a panel of over one million people, I was able to select specific demographic information that enabled us to make sure the test participants were part of my client’s target audience. For example, I was able to select gender, age, household income, if they were parents (and how old their children were), job status, web expertise, and more. I’ll cover more about this later.

Selecting an audience is powerful within usertesting.com’s test creation process.

So, what were some things I wanted to learn from the participants? Here are a few of the things I was interested in:

  • Did users trust the information provided in several articles I asked them to read?
  • Did they think the articles were written by experts, or just people heavily interested in a topic?
  • Was the content original? Or did they think it could easily be found elsewhere on the web?
  • Did they recognize the brand? How about the founders and writers?
  • How did they feel about recency, original publication dates, if the articles were updated, and how that was treated on the page?
  • I asked them to review and provide feedback about the background and experience of the site owners, authors, and the medical review board.
  • I wanted to know if the participants thought there was an aggressive, disruptive, or deceptive advertising situation (since this was a problem when I first started analyzing the site).
  • And more… there were 39 different questions and tasks I had the participants go through.

Below, I’ll cover some pieces of feedback that we thought were extremely helpful. By the way, some of the responses (and video clips) were eye-opening. I’ll provide the details below.

Examples of feedback from the user study (in no specific order):

  • Balance – Several participants mentioned the importance of balance in the article. For example, thoroughly covering the benefits AND risks of certain topics. Again, this is something that can be very important in articles, especially YMYL articles.
  • Triggers – I learned that certain words were triggers for some people, which I could only hear in the video clips. I would have never known that from multiple choice questions. For example, when certain words were read aloud, some participants would react in a way that clearly showed how they felt about that topic. They even said, “whenever I read {enter word here}, that immediately throws up a red flag for me”. Wow, amazing feedback for the site owners.
  • Sources and Credibility – Along the same lines, the sources and citations were extremely important for some of the participants. Some explained that if they see Wikipedia as a source, they immediately become skeptical. One even said it discredits the article. For example, one user said, “wait, so it’s reviewed by a doctor, but it cites Wikipedia… not sure I trust this article at all.”
  • Trust & Reactions – When asked whether she trusted one of the articles, one participant laughed out loud. Again, hearing people in the video is incredibly powerful. And laughing is typically not a good thing for a YMYL site. :)
  • Publish Dates – There were several important pieces of feedback regarding publish dates, updated dates, etc. First, some assumed that if there was an updated date on the article, then that meant the entire article had been fully reviewed again. That can be deceptive, since the articles just had specific pieces updated.
  • More About Publish Dates – Some participants absolutely wanted to see the original publish date and the updated date. They did not just want the updated date, since that makes them search for clues about when the article was originally published. Some participants explained the process they go through to find the original publish date, which included checking the sources being cited (and the dates associated with those sources). And then they use a savvy approach of checking the comments for dates.
  • Social Proof – I heard one participant explain how if she sees a lot of comments, then that means it must be a popular website. Very interesting… Comments are tough for many sites due to the onslaught of spam, the time involved in moderating comments, etc., but they do seem important for some people.
  • Author Expertise – Several participants wanted to know the background of the writers as they were reading each article. Since the articles they were reading covered health topics, they immediately went into “skeptical mode”. This was important to see and underscores the importance of having experts write the content.
  • Citing Sources – Several participants explained that just a link to a source wasn’t enough for some articles. They wanted to see stats and facts backing up some claims (in the article itself). For example, maybe providing some of the data directly in the article versus just linking out to another article.
  • “Just A Blog…” – I heard several remarks comparing blogs to medical websites. For the health niche, this was very interesting feedback. There was a negative stigma with blogs for some users, especially for health/medical topics.
  • Advertising Situation – Advertising-wise, there were also some interesting pieces of feedback. Remember, there was an aggressive advertising situation when I first started helping the client, so I was extremely interested in hearing what the participants thought of the current ad situation (which has improved, but the site owners haven’t moved as far as I would like them to). I heard one user literally counting the number of ads as she scrolled down the page. 1, 2, 3, wait more, 4, 5. But in a strange twist, she then said the ad situation was fine… She knew there were a number of ads, but didn’t find them distracting. It’s extremely important to make sure the advertising situation is ok, since Google has explained that aggressive ads can impact a site algorithmically over time.
  • Affiliate Marketing – Regarding affiliate links, I did hear, “Are they just trying to sell me something?? Ok, they probably are…” This is something I have brought up to my client during the audit and it’s a tough conversation to have. But remember, Google has explained that there’s a fine balance when delving into affiliate links or affiliate marketing in general. There must be a lot of value added versus monetization. If the scale tips in the wrong direction, bad things can happen Google-wise. So this piece of feedback was extremely important to see/hear directly from users.
  • Author Expertise – When asked about the expertise of the author of an article, the user started scrolling to find the author information and then said, “Wait, it’s a blog… no, I don’t trust the author at all.” I heard this type of comment several times during the user study. More about building a brand and credibility soon.
  • Content-Quality – When asked about original content across the articles, almost all of the users in the study said there was some original content, but some of it could easily be found in other places across the web. Not one person said the content was original. This underscores the importance of tackling subject matter where you can provide original content, ideas, perspectives, etc. If you write about what many others are writing about, the content can be viewed as quasi-original. That’s not good enough for a tough niche.  
  • Content Value – When asked about substantial value from the content compared to other articles on the topic, every one of the users said it was average compared to the others. You clearly don’t want to strive for “average”. You want 10X content. This was great for my client to see. They have strong articles overall, but users saw them as average compared to the competition.
  • Side note: SERP UX – When watching users go to Google and look for a competing article, it was fascinating to see several scroll right by the featured snippet and select something a little farther down the page (in the standard organic results). Sure, this isn’t a large sample size, but just an interesting side note.
  • Site Design – When researching other articles on a topic, a user commented that all the sites look the same. And those sites ranged from some of the top health sites on the web to academic sites to health blogs. Site design, branding, etc. comes into play here and it’s something that I don’t think many focus on enough.
  • Brand Recognition – Regarding brand, every one of the users in the study said they never heard of the site, brand, etc. This is clearly a signal that the site owners need to work on branding. For example, getting the brand out there more via PR, reaching eyeballs beyond their core audience, etc.
  • Recency – For health topics, I heard one user explain that she definitely wants to see more recent articles on a topic. The article she was reading was a few years old, and that didn’t seem recent enough for her. Recency seemed important (but it must actually be recent and not just an “updated on XX” tag slapped on the page).
  • Affiliate marketing – More comments about “they are advertising {enter product here}” while reading an article. So yes, users pick up on affiliate links. Again, the value from the article must outweigh the monetization piece.
  • Citing sources – There were positive comments about certain sources that were cited, like Consumer Reports, a scientific study, etc. For health articles, I saw users in the video checking the sources at the bottom of the page, which could help build credibility.
  • Medical review board – Overall, the users liked that articles were reviewed by a medical review board. I heard this several times while reviewing the recorded sessions of participants reading the articles.
  • Expertise and Credibility – When asked about the expertise and background of the site owners, authors, and medical review board, there were plenty of interesting comments. For example, having a medical review board with various types of doctors, nutritionists, etc. seemed to impress the participants. But I did hear feedback about wanting to see those credentials as quickly as possible on the page. In other words, don’t waste someone’s time. Don’t be too cute. Just provide the most helpful information that builds credibility as quickly as possible.
  • Awards and Accolades – For various awards won, users wanted a link to see more information about that (or they wanted to see more on the page itself). It’s clearly not good enough in this day and age to simply say you won something. Let’s face it… anyone can say that. They want proof.
  • Trust – When asked if they would be comfortable giving their credit card information to the site, most responded, “I’m not sure I would go that far…” or “No, definitely not”. So, there were clearly some breakdowns with trust and credibility. I saw this throughout various responses in the study. My client has some work to do on that front.
  • UX barriers – I noticed errors pop up twice while reviewing the video clips of users going through the site. If these are legit errors, then that’s extremely helpful and important to see. I passed the screenshots along to my client so their dev team could dig in. It’s just a secondary benefit of user testing (with video recordings of each session).
  • And there were many more findings…

As you can see, between reading their responses, hearing their reactions, and then watching each video session, we gained a ton of amazing feedback from the user study. Some of the feedback was immediately actionable, while other pieces of feedback will take time to address. But overall, this was an incredible process for my client to go through.

User Testing Platforms – Features & User Panel
If you just read the sample of findings above and are excited to conduct your own user study, you might be wondering where to start. Well, there are several important things to consider when preparing to launch a user study. The first is about the platform you will use.

Usertesting.com is probably the most well-known platform for conducting user studies and it’s the one I used for this test. I was extremely impressed with the platform. The functionality is killer and their panel of over one million people is outstanding.

In addition, participants sign a non-disclosure agreement (NDA), which can help reduce the chance of your test getting shared publicly. Some sites wouldn’t care about this, but others would care. For example, I know a number of my clients would NOT want the world knowing they are running a user study focused on trust, quality, advertising situation, etc.

Audience-wise, I was able to select a range of criteria for building our target audience for the user study (as covered earlier). This enabled me to have participants that were closely tied to my client’s target audience. It’s not perfect, but can really help focus your audience.

Functionality-wise, you can easily create multiple choice questions, open-ended questions, etc. You can also use Balanced Flow to send users through two different test flows. This can enable you to test different paths through a site or different customer experiences.

Here are some screenshots from the test creation process:

Pricing-wise, usertesting.com isn’t cheap… but could be well worth the money for companies that want to perform a number of user tests (across a range of actions). Remember, the sky’s the limit with what you can test. For example, site design, usability, features, content-quality, site trust, and more. I was ultra-impressed with usertesting.com.

Beyond usertesting.com, I also looked into UsabilityHub (Google is a client of theirs, by the way) and Userlytics. I have not used these other platforms, but they could be worth looking into since they also have large panels of users and what seem to be strong features.

Closing Tips and Recommendations:
Before ending this post, I wanted to provide some closing tips and recommendations when setting up your first test. I am by no means an expert on user testing, but I have learned some important lessons while crafting tests:

  • First, user testing is not easy. It can be time-consuming and tedious (especially when analyzing the results). Build in enough time to craft your questions and flow, and then enough time for fully analyzing the results. You might be surprised how much time it takes to get it right.
  • For Google’s core updates, you can definitely use the 23 Panda questions as a foundation for your test. You also might take a subset of those questions and then tailor them for a specific niche and site. After that, you can use the Quality Rater Guidelines as a foundation for additional tests.
  • Try not to ask leading questions. It’s very hard to avoid this… but don’t sway the results by leading someone down a certain response path.
  • Session recordings are killer. Make sure you watch each video very carefully. I’ve found you can pick up some interesting and important things while watching and listening to users that are trying to accomplish a task (or just while they are reviewing a site).
  • Take a lot of notes… I had a text editor up and running so I could timestamp important points in the videos. Then it was easy to go back to those clips later on while compiling my results.
  • Try to gain both quantitative and qualitative feedback from users. Sure, multiple choice questions are great and can be quick and easy, but open-ended questions can yield important findings that might not be top-of-mind when crafting your test. And then layer on videos of each session, and you can gain a solid view of how real users view your site, content, and writers.
  • Find the right balance for the number of participants. Usertesting.com recommends up to 15 participants for a test. Don’t overload your test, which can lead to data overkill. Try different numbers of participants over a series of tests to see what yields the most valuable results. For some tests, 5 participants might be enough, while other tests might require 15 (or more).  

Summary – User testing can be a powerful tool for sites impacted by Google’s core ranking updates
Google has explained many times that it is looking at many factors when it comes to broad core ranking updates. That includes content-quality, technical SEO, user experience (UX), advertising situation, E-A-T, and more. Google’s John Mueller has also explained that it’s important to take a step back and objectively analyze your site.

Well, a great way to objectively analyze your site is by conducting user testing. Then you can have objective third-parties go through your site, content, features, etc., and provide real feedback. I’ve found this process to be extremely valuable when helping companies impacted by major algorithm updates since it can surface qualitative feedback that is hard to receive via other means. I recommend trying this out for your own site (even if you haven’t been impacted by core updates). I think you’ll dig the results. Good luck.

GG

Filed Under: algorithm-updates, google, seo, tools

Beyond The 1K Limit – How To Bulk Export Data From GSC By Search Appearance Via Analytics Edge (including How-to, Q&A, and FAQ)

June 11, 2019 By Glenn Gabe

Google has been releasing more and more new features in the search results that can have a big impact on SERP treatment, click-through rate, and potentially traffic. Three of those features are part of Google’s “best answer carousels” and include Q&A, How-to, and FAQ snippets. Q&A has been live for a while already, while How-to and FAQ were just rolled out during Google I/O. Note, you can read my post about How-to snippets to learn more about how they work, what they look like, etc.

I have several clients heavily using these formats, so I’ve been analyzing their performance via Google Search Console (GSC) recently — via the new enhancement reports and the Performance reporting. For example, once you start marking up pages using Q&A, How-to, or FAQ markup, you will see new reports show up under the Enhancements tab in GSC. And those reports can be very helpful for understanding errors, warnings, and valid pages.

How To Analyze Performance In GSC By Search Feature:
From a clicks, impressions, and CTR standpoint, you can check the Performance report to view your data over time. Note, if you have Discover data, then there will be two reports under Performance. The first will say Search Results and the second will be titled Discover.

Once in the Performance report (or the “Performance in search results” report), the Search Appearance tab enables you to drill into data by specific feature. You can see that the site from earlier has both How-to and Q&A results. If you click each category title, then you will be isolating that search feature in the reporting (i.e. the reports will be filtered by that search feature). So, you can view queries, landing pages, etc. for just Q&A or How-to results. This applies to AMP as well.

The 1K Row Limit In GSC – The Bane Of A Site Owner’s Existence
Filtering the reporting by search feature is powerful, but remember, GSC only provides one thousand rows per report in the web UI and you can only export those one thousand rows. For smaller sites, that should be fine. But for larger-scale sites with thousands, tens of thousands, or more listings, the reporting can be extremely limiting.

For situations like that, what’s a site owner to do??

Analytics Edge To The Rescue Again. Exporting Beyond 1K Results By Search Feature:
I’ve written several posts about Analytics Edge before, and it’s still my go-to tool for exporting data in bulk from GSC. It’s a powerful Excel plugin that enables you to quickly and efficiently export bulk data from GSC, GA, and more.

Below, I’ll take you step-by-step through exporting your data by Search Appearance from GSC. If you’re a large-scale site that’s using Q&A, How-to, FAQ, and even AMP, then you’re going to dig this. Let’s jump in.

How to use Analytics Edge to bulk export data by Search Appearance:
Note, this will be a two-phase approach. The first run will enable us to pull all Search Appearance codes for the specific property in GSC. Then the second run will enable us to pull all data by that Search Appearance code.

Phase One:

  1. Download and install the Analytics Edge free or core add-in. There’s a free trial for the core add-in if you want to simply test it out. But the free add-in will work as well (just with less functionality). After installing the add-in, you should register it in Excel.
  2. Next, install the Search Console connector by clicking the License/Update button in the menu. You can watch this short video to learn how to install connectors. You can click the Google Search row to pull up the connector details (where you can choose to install that connector).
  3. Once you install Analytics Edge and the Search Console connector, access the options in the Analytics Edge menu at the top of Excel. Click the Google Search drop-down and select Accounts. This is where you will connect Analytics Edge with the Google account(s) you want to download data from. Go through the process of connecting the Google account you want to work with. You can also make one account the default, which will save you time in the future.
  4. Once you connect your account, click Google Search in the Connectors section, and then Search Analytics.
  5. Name your Macro and click OK.
  6. Select an account and then a property from GSC.
  7. We will use a two-phase approach for exporting data by Search Appearance. First, we are going to view the various options we have under Search Appearance (there will be codes that show up representing each SERP feature available to a property). We will use these codes during our second run to pull all data for each specific search feature (like How-to, Q&A, FAQ, AMP, etc.)
  8. Under the Fields tab, select searchAppearance, which will move that option to the Dimensions window.
  9. For the Dates tab, you can leave “Last 3 Months” active (which is the default).
  10. Leave everything else the same and click “Finish”.
  11. Analytics Edge will return all of the possible Search Appearance codes for the site for the time period you selected. For example, in the screenshot below, there were impressions and/or clicks for AMP (article and non-rich results), Q&A, How-To, and others for the property I selected.
  12. Copy the codes from the searchAppearance column to a text file so you can reference them in phase two of our tutorial. You will need these codes to export all data by that specific search feature.

Phase Two:

  1. Now we are going to use the searchAppearance codes to export data in bulk for a specific search feature. Click Google Search again, and then Search Analytics. Choose an account and a property again. When you get to the options screen, select the dimensions you want to export (in the Fields tab). For this example, let’s select query (to see the queries yielding How-to snippets in the search results).
  2. Next, go to the Filters tab and find the Appearance field. In that field, enter TPF_HOWTO, which is the code for How-to snippets. If you want to export data for another search feature, just use that code instead.
  3. Next, select the dates you want to run the report for. For this example, I’ll select “Last 28 days”.
  4. Then under Sort/Count, select clicks and then descending in the “sort by” dropdown. This will sort the table by the queries with the most clicks over the past 28 days (that yield How-to snippets).
  5. Then click “Finish”.
  6. Analytics Edge will run and export all of the queries yielding How-to snippets. This can take a bit of time (from a few seconds to a minute or longer) depending on how large the site is and how much data needs to be exported. Note, just a sample of data will be presented in memory in the worksheet (and highlighted in green). You need to “write to worksheet” to show all of the data.
  7. To do that, click File and then “Write Worksheet”. Name the worksheet and click OK. You will now see a new worksheet containing all of your data. For this example, I see 26K+ queries that have yielded How-to snippets over the past 28 days. Yep, over 26K!
  8. Congratulations! You just exported search feature data from GSC and blew by the 1K row limit!
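
By the way, if you ever want to script the same two-phase approach instead of using Excel, the Search Console API exposes the same data (and, as far as I can tell, searchAppearance can’t be combined with other dimensions in one request, which is exactly why two runs are needed). Here’s a rough Python sketch, assuming you’ve already completed the OAuth flow and saved an authorized token; the property url and dates are placeholders:

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# authorized user token from a previous OAuth flow (setup not shown here)
creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/webmasters.readonly"]
)
service = build("webmasters", "v3", credentials=creds)
site_url = "https://www.example.com/"  # placeholder GSC property

# phase one: list the searchAppearance codes available to the property
codes = service.searchanalytics().query(
    siteUrl=site_url,
    body={
        "startDate": "2019-05-12",
        "endDate": "2019-06-08",
        "dimensions": ["searchAppearance"],
    },
).execute()
print([row["keys"][0] for row in codes.get("rows", [])])

# phase two: pull all queries filtered to one code (e.g., TPF_HOWTO for How-to snippets)
howto = service.searchanalytics().query(
    siteUrl=site_url,
    body={
        "startDate": "2019-05-12",
        "endDate": "2019-06-08",
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "searchAppearance",
                "operator": "equals",
                "expression": "TPF_HOWTO",
            }]
        }],
        "rowLimit": 25000,
    },
).execute()
print(f"{len(howto.get('rows', []))} queries yielding How-to snippets")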

Tracking, Using, and Learning From The Data:
Once you export your data by Search Appearance in bulk, you will have full access to all of the queries yielding Q&A, How-To, FAQ snippets, AMP, and more. You can track their position, double check the SERPs to understand the SERP treatment for each feature, understand the click-through rate for each snippet, and more.

For example, you might find that “list treatment” for How-tos is yielding a higher click through rate than the carousel treatment. Or you might find that a certain How-to has a featured snippet in addition to the How-to snippet (a How-to/featured snippet combo). And then you can check metrics based on that situation. You get the picture!

You can see an example of a How-to/featured snippet combo below:

A How-to/featured snippet combo.

My recommendation is to export data by each search feature for the past 28 days and start digging into the data. Then regularly export the data (like weekly) to understand the changes over time, e.g., changes in metrics, SERP treatment, and more.

Summary – Free Yourself From the 1K Row Limit In GSC By Exporting SERP Feature Data Via Analytics Edge
Now that more and more features are hitting the SERPs, using a tool like Analytics Edge can help you export all of your data, versus just one thousand rows per report. And when you export all of the queries and landing pages per SERP feature, you can glean more insights from the data. If you are using AMP, How-to, Q&A, or FAQs, then I highly recommend exporting your data via a tool like Analytics Edge. I think you’ll dig it.

GG

Filed Under: google, seo, tools, web-analytics

Exploring Google’s New How-to Snippets In Search And On Smart Displays: SERP Treatment, Fresh GSC Data, Video Templates, Monetization, and more

May 14, 2019 By Glenn Gabe

Google is always working to enhance features in the search results (SERPs). Over the years, we have seen the SERPs change from ten blue links to a mix of images, videos, featured snippets, one-box results, immersive mobile results, and more. There are many features beyond ten blue links now, a topic John Mueller covered at Google I/O last week.

Some of those new features are minor additions, while others rock the SERP. Well, there’s one new feature that was just released at Google I/O, and it’s a multifaceted feature that gives us a glimpse into the future of Google. It’s called How-to snippets and I’ll cover more about this new feature below.

How-To Snippets: Part of the answer carousel trifecta
Last year at the Google Dance in Singapore, Google announced three new formats for what they call “best answer carousels”. The three features were Q&A, How-to, and FAQ. They are powered by structured data and help users receive rich answers to their questions.

Q&A rolled out already, while How-to has been in testing for several months. Both How-to and FAQ were released last week at I/O, but my focus for this post is on How-to snippets. I believe they are an extremely important feature that impacts the search results, and beyond. More about that soon.

There are several sites I work with that have implemented How-to markup (including thousands of How-tos in total) and I’ve been digging deeper to learn more about the new search feature, how it’s being implemented, what it looks like in the SERPs and on Google Assistant smart displays, and how it will impact publishers. Again, it’s live in the SERPs today, so you will definitely see more of How-to snippets as time goes on.

How-to snippets – Providing step-by-step instructions directly in the SERPs and on Google Assistant Smart Displays
First, if you’re on the “featured snippets are taking traffic away from publishers” side of the SEO debate, then you might want to brace yourself. How-to snippets can take up a lot of real-estate and provide a number of steps along the way (directly in the SERPs and on smart displays). As of today, you can see How-tos in the search results, but they have not been rolled out to smart displays yet. That’s coming later this month.

In the SERPs, Google isn’t providing all of the steps from the How-to, but the SERP feature does take up significant real-estate (with visuals when they are provided).

That’s fine for now, but on Google Assistant smart displays, you can walk through all of the steps in the How-to without visiting the site in question. And if you’re monetizing via eyeballs on your own site, then your revenue-warning antenna should have just gone up… :)

Remember, Assistant is extremely important to Google, including its booming smart displays like Google Home Hub, which was just renamed to Google Nest Hub.

Also, Google could expand the number of steps in the SERPs at any point for How-to snippets. So instead of just providing some of the steps like it does now, it could potentially include more (or all) of them. And if that happens, it could throw a major wrench into monetization, email signups, or any other conversion or micro-conversion that would occur on a site.

SERP Treatment: Show me the snippets!
From a SERP treatment standpoint, there are two core treatments as of today. You can see a list version that uses an accordion to reveal information for each step. And below the steps in the list version, you will see one link to visit the site for all of the steps.

Note, these are not all clients of mine… these are just How-to snippets I have seen in the wild from sites that have implemented How-to markup.

And the other treatment is when you have photos for each step. When you do, Google can provide a carousel of steps with visuals. Users can tap each element in the carousel to be taken to that step on the publisher’s website. This is accomplished via the How-to markup added to your pages.

Here are some additional examples of what How-to snippets look like in the wild (via sites that have implemented How-to markup):

As you can see, How-to snippets have eye-catching SERP treatment, provide a number of steps involved to complete a task, and could yield visit-less users for sites with the snippets.

For example, imagine there are just a few steps involved with the How-to. Then all of the steps could be listed right in the SERP. Time will tell how much site owners approve of this new feature… and I’m sure their monetization teams will be watching very closely.

Note, site owners could always just remove the markup if they feel they are losing traffic, but then that leaves important SERP real estate for other sites (like competitors). It’s probably going to be a tough decision for many site owners, especially when Google Assistant smart displays begin showing the full How-to.

On that note, here is what How-tos will look like once they arrive on smart displays. The full How-to will be available and users will be able to view the necessary tools, equipment, products, etc., along with each step in the How-to.

Here’s the video from Google I/O (at 7:40 in the video):

How-to Schema and The Rich Results Test
If you have How-to content on your site and you would like How-to snippets, then your first stop should be the Google Search developer documentation for adding How-to markup. Google provides a wealth of documentation and code examples for setting up the necessary markup.
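To make the structure concrete, here's a minimal sketch of what How-to markup generally looks like, built as a Python dictionary and serialized to JSON-LD. The name, step text, URLs, and images are hypothetical placeholders; check Google's developer documentation for the full set of required and recommended properties.

```python
import json

# A minimal, hypothetical How-to structured data object built as a Python
# dict and serialized to JSON-LD. All values are placeholders.
howto = {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "How to replace a phone screen",
    "step": [
        {
            "@type": "HowToStep",
            "name": "Power down the phone",
            "text": "Turn off the device and remove the SIM tray.",
            "url": "https://www.example.com/replace-screen#step1",
            "image": "https://www.example.com/images/step1.jpg",
        },
        {
            "@type": "HowToStep",
            "name": "Remove the back cover",
            "text": "Use a pry tool to gently lift the back cover.",
            "url": "https://www.example.com/replace-screen#step2",
            "image": "https://www.example.com/images/step2.jpg",
        },
    ],
}

# Wrap in a script tag for embedding in the page, then validate the output
# with the Rich Results Test.
print('<script type="application/ld+json">\n%s\n</script>' % json.dumps(howto, indent=2))
```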

For testing your markup, you can use the Rich Results Test, which enables you to either test a url or test a code snippet. Once you do, you can preview the How-to in the search results directly from the tool. It’s a great way to see your markup in action:

You can also view a preview of both types of How-to snippets, including thumbnail view (the carousel) and list view:

How-to Carousel
How-to List View

No Site? No Problem! Introducing How-To Video Templates – Repurposing Your How-to YouTube Videos For Search and Smart Displays
Some people have joked that in the future, Google might just pull information from publishers and provide that in the SERPs or on smart displays (no website needed). Well, Google sort of announced that at I/O.

If you have How-to YouTube videos and don’t have a site, then you can use video templates provided by Google to add your How-to information in a structured way. Once uploaded, Google can transform your YouTube video into a How-to that can be displayed in the SERPs or on Google Assistant smart displays. You can find more information about this in the developer document that was published when I/O started.

Future Monetization Opportunities:
As mentioned earlier, the full How-to will be available on smart displays, and it's not hard to imagine an opportunity for monetization via ads, or links to purchase the necessary materials, products, ingredients, etc. Or maybe even sponsored How-tos in the SERPs as a separate ad unit? Anything is possible…

That’s just my opinion, since Google hasn’t announced any monetization opportunities yet. But let’s face it, Google is an ad-driven company and providing advertising in How-tos makes a lot of sense.

Here are two mockups of how Google could provide ads in How-tos (in the SERPS). Smart displays would be clearly different, and could yield ads in between steps, the opportunity to buy materials, and more:

Side Note: How-To + Featured Snippets = MASSIVE SERP Coverage
If you’re wondering about how featured snippets impact How-tos, yes, you can land both featured snippets and How-to snippets in the same SERP. That’s at least for now… And when you do, you can take over massive SERP coverage. You’ll have the initial featured snippet in position 0 and then a How-to snippet that can rank in the top 10. Just an important side note.

Early How-to Data: GSC Performance Reporting
With the launch of How-to snippets, Google has added a new enhancements report for How-to markup and a search appearance filter in the performance reporting for How-to as well. Using both, you can troubleshoot How-to markup on your own pages, while also seeing traffic data like impressions, clicks, CTR, and position for your How-to snippets.

For example, here is a sample enhancements report listing errors, warnings, and valid How-to markup on a site. This image was from Google’s blog post about How-to snippets:

And here is the performance report with How-to listed as a search appearance filter:

I’m only seeing a few days of data so far, but as you can see, there are plenty of clicks to the site from How-to snippets. The click-through rates are pretty strong across some queries, but also pretty average for others. It really depends on the specific SERP, where the How-to snippet ranks, and which other features are displayed alongside the How-to.

For example, you might be competing with featured snippets, video carousels, interesting finds, and other features that can overshadow How-tos. I’ve even seen local one-boxes or local packs overshadow some How-tos.

Also, urls that rank in both featured snippets and How-tos in the same SERP could have skewed metrics. I believe you will see data for both combined in GSC (which can yield higher CTRs and clicks than if the data was just presented for the How-to snippet). I have to analyze more results to see if that’s the case, but I think I’m seeing that already.

So you’ll definitely want to analyze your own SERPs heavily to determine how much impact those How-tos are having (at least in the SERPs). Smart displays are another story. Here are some stats for two different How-to queries. The first ranks higher than the second, which is impacting CTR. They also have different SERP features competing with them.

And here’s an interesting example with an INSANE click through rate of 73%. This is what can happen when you mix breaking news with a How-to (if that ever fits with your site). It’s still early obviously, but that’s a crazy-high CTR!

Again, How-to has not rolled out to smart displays yet, and I'm not even sure how that will be reported in GSC (if at all). There should be additional reporting in my opinion, since it's a different user experience. Most users will just go through the How-to on the smart display and never reach the site, so there may not be traffic data to report. I asked Google's John Mueller about this in the last webmaster hangout video and he said he wasn't sure how that will be reported in GSC either. It's still very new.

Here is the video (at 28:22 in the video):

I’m sure publishers would like to know how many times a How-to was triggered via Assistant, how far a person made it through the How-to, and if they did end up visiting the site via the link in the Google Home app (which gets added when a How-to is presented). I’ll update this post after How-tos roll out to smart displays.

That’s a good segue to an important topic – feeding Assistant.

The Big Picture – Feeding Google Assistant
Let’s expand our view of what’s going on and explore why would Google want to provide a list of steps like this directly in the search results. We know Google has already heard from many people that featured snippets could be stealing traffic. So why push the limits here with How-to snippets?

Well, Google Assistant is booming and it’s an important part of Google’s future. The family of Google Home devices has been a big success so far, and many people have at least one in their homes. Actually, many have multiple devices between Home, Home-mini, and Home Hub. And Google hasn’t figured out monetization… yet. They absolutely have to and I’m sure there are many people working on that right now.

Even though assistants like Home and Echo have soared, research shows that most people use them for very basic things. For example, to check the weather, get the news, check a sports score, get a quick answer, etc. There are over 80K skills available now for Alexa and over 4K for Google Assistant, but most people don’t even know those skills exist, how to trigger them, etc. So, all of these devices are still used for rudimentary tasks. Again, for now.

Well, one great way to enhance usage, and possibly open up monetization opportunities, is to provide richer answers in a step-by-step guide. For example, if you are looking for help with fixing your mobile phone, you can view and hear a breakdown of the steps involved, view products and materials that could help you fix the phone (with links to buy them), maybe see some ads along the way, etc. And as mentioned earlier, there are many YouTube How-to videos that could be repurposed into How-tos for smart displays. That’s why Google created the How-To Video Templates for video publishers (mentioned earlier).

Therefore, How-to snippets could feed Google Assistant with a ton of step-by-step guides. And by the way, when Google provides special SERP treatment like featured snippets, review snippets, authorship markup of the past, AMP, How-to snippets, and more, publishers typically clear their schedules to implement the necessary technical changes to receive those features. And I’m sure that’s exactly what’s going to happen here with How-to markup. That means Google Assistant could soon be fed millions of How-tos in a structured format (including videos if publishers use the new How-To Video Templates I covered earlier).

It should be very, very interesting to watch how this goes for publishers implementing How-to markup. Will they be ok with feeding the Google beast without much to gain, or will they rip the markup out in order to gain more traffic to their own sites? I’ll be watching closely, that’s for sure.

Summary – Keep an eye on How-to snippets. You can’t miss them in the SERPs (and soon on smart displays)
How-to snippets in the SERPs could be a big deal for Google, publishers, and users. One thing is for sure, it should be interesting to see how this plays out. For those sites that provide How-to content, you should absolutely have this on your radar and watch the rollout very closely. Your traffic could be on the line. And you should test How-to markup on your own site (including videos).

I recommend digging into How-to schema and getting familiar with how it works, how it should be structured, etc. Remember, this is a bigger and badder sibling to featured snippets, with a smart display spin. It will be hard to miss them in the SERPs and via Assistant. I’ll update this post with more information as the rollout continues and I have more data.

GG

Filed Under: google, mobile, seo, tools

The March 12, 2019 Google Core Algorithm Update – A Softer Side Of Medic, Trust And The Link Graph, Quality Still Matters, And The Importance of the “Kitchen Sink”

March 25, 2019 By Glenn Gabe Leave a Comment

In 2018, we saw three broad core ranking updates that caused massive volatility in the search results globally. Those updates rolled out in March of 2018, August of 2018 (Medic Update), and then late September of 2018. All three were huge updates, which sent some sites dropping off a cliff and others surging through the roof.

Since late September/early October of 2018, we have been waiting for the next big core update. And it finally arrived on March 12, 2019. And the update didn't disappoint. Once again, there was a ton of movement across sites, categories, and countries. It didn't take long to see the impact. For example, here's a massive surge in search visibility for one site and a large drop for another, both starting on March 12.

A site that surged during the March 12 core update.
A site dropping off a cliff after the March 12 core update.

Although there was a lot of movement (and chatter) about the health/medical niche again, the update clearly impacted many other categories (just like the Medic update did). For example, I have many sites documented that saw significant movement in e-commerce, news publishers, lyrics, coupons, games, how-to, and more.

Note, I cover a number of topics in this post. To help you navigate to each section, I’ve provided a table of contents below:

  • Google’s comments about the March 12, 2019 update.
  • A full Medic reversal? Not so fast…
  • A softening of the Medic Update.
  • Tinkering with trust via the link graph.
  • Examples of sites that surged.
  • Examples of sites that dropped.
  • A note about site reputation, reviews, and ratings.
  • Taking a “kitchen sink” approach to remediation.

Google’s comments about the March 12 Core Update: Reversals, Neural Matching, and Penguin
Before I get into my analysis, it’s important to know that Google provided some information about the March 12 update to Barry Schwartz at Search Engine Land. Google explained that the latest update on 3/12 wasn’t a full reversal. That makes complete sense based on what I’m seeing. More on that soon.

Google also explained that all of the recent core updates (including this one) had nothing to do with any neural matching updates. They checked each of those updates and NONE lined up with these core updates. That’s incredibly important to understand since there are some that believe the updates had a lot to do with neural matching, which is an artificial intelligence method designed to help Google connect words to concepts.

Google’s comments about neural matching and core updates.

The example they provided last fall when they mentioned neural matching demonstrated that you could search for the "soap opera effect" just by describing it in your query, and Google could understand what you were referring to. If you want to learn more about neural matching, Google just provided information about the differences between neural matching and RankBrain.

Google also explained that these updates had nothing to do with Penguin. Now, that doesn’t mean Google isn’t evaluating links as part of this update… it just means that Google’s Penguin algorithm didn’t have anything to do with the March 12 update. More about links soon.

A Full Medic Reversal? Not so fast…
Based on the impact, it was easy to see that many sites that were impacted by the August update (Medic) saw a change in direction. And some saw a radical change in direction, which led some to believe that the March 12 update was a complete reversal of Medic. That’s definitely not true based on the data I’ve been analyzing (and what many others have seen as well).

For example, there are definitely sites that surged on March 12 after dropping heavily on August 1. But there are also sites that dropped heavily on August 1 and then dropped even more on March 12, and sites that increased on August 1 and then surged even more on March 12. Here is an example of a site that reversed course after getting hit by the Medic update:

Another site seeing recovery after a Medic hit.

But here are two sites seeing more movement in the same direction (they were either hit by Medic and dropped more on March 12, or surged during Medic and then gained even more).

A site dropping more during the March 12 update after a Medic hit.
A site surging more on March 12 after a Medic surge.

Also, it’s worth noting that there are many sites that saw some improvement on March 12 that had dropped heavily on August 1 (so they experienced a partial recovery and not a full one). And on the flip side, there are some sites that dropped on March 12 after surging in August, but didn’t drop all the way back down to their previous levels. For example, draxe.com surged back on March 12, but not all the way back. There are many sites with trending like that:

draxe.com reversing course on March 12 after a big Medic drop.

Side note: Since I’ve been analyzing the health/medical niche heavily since the August update, it was ultra-interesting to see “the big three” get knocked down a few notches. I’m referring to healthline.com, webmd.com, and verywellhealth.com. In particular, healthline.com and verywellhealth.com experienced some big drops in search visibility. That makes sense, since many other sites in health/medical surged during this update. Based on those surges, it’s only natural that the top three players who were dominating health queries would experience a drop as others see gains. But there may be more to those drops than just that. I’ll cover more about what I’m seeing soon after analyzing many sites that saw movement during this update.

The big 3 sites in health dropped during the March 12 update.
For example, verywellhealth.com lost significant search visibility.

Algorithm Tinkering – Softening From The Medic Update & E-A-T Calculations
I mentioned the idea of a Medic reversal earlier in the post and how I don’t believe there was a full reversal. But that doesn’t mean Google couldn’t have softened some of the algorithms it used in the August update. Again, there were many sites seeing improvement during the March update that got hammered during the August update.

I was able to ask John Mueller this exact question during a recent webmaster hangout video. John gave a vague answer, but he did explain that Google sometimes goes too far with an update and needs to pull it back. He also said this can work the other way around, where Google didn’t go as far as they should, so they strengthen their algos.

Here’s the video of John explaining this (at 20:04 in the video):

A softening of the Medic Update is entirely possible. It was one of the biggest and baddest updates I have ever seen. There were extreme drops and surges in traffic everywhere I looked. And the health/medical niche saw the most volatility (hence the name Medic).

Every time I analyzed a site impacted by the August update, I couldn’t help but think the update had a unique and extreme feel. Some sites got slaughtered when it was hard to see why they would deserve dropping that much (70%+ for some sites). Anyway, the important part of this section is that Google could have tinkered with its algorithms to pull back some of the power of the Medic update. That’s entirely possible and could cause massive volatility when they do that.

That’s a good segue into the most scalable way for Google to tinker with trust, especially for Your Money or Your Life (YMYL) sites and content – I’m referring to links.

The Link Graph and Tinkering With Trust
After analyzing the Medic update, and then the September update, I explained that the best way for Google to tinker with trust would be via the link graph. For example, if Google simply increased the power of certain links while decreasing the power of other links, that could cause mass volatility across the web (just like we saw with the August update).

And that’s especially the case with YMYL sites. For example, queries that can “impact the future happiness, health, financial stability, or safety of users”. You can read Google’s Quality Rater Guidelines (QRG) for more information about YMYL sites.

After the Medic update, it was clear that YMYL sites were impacted heavily, and more heavily than other types of sites. After analyzing a number of sites and digging into their link profiles, you could see the gaps between certain sites that surged and dropped (from a link power perspective). For example, I found links from the CDC, CNN, and other extremely powerful domains pointing to sites that surged, when compared to competing sites that dropped.

Let’s face it, you can’t just go out and gain links like that overnight. It led me to believe that Google could be tinkering with trust via the link graph. I included information about that in my post about the September 27, 2018 core update and in a separate LinkedIn post.

Beyond what I just explained, Google released a whitepaper recently that made this even clearer. That whitepaper revealed more information about how Google evaluates expertise, authoritativeness, and trustworthiness (E-A-T) algorithmically. And it supports my point about tinkering with trust via links.

Google explained that E-A-T is algorithmically evaluated by several factors, the best-known factor being PageRank (which uses links across the web to understand authoritativeness). Yes, that means Google can use links to evaluate E-A-T, which makes complete sense. Like I’ve said in the past, it’s the most scalable way to tinker with trust…

Going one step further, Google also explained that when it detects YMYL queries, it can give E-A-T more weight. So, that means the right links can mean even more when a query is detected as Your Money or Your Life (YMYL). This could very well be why many YMYL sites were heavily impacted by the August update (Medic). It’s just a theory, but again, makes a lot of sense.

Now with the March 12 update, we could very well be seeing an adjustment to the algorithms that are used to evaluate E-A-T. Any sort of power adjustment to specific links can cause mass volatility (especially for YMYL queries). Remember, many YMYL sites saw a lot of movement during the March 2019 update, so it's entirely possible. And as I explained above, Google's John Mueller said that Google can dial down certain algorithms if it believes it went too far.

Examples of Surges and Drops
I have a list of 165 domains that were impacted by the March 12 update, including a number of clients that saw movement (either up or down). I wanted to provide some examples of impact (both positive and negative), including findings based on those surges or drops. I can’t cover everything I’ve seen while analyzing sites that were impacted, but I did want to cover some extremely interesting situations.

Disclaimer: Before we begin, I have to provide my typical disclaimer regarding major algorithm updates. I do not work for Google. I don’t have access to its core ranking algorithm. I don’t have a hidden camera in the search engineers’ cafeteria in Mt. View, and I haven’t infiltrated Google headquarters like Tom Cruise in Mission Impossible. I have been to Google Headquarters in both California and New York, and it was tempting to crawl through the vents to find a computer holding all of Google’s algorithms, but I held back. :)

I’m just explaining what I’ve seen after analyzing many sites impacted by algorithm updates over time, and how that compares with what I’m seeing with the latest Google updates. I have access to a lot of data across sites, categories, and countries, while also having a number of sites reach out to me for help after seeing movement from these updates. Again, nobody knows exactly what Google is refining, other than the search engineers themselves.

Sites that surged:

Health/Medical Site – Taking The “Kitchen Sink” Approach To Remediation
The first site I’ll cover is in the health/medical niche that got hit hard during the Medic update in August of 2018. The site lost over 40% of its Google organic traffic overnight.

Once I dug into the situation, I began surfacing a number of problems. I worked with this client for over four months on surfacing all potential problems across the site, including content quality issues, user experience (UX) issues, aggressive monetization, a lack of author expertise for certain types of content, rendering and performance problems, and more.

The site owners have worked hard to make as many changes as possible during the engagement, and they plan to tackle even more of them over time. Overall, a number of important changes were made to improve the quality of the site, increase E-A-T as much as they could, fix all technical SEO problems they could, decrease aggressive monetization, etc.

And then March 12 arrived, and the site began to surge. As of today, it’s up 72% since 3/13 (the first full day of the rollout). It’s a great example of taking a “kitchen sink” approach to remediation. Now, as mentioned earlier, it’s hard to say how much of the surge is based on remediation versus a softening of what rolled out in August (or some combination of both), but it’s hard to ignore all of the changes this site made since then.

E-commerce site surging – The Importance of Being Proactive Versus Reactive
Another client that surged is an e-commerce retailer selling a high-end line of products. I helped them a few years ago during medieval Panda days. They improved greatly during that time and their organic search traffic surged during 2015 and 2016. And they contacted me a few months ago after seeing declines during both the August and September updates in 2018. Unfortunately, the site had gone off the tracks slightly and they were experiencing a downturn in traffic.

So I dug into a crawl analysis and audit of the site and began surfacing any problems that could be hurting them. That included digging into their technical SEO setup, reviewing content quality, user experience (UX), performance, and more.

They tend to move fast with changes, so as I was sending findings through, they were addressing those problems very quickly. For example, there was a massive canonicalization issue across many of their category and product pages. In addition, there were some content problems riddling the site.

In particular, all of their category pages had the dreaded fluff descriptions that some e-commerce retailers provide. You know, forced content that doesn't really help users… And to add insult to injury, most of those descriptions (which were very long) were partially hidden on the page. Users had to click "read more" to reveal the full description. It was pointless and was clearly there just for SEO purposes.

Even though John Mueller has repeatedly explained that e-commerce sites should NOT do this, and that they don't need to do this, many still employ the tactic. My client took a leap of faith and removed most of that content from every category page on the site (and there were many). They replaced it with just a few helpful lines of copy crafted for the user. So about 80% of the text was removed from each description.

In total, they made a number of important changes on the site (beyond what I just explained). And on March 12, they began to surge. It was fascinating to watch a number of those category pages jump to #1 for their most competitive queries. Not #2 or #3…. But #1. And their average sale is in the thousands of dollars. Needless to say, they were excited to watch this happen. The site is up 57% since 3/13 (the first full day of the rollout).

An e-commerce category page jumps to #1 after the March 12 update.

B2C Content Site (including information and reviews) – A Competitive Battle
The next example is a battle between two competitors. I've worked extensively with one of the sites, which has experienced a decline over time. They have worked hard on improving the site overall, including nuking a lot of lower-quality and thin content from the site, while also improving the user experience. They have experts producing content and have built a very strong brand over the years. Actually, it's one of the strongest brands in its niche.

The other site has surged over the past few years and had overtaken my client in terms of search visibility. They are a relatively new brand in the space (think years versus decades for my client). Even though they have surged over the past two years, they have many problems across the site from a quality standpoint, which includes aggressive advertising problems, thin content, over-optimization, and more.

Starting in August, the competitor finally started to drop after years of increasing. And during the March 12 update, they dropped even more. Both sites have insanely powerful link profiles with millions of links each (and many from powerful sites in their niche). But my client has cleaned up many problematic things over the past year, while the competitor has all sorts of problems as mentioned above.

E-commerce retailer in a controversial niche – BBB vs. overall reputation
The next example is an interesting one, considering the recent focus on E-A-T. It’s an e-commerce retailer that’s in a controversial niche. It got smoked during the Medic update, almost getting cut in half visibility-wise. The site has an e-commerce store, but also a lot of educational information, tips, etc. There are thousands of user reviews for the site, with a high average rating. The reviews are handled by a third-party service.

During the latest update on March 12, the site absolutely surged. It has regained a good amount of visibility, although it’s not back to where it was prior to the Medic update.

But there’s a reviews dichotomy here, which is interesting. Their user reviews are very strong, but their BBB profile is not. They have an F rating and there are a number of complaints that haven’t been addressed. I’ll touch on BBB ratings later as well, but it’s a great example of Google not using the hard BBB rating, but possibly evaluating reviews from across the web (which makes much more sense).

How-To Content (UGC) – Managing “Quality Indexation”, Improving Technical SEO, and Subdomain Impact
The next site is a large-scale site focused on how-to content that dropped during the Medic update. Since there’s a lot of user-generated content (UGC), there is always the danger of thin or low-quality content getting published at scale. The site has several subdomains targeting different countries.

Once I dug into a crawl analysis and audit of the site, I surfaced many different issues across a number of important categories. For example, although there was a lot of 10X content (super high-quality), there was a lot of thin and low-quality content mixed in. My client moved quickly to determine which pieces of content should be nuked from the site (by either 404ing or noindexing that content). They have removed about 100K urls as of today.

Next, there were technical SEO issues that could be causing quality problems. For example, canonical issues, performance problems, some render issues, and more. This client moves very quickly, since their dev team is great. It’s not uncommon for me to send findings through and have them implement changes within a few days (or even quicker).

With the March 12 update, we initially thought the site didn’t see much of a gain. But the devil’s in the details. When you look at overall traffic or search visibility, the site didn’t move very much. But, when you check each subdomain, the trending tells a different story. The international subdomains experienced nice gains during the update, while other pieces of the site either remained stable (or even decreased slightly). It’s a great example of a broad core ranking update impacting sites at the hostname level. More about that next. Here’s how some of the subdomains looked after the March 12 update:

Impact at the hostname level. Subdomains increasing during the March 12 update.
Another subdomain surging during the March 12 update.
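If you want to check for this kind of hostname-level impact on your own site, here's a minimal sketch that groups a page-level export by hostname with pandas. The filename and column names are assumptions; adjust them to match your own GSC export or API pull.

```python
from urllib.parse import urlparse

import pandas as pd

# Hypothetical page-level export with one row per URL ("Page") and its clicks.
df = pd.read_csv("pages.csv")

# Group clicks by hostname to see whether specific subdomains are driving
# the movement, rather than the site as a whole.
df["hostname"] = df["Page"].apply(lambda u: urlparse(u).netloc)
by_host = df.groupby("hostname")["Clicks"].sum().sort_values(ascending=False)
print(by_host)
```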

Side note about hostname-level impact: Reminder, broad core ranking updates can impact sites at the host level (like the subdomain example I just provided). If you’ve followed medieval Panda closely, then that should sound familiar. Here’s a tweet from Pedro Diaz explaining a conversation he had with Google’s Gary Illyes. Gary explained that algorithms like Panda collect data at the page-level, but act at the hostname-level. That’s exactly what I’ve seen across a number of sites that were impacted by these broad core ranking updates over time. It’s just an interesting side-note:

Examples of sites that dropped:
Large-scale YMYL Health/Medical – Being Proactive After Surging With Every Update Until March 12

The first example of a big drop is a YMYL site that had been doing extremely well in a very competitive niche. They reached out to me in the fall since they wanted to have the site audited to make sure they could avoid damage down the line. In other words, although they were surging, they didn’t really know why. And they wanted to make sure they continued doing well. So, they were proactively seeking SEO assistance versus reactively addressing a hit down the line. That’s always a smart approach.

Health site surging during the Medic update.

I started analyzing the site in February and it didn’t take long to surface some very big problems. With every twist and turn, I was finding many issues including massive thin content problems, low-quality content, JavaScript render problems, canonical issues, and more. I literally couldn’t figure out why this site had been surging so much.

I brought this to my client’s attention very quickly and have reinforced that point several times. They did begin making some changes, but I mentioned that the next update could bring bad news (since I had a good feeling we were close to a big update… since we hadn’t seen a broad core ranking update since September of 2018).

And then March 12 arrived and the site got hit hard. As of today, the site is down 37% since March 13 (the first full day of the rollout). And when checking queries that dropped and their corresponding landing pages, they line up with the problems I have been surfacing. For example, thin content, empty pages, pages that had render issues, so on and so forth.

The site got hit hard during the March 12 update.

Large-scale Health Publisher – no E-A-T, republishing content, aggressive ads
The next example is a YMYL site that focuses heavily on health. It surged during the Medic update and then even more during the September update.

But it just got crushed during the March 12 update. There are a number of problems across the site that are hard to ignore. First, much of the content has no author listed, so it’s impossible to know if it’s written by a doctor or a kid in high school. Author expertise is extremely important, especially for YMYL content.

Second, there’s an aggressive ad problem. There are low-quality ads all over the site, especially mega-loaded at the bottom of the page.

It’s also worth noting that the site uses an m-dot for its mobile content. With mobile-first indexing, it’s extremely important to make sure your m-dot contains the same content, directives, structured data, links, etc. as your desktop version. The mobile version is what Google is using for indexing and ranking purposes. There were definitely gaps between the desktop pages and mobile pages, so that could be contributing to the drop.

For example, here are some problems I surfaced when comparing mobile pages to desktop pages:
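If you want to run a quick parity spot-check of your own m-dot pages against their desktop equivalents, here's a rough sketch using requests and BeautifulSoup. The URL pair is hypothetical, it only compares a few signals (title, canonical, rough word count), and it won't catch differences that only appear after JavaScript rendering.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical desktop/m-dot pairs; swap in your own URLs.
pairs = [
    ("https://www.example.com/article", "https://m.example.com/article"),
]

def summarize(url):
    # Fetch the raw HTML and pull a few comparable signals from it.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    canonical = soup.find("link", rel="canonical")
    return {
        "title": soup.title.string.strip() if soup.title and soup.title.string else None,
        "canonical": canonical.get("href") if canonical else None,
        "word_count": len(soup.get_text(" ", strip=True).split()),
    }

for desktop, mobile in pairs:
    d, m = summarize(desktop), summarize(mobile)
    for key in d:
        if d[key] != m[key]:
            print(f"{key} differs: desktop={d[key]!r} mobile={m[key]!r}")
```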

Also, and this is important, the site consumes a lot of syndicated content. I’ve mentioned problems with doing this on a large scale before and it seems this could be hurting the site now. Many articles are not original, yet they are published on this site with self-referencing canonical tags (basically telling Google this is the canonical version). I see close to 2K articles on the site that were republished from other sources.

Here is what happened during the March 12 update. The site lost a significant amount of search visibility:

Niche Publisher (International site) – Aggressive and Deceptive Ads, Not Secure, Lacking Author Expertise
The next site I’ll cover is an international publisher focused on a very specific niche. The site was not affected by the Medic update in August, but surged like mad during the September update.

Publisher surging during the September 2018 update.

They had increased even more since September and surpassed 80K users per day. But March 12 ended up being a very bad day for them. The site dropped heavily, losing 46% of its Google organic traffic since the update rolled out.

When reviewing the site, there were a few things that stood out immediately. First, there was a big aggressive advertising problem (with certain elements that were deceptive too). For example, the hero image for each article was a large ad right in a core area of the content. I guarantee some users were clicking that image and being whisked off to the advertiser's site. I've mentioned ads like this many times before in my articles about major algorithm updates. Hell hath no fury like a user scorned.

Next, there were aggressive ads woven into the content. For example, accordion ads that were injected in between paragraphs. They were large and intrusive.

Beyond that, there is no author information at all. For each article published, users (and Google) have no idea who wrote the content. With Google always wanting to make sure they are sending users to the most authoritative posts written by people with expertise in a niche, having no author information is not a good idea.

And last, but not least, the site still hadn’t moved to https. Now, https is a lightweight ranking factor, but it can be the tiebreaker when two pages are competing for a spot in the SERPs. Also, http sites can turn off users, especially with the way Chrome (and other browsers) are flagging them. For example, there’s a “not secure” label in the browser. And Google can pick up on user happiness over time in a number of ways (which can indirectly impact a site rankings-wise). Maybe users leave quickly, maybe they aren’t as apt to link to the site, share it on social media, etc. So not moving to https can be hurting the site on multiple levels (directly and indirectly).

Large-scale Lyrics Website – UX Barriers, Aggressive, Disruptive, and Deceptive Ads + The Same Content As Many Other Sites
The next site I’ll cover is a large-scale lyrics site that has gotten hit by multiple algorithm updates over the years (including when medieval Panda roamed the web). And it had experienced significant volatility in 2018, where it dropped during the March 7, 2018 update, and then surged with the September 2018 update. It was clearly in the gray area of Google’s algorithms. Unfortunately, the site just got hit hard by the March 12 update, cutting some of the gains it made in September of 2018.

When reviewing the site, it was clear there were some problems. First, any site that provides the same exact content as other sites must provide some type of value-add. If not, you are leaving Google with a very hard decision when it’s presented with many options for users searching for the same content available on many sites across the web.

Actually, Google’s John Mueller just covered this again during a recent webmaster hangout video. He explained that if a site contains the same exact content as many others, then they should try to provide as much unique value as possible.

Here is the video (at 45:10 in the video):

This site does not provide a value-add. It’s literally the same lyrics content that’s available on many other sites. As an alternative, some sites provide song meanings, information about the artists, concert information, and more.

Second, there is ultra-aggressive advertising on the site. I’ve mentioned many times in the past the problems with aggressive and disruptive advertising and how sites that employ them often fare during major Google algorithm updates (especially when combined with other problems). This site contained deceptive ads in prominent areas of the content, which I’m sure is infuriating some users.

So, we have a site with aggressive, disruptive, and deceptive advertising, and the same exact content as many other sites on the web (without providing any value-add). The combination is clearly not working for them (as they have ridden the Google roller coaster for years – surging and dropping with many Google algorithm updates).

A Note About Reputation – Focus on overall reputation, not a single score.
I don’t want to spend too much time on this, since this post is already long. We know that Google doesn’t use hard BBB ratings when evaluating sites. I asked John Mueller about that a few months ago. But, that doesn’t mean Google is ignoring overall reputation across the web. There’s a big difference between the two.

After the March 12 update, I checked the ratings and reviews for a number of sites that surged or dropped and found that BBB ratings/reviews often did not correlate with the movement (i.e., strong BBB ratings didn't reliably line up with surges, and weak BBB ratings didn't reliably line up with drops). That said, overall reputation could be impacting those sites.

For example, I noticed that a large gaming company has a BBB rating of F with many complaints. They didn't drop at all during this update. Overall, their games have been a massive hit with millions of users. And many of those users love the games (and have reviewed them, blogged about them, shared that across social media, and more). So, it's a good example of a site with an F rating from the BBB, but a different reputation across the entire web.

I also checked a huge e-commerce retailer, which surged during the March 12 update, even though it has 510 complaints via the BBB. But they have a huge following and a much different reputation overall across the web. This also leads me to believe that if Google is using reputation, they are doing so in aggregate and not using third-party scores or ratings.

This topic could yield an entire post… but I just wanted to mention reputation as it relates to these broad core ranking updates. In my opinion, I would focus on overall reputation (which is what you should be doing anyway). And even if you have specific third-party ratings that are low, that doesn’t mean those specific ratings will drag your site down if you have other positive signals across the web.

What Site Owners Can Do – The “Kitchen Sink” Approach To Remediation
My recommendations aren’t new. I’ve been saying this for a very long time. Don’t try to isolate one or two problems… Google is evaluating many factors when it comes to these broad core ranking updates. My advice is to surface all potential problems with your site and address them all. Don’t tackle just 20% of your problems. Tackle close to 100% of your problems. Google is on record explaining they want to see significant improvement in quality over the long-term in order for sites to see improvement. See my examples above for how that can work.

Here are some things you can do now (for improvement down the line). Again, this list should be familiar…

  • Review and address content quality. Surface all thin or low-quality content and handle appropriately. That could mean boosting content that’s low-quality, 404ing that content, or noindexing it.
  • Analyze your technical SEO setup and fix any problems you surface as quickly as you can. I've always said that what lies below the surface could be scary and dangerous SEO-wise. For example, canonical problems, render problems, crawling problems, and more (see the quick spot-check sketch after this list).
  • Objectively analyze your advertising situation. Do you have any aggressive, disruptive, or deceptive ads? Nuke those problems.
  • Are there user experience (UX) barriers on your site? Are you frustrating users? If so, remove all of the barriers and make your site easy to use. Don’t frustrate users. Remember, hell hath no fury like a user scorned.
  • On that note, are you meeting and/or exceeding user expectations? Review the queries leading to your site and then the landing pages receiving that traffic. Identify gaps and fill them.
  • Make sure experts are writing and reviewing your content (especially if you focus on YMYL topics). Google is clearly looking for expertise when it matches users with content.
  • Build the right links, and not just many links. It's not about quantity, it's about quality. We learned that Google (partly) evaluates E-A-T via PageRank, which is based on links across the web. Use a strong content strategy along with a strong social strategy to build links naturally. When you do, amazing things can happen.
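As referenced in the checklist above, here's a quick spot-check sketch for a small sample of URLs: it flags whether each page carries a noindex directive and where its canonical points. The URLs are placeholders, and for anything beyond a handful of pages you'd want a full crawl instead.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical URLs to spot-check; swap in pages you suspect are low-quality.
urls = [
    "https://www.example.com/thin-page-1",
    "https://www.example.com/thin-page-2",
]

for url in urls:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")

    # Does the page carry a meta robots noindex?
    robots = soup.find("meta", attrs={"name": "robots"})
    noindex = bool(robots and "noindex" in robots.get("content", "").lower())

    # Where does the canonical point (if one exists)?
    canonical = soup.find("link", rel="canonical")
    canonical_href = canonical.get("href") if canonical else None

    print(url, resp.status_code,
          "noindex" if noindex else "indexable",
          ("canonical -> " + canonical_href) if canonical_href else "no canonical")
```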

Summary – The March 12 Update Was Huge. The Next Is Probably A Few Months Away
Google only rolled out three broad core ranking updates in 2018. Now we have our first of 2019 and it impacted many sites across the web. If you’ve been impacted by the March update, it’s important to go through the steps I listed and look to significantly improve your site over the long-term. That’s what Google wants to see (they are on record explaining this).

Don’t just cherry pick changes to implement. Instead, surface all potential problems across content, UX, advertising, technical SEO, reputation, and more, and address them as thoroughly as you can. That’s how you can see ranking changes down the line. Good luck.

GG

Filed Under: algorithm-updates, google, seo
