The Internet Marketing Driver


7 Examples of Untrackable Clicks From Google’s Ecosystem of Search, Chrome, and Feeds

February 12, 2019 By Glenn Gabe

Information is powerful. And for SEOs, it can help inform, guide, and drive change. That’s why we’re always looking for more data from Google to better understand where traffic is coming from, what people are searching for, which elements in the search results are driving those clicks, and more. And that’s also why tools such as Google Search Console (GSC), Google Analytics, and a number of third-party tools are critically important for providing context to search traffic.

On that note, even though there are some outstanding third-party tools, we rely heavily on Google to provide that information. Third-party tools can be extremely helpful, but there’s nothing like getting your data directly from the belly of the beast.

Change Is Constant And So Are The SERPs, Chrome, and More
I recently wrote a post for my Search Engine Land column that demystified the metrics in Google Search Console (GSC), including how Google calculates impressions, clicks, and position in the search results and how that translates to your reports. My column seemed to resonate with the SEO community, which supports the fact that we are all extremely interested in (or maybe obsessed with) data from Google. It also underscores the point that we want to know exactly what we’re looking at in our reports.

So, data is one thing, but the correct interpretation of data is another. That leads me to the core point of this post – untrackable clicks from Google, often from new and interesting SERP features, Chrome, and feeds.

What Are Untrackable Clicks From Google?
Untrackable clicks are visits from SERP features, or Google’s ecosystem of Chrome and feeds, that don’t have specific reporting in Google Search Console (GSC) or Google Analytics. You can often see impressions and clicks for the query and page (if your content is ranking in Search), but the search feature itself isn’t trackable in Search Console (at least not yet).

And for Google’s ecosystem beyond Search (like Chrome and Discover), those features don’t have a logical place in GSC yet (and may never), and those visits can show up in Google Analytics in a number of ways (which obviously don’t reveal the features they are coming from). It’s all a bit confusing…

Also, context is extremely important for SEOs and site owners when they are analyzing their organic search efforts. So, understanding when listings are ranking in certain SERP features, or within Google’s ecosystem, is important. Then you can better understand how users are getting to your content, analyze the content that’s ranking, and then craft strategies to potentially land more of those rankings.

Before we begin… the importance of posting feedback in GSC
Just to be clear, site owners and SEOs don’t have to sit around and twiddle their thumbs while untrackable clicks rack up in GSC. Collectively, we do have a voice. The Google Search Console team is a smart group of savvy Googlers that wants to provide the best functionality for site owners.

They have recently given us extremely powerful functionality in GSC like the index coverage reporting and the URL inspection tool, and I know they seek feedback from users about what they could add in the future.

So definitely post feedback in GSC when you see something wrong, if you have an idea for new functionality, etc. John Mueller has said many times that the team does receive and read your feedback, and that it can help validate what they hear from other sources. So, use the feedback form in GSC!

Submitting feedback in Google Search Console (GSC)

Below, I have provided a list of seven examples of untrackable clicks from Google (from the SERPs and from Google’s ecosystem of Chrome and Feeds). Note, I can’t cover every possible form of untrackable click in this post, and the list will keep expanding based on innovations from Google… That said, these are seven of the top examples.

Video Carousels and New “Video Packs”
The mobile SERPs are filled with carousels, which makes sense… Carousels provide a slick way to pack more listings into a single search result. One type of carousel that has taken off recently is the video carousel, which started booming in the SERPs in mid-2018 and is now present for many queries (some would argue too many queries, but that’s for another post).

Also, I just noticed (and tweeted) that video packs are showing up in place of many carousels in the mobile search results (showing between 4 and 10 videos per block). That’s an interesting move and one you should definitely be aware of if video is important to your efforts.

For now, there’s no specific reporting for video carousels or new video packs in Search Console. The “video” filter in GSC under Search Type will only show you rankings from the video tab in the SERPs (video search), and not video carousels or packs from the core search results (10 blue links).

The video search type in GSC just filters video tab results.

From a video carousel standpoint (carousels are blended into the SERPs), if a listing shows up in a video carousel and is scrolled into view, then it will register an impression. But your listing needs to scroll into view for that to happen. And since there’s no filter for video carousels in GSC, those clicks and impressions show up just like any other clicks and impressions from organic search.

It would be incredible to know when certain urls were ranking in video carousels and then obviously drill into the queries that yielded those rankings.  

From a tracking perspective, a video carousel filter would be very helpful. For example, in GSC’s performance reporting, you can currently filter AMP rich results and other rich results, but you can’t view video carousel data. I think based on the video carousel boom, a filter there would be extremely helpful. See a mockup below of what that could look like:

A video filter under Search Appearance could reveal video carousel or video pack data.

Discover Feed
Discover is a customized feed of news, information, articles, evergreen content, video, and more. It was previously called “the Google Feed” and you can find your Discover feed in the Google app and on the homepage of Google.com on your mobile device (when you are logged in). Google announced that 800M people now use Discover (as of September of 2018).

It’s important to note that you don’t need to be in Google News to rank in Discover. Google is providing information based on your interests, sites you visited in the past, the topics you have selected to follow, entities you have selected to follow in the SERPs, and more. And it includes evergreen content that’s not new to the web, but might be new to you.

Discover feed in the Google search app.
Google providing more topics that users can follow
based on their interests.

Although many people are browsing Discover (some have even called it their new Instagram feed) and clicking through to articles from it, there’s currently no way for site owners to see that happening. It’s another form of untrackable click from across Google’s ecosystem.

For example, I’ve seen articles from various SEOs I know in my Discover feed and clicked through to those articles, but those SEOs unfortunately have no idea that happened. In Google Analytics, visits from Discover can show up as direct traffic, or they can be attributed to your first visit to the site (which could be organic search). So GA doesn’t really help here.

Understanding traffic in Discover would help SEOs understand when Google has decided to include their content in someone’s feed, how much traffic (or return traffic) they are getting via Discover feeds, and more. It’s not organic search traffic, but could be based on initial visits from organic search.

Therefore, some type of tracking in GSC would be incredible. Maybe there could be another report for Discover. Remember, over 800M people use Discover on a regular basis now… Here’s a quick mockup of a new section for Discover data:

Discover tracking in GSC would be an interesting addition.

Interesting finds
I call this the search feature nobody is talking about… yet it’s been popping up more and more recently. When searching Google, you will often see “interesting finds” in the mobile search results as a module containing either three or four articles (with thumbnails). Note, I just noticed three listings showing up versus four and will share more about that on Twitter soon. And if you are using AMP, those AMP urls will rank in the “interesting finds” SERP feature. Notice the AMP icon below for my listings.

A four pack of “interesting finds” (with AMP urls).
Interesting finds with three listings versus four.

“Interesting finds” can also contain a link to ten more articles, which takes you to a Discover-like feed where you can see up to ten articles on the topic. It can really grab a user’s attention. And tying this to Discover, you can also follow the topic from there! Remember when I called this Google’s ecosystem?

10 more articles in “interesting finds” along with
a follow button (for Discover).

The problem is that site owners have no idea users are coming from that SERP feature. The impression and click will show up just like any other listing in the 10 blue links. The position will be the position of the “interesting finds” block. And in Google Analytics, the visit will look like any other visit from Google organic.

From a tracking perspective, it would be optimal to know that your articles are ranking in “interesting finds” for a number of reasons. For example, since Google thinks enough of the content to surface it in the “interesting finds” module for a specific topic, it would be great to analyze that content to help refine your content strategy (and to possibly gain more “interesting finds”).

In addition, it would be great to know when an article drops out of “interesting finds”, which can yield a drop in impressions and clicks. If you can’t track that, then site owners and SEOs could think something else happened that caused the drop in traffic. And that can lead to a lot of confusion.

“More like this” or “Related Pages” in the Google Search App
I’ve been meaning to write a post dedicated to this feature, since I know many don’t realize that it can be driving traffic to their sites. When you’re in the Google search app and you visit a page, you can always click the “More like this” icon in the menu bar. The feature was moved to the top of the app last year, so it’s *sometimes* in a prominent location.

The reason I say sometimes is because I just noticed today that it moved again! Now it’s available when you scroll down a page (you’ll see a “View 10 related pages”, which you can pull up from the bottom of the viewport). See below:

Related Pages feature in the Google App
(when available at the bottom of an article)
Related Pages feature when expanded
(pulled up from the bottom of the viewport).
“More like this” in the Google app
(via an icon at the top of the page)

When clicking that icon, or pulling up the related pages bar, you will see a list of links that Google believes are extremely relevant to the content at hand. It’s basically a form of “related articles”, but from Google and not third-party services. Similar to other features I’m covering in my post, you will never know that users originated from “More like this” listings. You will just see a standard visit in Google Analytics, and since this is from Chrome and not Search, you won’t see impressions or clicks at all in Google Search Console.

It would be great to know how much traffic is coming from this feature and possibly the theme or topic that yielded a “More like this” ranking. And again, it would be great to know when articles dropped out of this feature for various topics, so site owners can be informed about a drop in traffic possibly originating from the “More like this” or “Related Pages” feature in Chrome.  

Featured snippets, facets in featured snippets, and more
Although I was hopeful that Google would eventually roll out native featured snippet tracking in Google Search Console (GSC), that never happened. And with featured snippets taking up large amounts of real estate, and driving a lot of traffic, it’s hard to believe that site owners must rely on third-party tools to know when their content ranks in a featured snippet.

Side note, I recently wrote a post about how to use SEMrush to surface more of your featured snippets than what’s being reported by default in third-party tools. It’s a strong way to go, but let’s face it, having data directly from Google would be optimal.

Beyond just knowing if you are ranking in a featured snippet, it would be great to know the type of featured snippet. Featured snippets come in many forms, so understanding paragraph versus bullets versus tables versus quasi-knowledge panels would be amazing.  

Now, Google’s featured snippet algorithm is extremely temperamental, and featured snippets can change quickly (and often), but it would still be good to have some information from Google about this.

Google is also showing facets in some featured snippets, which trigger additional featured snippets. Try and say that ten times fast. :) There’s obviously no reporting right now for that either. And that can sometimes include video facets too, which trigger suggested clips that play in a lightbox above the SERPs.

Featured snippets facets in action.
When clicking a facet, a new featured snippet is triggered.

All of this would be great for site owners and SEOs to understand to help guide their content efforts. For example, which posts are yielding featured snippets, what types of content work best, which format is best, is a post ranking as a facet versus the default featured snippet, and more about the featured snippet sequence (e.g. are posts moving from the 10 blue links to a facet and then to a featured snippet?).

People Also Ask (PAA)
When searching on Google, you will often see People Also Ask modules, which contain a second level of search listings (presented in a similar format to a featured snippet). When expanding a query in PAA, you will see more information, a link to the destination site, and then an option to search Google for that query. But just like featured snippets, you will have no idea that your content is ranking in PAA modules (not from Google anyway).

People Also Ask cannot be tracked in GSC.
PAA triggers a new listing in a similar format to featured snippets.

If you read my post on SEL about how Google calculates impressions, clicks, and position, then you learned that each listing in a block element will take on the position of the block. And I also explained how listings need to be revealed in order to gain impressions. So theoretically, PAA can actually make your stats look a little funny.

You might show up as ranking #2 via PAA, or you might not register an impression at all if the user never revealed your PAA listing. It’s a great reason why adding some type of tracking in GSC for PAA would be helpful. And remember, there’s a form of infinite scroll with PAA. That means your listing could have a position of 2, but it might be the 15th PAA listing. Strange, but true.

Local Pack Listings (GMB)
UTM tracking is getting killed off via the recent change announced for GSC (more on that soon). So local listings will also be harder to track via GSC now. This has been a hot topic recently, since urls with utm parameters have seen some strange reporting in GSC over the past several weeks. Impressions dropped off a cliff, while clicks remained stable. For example, notice the drop in impressions below:

The reporting issue now makes more sense as Google announced it’s consolidating metrics in GSC to canonical urls. In this situation, the utm urls are being canonicalized to the core urls (or will be in the near future), which means their metrics will shift to the canonical urls. It’s not working perfectly yet, but that’s what is going to happen.

So, local SEOs will have no idea when their urls are showing up in the local pack, how much traffic they are driving, etc. One solution that’s been brought up on Twitter is adding a local filter to the Search Appearance functionality within GSC’s performance reporting. John Mueller actually liked that idea.

To me, that’s a great solution for the local issue, and for other SERP features I’ve mentioned in this post. And just like I explained earlier, definitely go and submit feedback in GSC if that’s something you would like to see. The GSC product team receives that feedback and it can help validate feedback they are receiving from other sources.

Third-party tools as a stop-gap:
Based on what I’ve explained above, I hope Google can provide some of this data in GSC. Again, I think site owners and SEOs would greatly value understanding where clicks are coming from in order to fine-tune their digital marketing efforts.

Thankfully, we have some outstanding third-party tools in the SEO industry that can at least identify when your site is ranking within certain SERP features. I won’t go too deep here, but I did want to provide a short list containing some of my favorites below. Note, you obviously can’t see traffic leading to your site via these tools, but they can help you understand when your listings are part of certain SERP features.

Which third-party tools track SERP features?
My favorite tools that enable site owners and SEOs to filter rankings by SERP features include:

  • SEMrush (I’ve often called SEMrush the swiss army knife of SEO tools)
  • Ahrefs
  • Sistrix
  • Then there are a number of dedicated rank-tracking tools that can help you surface this information as well. For example, RankRanger and STAT (now part of Moz) can surface a number of SERP features based on keywords you are manually tracking.

Again, you can’t see every query or SERP feature via third-party tools, but they can help you understand more of what’s going on in Google Land.

Summary – Seeking clarity from SERP features and Google’s ecosystem of Chrome and Feeds
As I explained in this post, context is very important. And understanding which features are driving traffic can help provide a clearer picture of traffic from Google Search, Chrome, Discover, and more. I hope this feedback reaches the GSC team and they decide to expand which features are tracked in GSC. That would be incredible and could help many site owners and SEOs better track their efforts. I’m hoping for the best!

GG


Filed Under: google, seo, web-analytics

A Holly Jolly Load Balancer Christmas – How Google Treated A Major Site Performance Problem From A Crawling And Ranking Perspective [SEO Case Study]

January 23, 2019 By Glenn Gabe

Ah, the holidays. Who doesn’t love a festive atmosphere, ugly Christmas sweaters, eggnog, singing Christmas carols, and major performance problems that can take down your site?

Wait, what was that last part??

The holidays are always interesting for me SEO-wise. Every few years, I receive an SOS from a site owner right around Christmas Day based on a weird drop in traffic or rankings from Google. This year, it was on December 27 from a company I helped several years ago. It’s a large-scale site that has seen a lot of volatility historically, but has done a good job at turning things around. They have battled major algorithm updates in the past, and now battle new SERP features from Google itself.

Just two days after Christmas, I received a message from the site owner asking if there was some type of update over the holidays. Traffic dipped sharply on Christmas Day, which it typically wouldn’t do based on their niche, and had kept dropping since. Historically, traffic only goes down by about 20% on Christmas Day, but this year it was down 50%. They were worried that an algorithm update had impacted the site again.

So I rushed to check their stats, noticed the drop in traffic, and quickly started checking the sources of traffic that dropped. The first thing I noticed was that the drop wasn’t just from Google… it was from all sources. That’s actually a good thing and we could quickly rule out some type of Google algorithm update.

Then I started checking the top landing pages that dropped and it wasn’t long before I noticed a serious performance problem. Some of the pages simply wouldn’t load. Then I searched Google for queries leading to the pages and tried to click through from the SERPs. I experienced the same issue… the site was simply hourglassing.

I messaged the site owner and explained the major performance problems I was experiencing and that the drop was from all traffic sources, and not just from Google. He took that information to his CTO so they could dig in and start isolating the problem. It wasn’t long before I heard back.

A Holly Jolly Load-Balancer Christmas!
No, that’s not the name of the next Christmas Vacation movie, although it does have a nice ring to it. :) The site owner and CTO uncovered a load balancer issue that was causing the performance problems we were seeing on the site. And that was causing a big drop in traffic as visitors couldn’t load many of the pages. And again, this is a site that people visit during the holidays, and on Christmas Day, based on the type of content the site provides.

So, the good news is that it wasn’t some type of evil elf algorithm update. But the bad news was that the load balancer issue was causing many pages not to load for users and Googlebot. My client went to work on resolving the load balancer issue, but they couldn’t help but wonder how Google would treat the site from a rankings perspective.

For example, when Google sees a major performance problem over time, and it knows users cannot load the pages, some important questions come to mind. Will Google drop the site’s rankings? Will the site plummet in the short term, only to come back when the performance problems are corrected? Will there be long-term damage SEO-wise? And how will crawling and indexing be impacted?

We’ve heard Google’s John Mueller explain situations related to this in the past. For example, John explained that Google can slow crawling when it comes across many 5XX errors. Here’s a tweet from John about this:

And that goes for 503s as well (a response code sites can return when the site is temporarily down for maintenance). John has explained that if Google sees 503s for an extended period of time, it can start dropping urls from the index. Returning 503s for a short period of time is totally fine, and is a good approach to use when you know your site will be down for maintenance; just make sure you’re not returning them for an extended period.
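
If you’re wondering what serving a 503 during planned maintenance can look like in practice, here’s a minimal sketch (using Flask purely as an illustration; the maintenance flag, catch-all route, and Retry-After value are placeholders, and the same idea can live at the server, CDN, or load balancer level instead):

```python
# Minimal sketch: serving a 503 with a Retry-After header during planned
# maintenance, so crawlers know the outage is temporary. Flask is used here
# purely for illustration; any server, CDN, or load balancer can do the same.
from flask import Flask, Response

app = Flask(__name__)
MAINTENANCE_MODE = True  # flip to False once the maintenance window ends

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def catch_all(path):
    if MAINTENANCE_MODE:
        # 503 tells Googlebot the outage is temporary; Retry-After hints
        # (in seconds) at when it's worth coming back.
        return Response(
            "Down for scheduled maintenance.",
            status=503,
            headers={"Retry-After": "3600"},
        )
    return "Normal page content"
```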

Since this was a load balancer issue, and some visitors were able to load pages correctly, my client didn’t take the site offline (and didn’t use 503s). But it’s important to understand how Google would treat a site being down over an extended period of time. Here’s a video where John explains this (at 37:28 in the video):

And here’s John explaining how Google can automatically slow crawling when it sees that urls are temporarily unavailable. He also explains that Google will stop crawling if it sees that happening for a robots.txt file. Then Google will automatically increase crawling when the urls return 200s again. Here’s the video of John explaining this (at 38:43 in the video):

So what would happen in this case? Needless to say, we were eager to find out.

Tracking The SEO Impact Of Major Performance Problems
As you can probably guess, I was ultra-interested in seeing how this played out SEO-wise. So, I began tracking the situation on several levels to see how the load balancer issue was impacting the site from an SEO standpoint. For example, would rankings drop, would crawling slow down, and how would this all look when the performance problems were fixed?

Below, I’ll take you through the timeline, with data and screenshots, to show you how Google treated the problem. Note, every site is different, so this may not apply to all performance problems you run into. But, it should help you understand how those performance problems could impact crawling, indexing, and ranking. It’s also worth noting that the duration of performance problems would surely impact this situation. For this case, the site owner and CTO completely fixed the problem within seven days. But, it wasn’t all fixed at one time. More on that soon. Last, I unfortunately don’t have logs from the site yet. I’m still working on getting them, though, and will update this post if I do.

Blue Christmas – The Timeline of Events (with data and screenshots)
First, here is the drop in traffic when comparing year over year. The site usually doesn’t drop very much during Christmas Day (or week). You can see a seasonal drop of about 20% when compared to normal weeks, whereas this year yielded a drop of 50%. Yep, something was up…

YOY trending shows the unusual drop on Christmas Day in 2018.

Crawl Stats in GSC:
Here was the spike in time downloading a page and the drop in pages crawled per day. Pretty amazing to see that graph, right?

Crawl stats in GSC show a spike in time spent downloading a page
while pages crawled per day drop off a cliff.

And here were the impressions and clicks in GSC (which remained strong). This supports the notion that rankings were not being impacted during the performance problem. Impressions would have dropped if that were the case. I was also checking rankings manually and saw rankings remain strong overall. It seems that Google was pretty cool with understanding there was a performance problem and didn’t quickly impact rankings (even when it knew some users were running into page loading problems).

GSC Performance reporting reveals stable impressions and clicks from Google
while the performance problem was in place.

And here was search visibility trending in SEMrush over the timeframe, which supports the fact that rankings were not impacted in the short-term. Search visibility was stable throughout the performance problem:

Search visibility trending in SEMrush is stable throughout the problem (no drop in rankings).

And here is Sistrix trending, which is also stable throughout the problem:

Search visibility trending in Sistrix is also stable throughout the problem.

Note, the load balancer problem was not fixed all at one time. The situation improved over a five to seven-day period. So here are the crawl stats as the issue started to get resolved. Notice crawling increases, though not back to normal, and time spent downloading a page improves, though also not back to normal. That’s because the performance problems were still there to some extent.

Pages crawled per day increase as time spent downloading a page decreases
(as the load balancer problem is being fixed).
Pages crawled per day and time spent downloading a page continue
to improve as the performance problem is fixed.

And then once the problem was completely fixed, notice that the crawl stats return to normal. Time downloading a page drops back down to normal levels and pages crawled per day rise to normal levels. Awesome, all is good again in Google Land.


Pages crawled per day and time spent downloading a page return to normal levels
when the load balancer problem was completely fixed.

And here is what Google organic traffic looked like from before the load balancer issue to after it was resolved. Notice the drop during Christmas, a further drop, and then traffic returns to normal after the problem was fixed. There was no lasting impact to Google organic traffic, rankings, etc. based on the performance problem. That was great to see.

Traffic from Google organic returns to normal once the
load balancer problem was completely fixed.

Key learnings… and Spiked Eggnog:
Needless to say, it was fascinating to watch how Google reacted to the major performance problems over time. Here are some key learnings from the situation:

  • Google understands that temporary glitches can happen (like the performance problems this site was experiencing).
  • Google can slow crawling, when needed, and check back to see when it can crawl more. The crawl stats in GSC clearly showed that happening.
  • For this situation (only a week or so), the site wasn’t impacted SEO-wise. Rankings remained strong, search visibility was stable, and as the problem was fixed, Google returned crawling back to normal levels.
  • Google organic traffic then returned to normal levels as the performance problem was fixed completely. There was no impact rankings-wise based on this incident.
  • That said, if this problem remained beyond a week or two, it’s hard to say how rankings would be impacted. Google could very well begin to drop rankings as it wouldn’t want to send users to a site or pages that don’t resolve. But, if you have a short-term performance issue, it seems you should be ok. I would just work to fix the problems as quickly as you can.
  • This is a reminder that Murphy’s Law for SEO is real. The load balancer problem happened on Christmas Day. You just need to be prepared for Murphy to pay a visit. And then move quickly to rectify the problems he brings along.

Summary – All I want for Christmas is to never deal with a load balancer issue again.
If you run into serious performance problems and are worried about the SEO impact, know that Google does understand that bad things happen from time to time. As Google’s John Mueller explained, Google can slow crawling while the problems exist and then check back to see when it can return crawling to normal.

For this specific case, that’s exactly what we saw. Rankings and search visibility remained strong, but crawling slowed. Then as the performance problems were fixed, crawling returned to normal. And all was good again in Google Land.

So, until next holiday season… here’s ho-ho-hoping you don’t run into this type of problem anytime soon. I know everyone involved with this case feels the same way. :)

GG

Filed Under: google, seo, tools, web-analytics

Searching For Buried Treasure – How To Find More Of Your Featured Snippets Using Google Search Console (GSC), Analytics Edge, And SEMrush Position Tracking

December 20, 2018 By Glenn Gabe

Searching for buried SEO treasure.

A few years ago, I wrote a post about how to surface your featured snippets via SEMrush’s powerful SERP features widget. I love using SEMrush to plug in any domain and view (many of) the featured snippets it has. It’s a quick and easy way to conduct competitive analysis from a featured snippets standpoint. Remember, GSC does not provide a way to filter for featured snippets. Therefore, you must rely on third-party tools to surface them for you.

Now, SEMrush’s SERP features widget is awesome, but here’s the issue. SEMrush and other tools don’t surface ALL of your featured snippets automatically. And that’s especially true for small to medium-sized sites. Unfortunately, many site owners are left in the dark about how many featured snippets they have and which queries and landing pages are yielding them.

SERP features widget in SEMrush.

It’s important to understand the featured snippets you have attained so you can analyze the content and the snippets, and try to gain more. As I’ve covered in various posts, as well as in my SMX presentations, featured snippets can drive a boatload of traffic and provide near-instant credibility in the SERPs. Position 0 is powerful.

So, with GSC not providing a way to filter by featured snippets and competitive analysis tools not surfacing all of a site’s featured snippets automatically, what’s a site owner to do?

The answer is to USE THE TOOLS TOGETHER TO UNCOVER MORE OF YOUR FEATURED SNIPPETS!

That’s what I’ll cover below. It’s quick, easy, and you can continually use this process to hunt down your latest featured snippets. Let’s begin.

How To Use GSC, Analytics Edge, And SEMrush Position Tracking To Identify Featured Snippets
SEMrush provides a ton of functionality for SEOs and site owners. For our purposes today, we’ll be using its Position Tracking functionality based on GSC data to surface more of your wonderful featured snippets. The process involves:

  • Exporting your top queries from Google Search Console for the past 28 days.
  • Filtering them by average position and click through rate.
  • Setting up position tracking for your domain in SEMrush and using the filtered queries from GSC as your keyword list.
  • Letting SEMrush position tracking do its thing!
  • Reviewing the reporting in SEMrush, which will surface any queries yielding featured snippets (along with the landing pages that are ranking for them).

Exporting Google Search Console Data
Analyzing your queries in Google Search Console is obviously extremely important. The problem is that you are limited to the top one thousand results per report. That’s severely limiting for most sites. In order to get around that limit, you can tap into the Search Console API to download ALL of your queries. There are several ways to accomplish this, but my favorite is using Analytics Edge in Excel. It’s a cost-effective solution that works incredibly well (and fast).

I’ve written several blog posts about how to use Analytics Edge, so go and check those posts out to learn how to download all of your queries. Once you have Analytics Edge installed, it should only take a minute or two to download all of your queries. You’ll end up having a spreadsheet full of glorious GSC keyword data. For example, I exported 66K queries in about thirty seconds. :)

Exporting many queries from GSC via Analytics Edge.
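
And if you’d rather script the export than run it in Excel, here’s a rough sketch of the underlying Search Analytics API call that tools like Analytics Edge wrap. It assumes you’ve already created OAuth credentials for the google-api-python-client library, and the property URL and date range below are placeholders:

```python
# Rough sketch: paging through all queries via the Search Analytics API
# (the same API that Analytics Edge wraps). Assumes OAuth credentials have
# already been created for google-api-python-client; the property URL and
# date range are placeholders.
from googleapiclient.discovery import build

def export_all_queries(credentials, site_url="https://www.example.com/"):
    """Page through every query row for roughly the last 28 days."""
    service = build("webmasters", "v3", credentials=credentials)
    rows, start_row = [], 0
    while True:
        response = service.searchanalytics().query(
            siteUrl=site_url,
            body={
                "startDate": "2018-11-22",   # placeholder date range
                "endDate": "2018-12-19",
                "dimensions": ["query"],
                "rowLimit": 25000,           # API maximum per request
                "startRow": start_row,
            },
        ).execute()
        batch = response.get("rows", [])
        rows.extend(batch)
        if len(batch) < 25000:
            break
        start_row += 25000
    # Each row includes keys, clicks, impressions, ctr, and position.
    return rows
```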

Next, we want to gather our list of queries to use in SEMrush for position tracking. Featured snippets in GSC will be listed in position 1. Google will provide the ranking of the actual featured snippet here and NOT the organic listing. For example, you might have the featured snippet, but your organic listing might be #4. GSC would provide #1 for the query. You can read more about how Google determines clicks and impressions in this help center document to learn how GSC provides the position for featured snippets.

Therefore, you should first filter your spreadsheet by average position and use a number filter selecting “is less than 2”. This will give us any queries where your site is ranking between 1 and 2 over the past 28 days.

Filtering by average position in Excel.

Next, layer on a second filter in your spreadsheet for click through rate. Featured snippets typically have a higher click through rate since they are given special SERP treatment and take up a lot of SERP real estate. So for the second filter, use “greater than 20%” for click through rate.

Filtering by click through rate in Excel.

Quick Tip:
You can also filter out branded queries if you want for this exercise. Many branded queries will rank highly and also have a high click through rate, but won’t yield featured snippets. If you want to save some space in SEMrush for other queries (since there is a limit for position tracking based on your package), then feel free to filter them out. You can do that by adding a “does not contain” text filter on your queries column and enter your brand.

Filtering out brand queries in Excel.
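
If your export ends up in a CSV instead of Excel, the same three filters can be applied in a few lines of pandas. This is just a sketch: the column names and the brand term are assumptions, so adjust them to match your export:

```python
# Sketch of the same three filters applied in pandas instead of Excel.
# The file name, column names, and brand term are assumptions.
import pandas as pd

df = pd.read_csv("gsc_queries_last_28_days.csv")

# Handle "25.3%"-style CTR strings if the export stores them that way.
if df["ctr"].dtype == object:
    df["ctr"] = df["ctr"].str.rstrip("%").astype(float) / 100

candidates = df[
    (df["position"] < 2)                                     # average position under 2
    & (df["ctr"] > 0.20)                                     # click through rate over 20%
    & (~df["query"].str.contains("yourbrand", case=False))   # drop branded queries
]

candidates["query"].to_csv("semrush_keyword_list.csv", index=False)
```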

Now that you have a list of queries ranking highly (average position less than 2) that also have a high click through rate, you are ready to hunt down featured snippets and uncover buried treasure in SEMrush. Let’s track some positions.

How To Set Up Position Tracking In SEMrush (And Find Featured Snippets)
First, you’ll want to create a project for the domain you are tracking. You can find the Projects link in the left-side navigation in SEMrush. Once you create a project, you’ll be presented with all of the different tools you can use in your project. We’re going to be focused on Position Tracking so we can identify which queries are yielding featured snippets.

Projects in SEMrush.

Next, click Position Tracking and go through the process of adding your filtered keyword list. You can start by just selecting desktop tracking in the United States, or tailor that for your own needs. I typically add desktop tracking first, and then layer on mobile. It’s very easy to do since you can just use the desktop list by clicking a button in SEMrush when setting up mobile.

Setting up position tracking in SEMrush.

Adding keywords when setting up position tracking in SEMrush.

Quick Tip:
Depending on your package in SEMrush, there’s a limit for the number of keywords you can track. For example, a Pro account can track 500 keywords, a Guru account can track 1,500 keywords, etc. For this specific project, you won’t be focused on long-term tracking, so just make sure your initial keyword list is under the limit for your package. You can always swap out one list for another to check more queries and uncover more featured snippets.

The Easy Part – Let SEMrush Position Tracking Do Its Thing!
Once you enter your keywords and launch position tracking, you only have to wait a few minutes for your featured snippet data to arrive (if you have any!). Once SEMrush tracks the keywords in your list, you’ll have a boatload of data for those queries, including whether they are yielding featured snippets.

There are multiple ways to find your current featured snippets in the reporting based on the keyword list you just entered. From the Overview screen, you can view the entire list of keywords along with their position, the SERP features for that query, and if your site has any of those important SERP features (like featured snippets).

For example, you’ll see a crown icon if the SERP contains a featured snippet for the query at hand. And, you’ll see a second crown next to your ranking if you have won the featured snippet.  You will also see the landing page that’s yielding the featured snippet.

Finding featured snippets in SEMrush position tracking.

You can also click the featured snippet tab to see a full breakdown of your featured snippets reporting. That will include opportunities you have to gain featured snippets from the competition, as well as when your site is already featured. Clicking the button labeled “Already featured” will filter just the keywords where your site has the featured snippet.

Viewing the featured snippets tab in SEMrush.

And clicking the position and crown enables you to view a snapshot of the SERP captured by SEMrush. Awesome, right?

Viewing the SERP snapshot for featured snippets in SEMrush.

Summary – Finding featured snippet buried treasure via GSC and SEMrush
SEMrush and other tools can automatically surface some of your featured snippets, which is great. But to gain a stronger view of your featured snippet coverage, it’s smart to combine Google Search Console data with SEMrush position tracking. You might be surprised when viewing all of the featured snippets you have right now. Now go dig up that buried treasure. :)

GG

 

Filed Under: google, seo, tools

How To Find The True Size Of Your Site Using GSC’s Index Coverage Reporting (And Why It’s Important For SEO)

December 10, 2018 By Glenn Gabe

How large is your site SEO-wise?

So, how large is your site? No, how large is it really??

When speaking with companies about SEO, it’s not long before I ask that important question. And I often get some confused responses based on the “really” tag at the end. That’s because many site owners go by how many pages are indexed. i.e. We have 20K, 100K, or 1M pages indexed. Well, that’s obviously important, but what lies below the surface is also important. For example, Googlebot needs to crawl and process all of your crawlable urls, and not just what’s indexed.

Site owners can make it easy, or very hard, for Google to crawl, process, and index their pages based on a number of factors. For example, if you have one thousand pages indexed, but Google needs to crawl and process 600K pages, then you might have issues to smooth over from a technical SEO perspective. And when I say “process”, I mean understand what to do with your urls after crawling. For example, Google needs to process the meta robots tag, rel canonical, understand duplicate content, soft 404s, and more.

By the way, I’ll cover the reason I said you might need to smooth things over later in this post. There are times when a site has many urls that need to be processed but that are being handled properly. For example, noindexing many pages that are fine for users, but shouldn’t be indexed. That could be totally ok based on what the site is trying to accomplish. But on the flipside, there may be sites using a ton of parameters, session IDs, dynamically changing urls, or redirects that can cause all sorts of issues (like creating infinite spaces). Again, I’ll cover more about this soon.

And you definitely want to make Google’s job easier, not harder. Google’s John Mueller has mentioned this a number of times over the past several years. Actually, here’s a clip of John explaining this from last week’s webmaster hangout. John explained that you don’t want to make Google’s job harder by having it churn through many urls. Google needs to crawl, and then process, all of those urls, even when rel canonical is used properly (since Google needs to first crawl each page to see rel canonical).

Here’s the clip from John (at 4:16 in the video):

John has also explained that you should make sure Google can focus on your highest quality content versus churning through many lower quality urls. That’s why it’s important to fully understand what’s indexed on your site. Google takes all pages into account when evaluating quality, so if you find a lot of urls that shouldn’t be indexed, take action on them.

Here’s another video from John about that (at 14:54 in the video):

Crawl Budget – Most don’t need to worry about this, but…
In addition, crawl budget might also be a consideration based on the true size of your site. For example, imagine you have 73K pages indexed and believe you don’t need to worry about crawl budget too much. Remember, only very large sites with millions of pages need to worry about crawl budget. But what if your 73K page site actually contains 29.5M pages that need to be crawled and processed? If that’s the case, then you actually do need to worry about crawl budget. This is just another reason to understand the true size of your site. See the screenshot below.

Potential crawl budget issues.

How Do You Find The True Size Of Your Site?
One of the easiest ways to determine the true size of your site is to use Google’s index coverage reporting, including the incredibly important Excluded category. Note, GSC’s index coverage reporting is by property… so make sure you check each property for your site (www, non-www, http, https, etc.)

Google’s Index Coverage
I wrote a post recently about juicing up your index coverage reporting using subdirectories, where I explained the power of GSC’s new reporting. It replaces the index status report in the old GSC and contains extremely actionable data. There are multiple categories in the reporting for each GSC property, including:

  1. Errors
  2. Valid with warnings
  3. Valid and indexed
  4. Excluded

By reviewing ALL of the categories, you can get a stronger feel for the true size of your site. That true size might make total sense to you, but there are other times where it might scare you to death (and leave you scratching your head about why Google is crawling so many urls).

Below, I’ll run through some quick examples of what you can find in the index coverage reporting that could be increasing the size of your site from a crawling and processing standpoint. Note, there are many reasons you could be forcing Google to crawl and process many more pages than it should and I’ll provide just some quick examples in this post. I highly recommend going through your own reporting extensively to better understand your own situation.

Remember, you should make it easier on Googlebot, not harder. And you definitely want to have Google focus on your most important pages versus churning through loads of unimportant urls.

Errors
There are a number of errors that can show up in the reporting, including server errors (500s), redirect errors, urls submitted in sitemaps that are soft 404s, have crawl issues, are being blocked by robots.txt, and more. Since this post is about increasing the crawlable size of your site, I would definitely watch out for situations where Google is surfacing and crawling urls that should never be crawled (or urls that should resolve with 200s, but don’t for some reason).

There are also several reports in this category that flag urls being submitted in xml sitemaps that don’t resolve correctly. Definitely dig in there to see what’s going on.

Errors in GSC's index coverage reporting.

Indexed and Valid
Although this category shows all urls that are properly indexed, you definitely want to dig into the report titled “Indexed, not submitted in sitemap”. This report contains all urls that Google has indexed that weren’t submitted in xml sitemaps (so Google is coming across the urls via standard crawling without being supplied a list of urls). I’ve found this report can yield some very interesting findings.

For example, if you have 200K pages indexed, but only 4K are submitted in xml sitemaps, then what are the other 196K urls? How is Google discovering them? Are they canonical urls that you know about, and if so, should they be submitted in sitemaps? Or are they urls you didn’t even know existed, that contain low-quality content, or autogenerated content?

I’ve surfaced some nasty situations by checking this report. For example, hundreds of thousands of pages (or more) indexed that the site owner didn’t even know were being published on the site. Note, you can also find some urls there that are fine, like pagination, but I would definitely analyze the reporting to gain a deeper understanding of all the valid and indexed urls that Google has found. Again, you should know all pages indexed on your site, especially ones that you haven’t added to xml sitemaps.

Indexed not submitted in GSC's index coverage reporting.
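
If you want to dig into that report at scale, one option is to export those urls and diff them against your xml sitemap, then group the unexpected urls by section to spot patterns. Here’s a rough sketch (the sitemap URL and export file name are placeholders, and it assumes a single sitemap file):

```python
# Sketch: compare the "Indexed, not submitted in sitemap" export against your
# XML sitemap and group the unexpected urls by top-level path, to spot patterns
# worth investigating. The sitemap URL and file name are placeholders.
from collections import Counter
from urllib.parse import urlparse
import xml.etree.ElementTree as ET
import requests

sitemap_resp = requests.get("https://www.example.com/sitemap.xml")
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
sitemap_root = ET.fromstring(sitemap_resp.content)
sitemap_urls = {loc.text.strip() for loc in sitemap_root.findall(".//sm:loc", ns)}

# Urls copied out of the "Indexed, not submitted in sitemap" report.
with open("indexed_not_submitted.csv") as f:
    indexed_urls = {line.strip() for line in f if line.startswith("http")}

def section(url):
    parts = urlparse(url).path.split("/")
    return parts[1] if len(parts) > 1 and parts[1] else "(root)"

unexpected = indexed_urls - sitemap_urls
for name, count in Counter(section(u) for u in unexpected).most_common(20):
    print(f"/{name}/  ->  {count} indexed urls not in the sitemap")
```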

Excluded – The Mother Lode
I’ve mentioned the power of the Excluded category in GSC’s index coverage reporting before, and I’m not kidding. This category contains all of the urls that Google has crawled, but decided to exclude from indexing. You can often find glaring problems when digging into the various reports in this category.

I can’t cover everything you can find there, but I’ll provide a few examples of situations that could force Google to crawl and process many more urls than it should. And like John Mueller said in the videos I provided above, you should try and make it easy for Google, focus on your highest quality content, etc.

Infinite spaces
When I first fire up the index coverage reporting, I usually quickly check the total number of excluded urls. And there are times that number blows me away (especially knowing how large a site is supposed to be).

When checking the reporting, you definitely want to make sure there’s not an infinite spaces problem, which can cause an unlimited number of urls to be crawled by Google. I wrote a case study about an infinite spaces problem that was impacting up to ten million urls on a site. Yes, ten million.

Infinite spaces based on site search problems.

Sometimes a site has functionality that can create an unlimited number of urls with content that’s not unique or doesn’t provide any value. And when that happens, Google can crawl forever… By analyzing the Excluded category, you can potentially find problems like this in the various reports.

For example, you might see many urls being excluded in a specific section of the site that contain parameters used by certain functionality. The classic example is a calendar script, which if left in its raw form (and unblocked), can cause Google to crawl infinitely. You also might find new urls being created on the fly based on users conducting on-site searches (that was the problem I surfaced in my case study about infinite spaces). There are many ways infinite spaces can be created and you want to make sure that’s not a problem on your site.

Parameters, parameters, parameters
Some sites employ a large number of url parameters, even though those parameters might not be needed at all. And to make the situation worse, some sites might not be using rel canonical properly to canonicalize the urls with parameters to the correct urls (which might be the urls without parameters). And as John explained in the video from earlier, even if you are using rel canonical properly, it’s not optimal to force Google to crawl and process many unnecessary urls.

Or worse, maybe each of those urls with parameters (that are producing duplicate content) mistakenly contain self-referencing canonical tags. Then you are forcing Google to figure out that the urls are duplicates and then handle those urls properly. John explained that can take a long time when many urls are involved. Again, review your own reporting. You never know what you’re going to find.

When analyzing the Excluded reporting, you can find urls that fit this situation in a number of reports. For example, “Duplicate without user-selected canonical”, “Google chose a different canonical than user”, and others.

Parameters in GSC's index coverage reporting.

Excessive redirects
There are times you might find a massive number of redirects that you weren’t even aware were on the site. If you see a large number of redirects, then analyze those urls to see where they are being triggered on the site, why the redirects are there, and determine if they should be there at all. I wouldn’t force both users and Googlebot through an excessive number of redirects, if possible. Make it easy for Google, not harder.

Redirects in GSC's index coverage reporting.

And some of those redirects might look like this… a chain of never-ending redirects (and the page never resolves). So one redirect might actually be 20 or more before the page finally fails…

Redirects chain for url that never resolves.
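
If you want to see how deep a chain really goes, a quick script can follow the hops one at a time. Here’s a rough sketch (the url is a placeholder, and this spot-checks a single url rather than replacing a full crawl):

```python
# Sketch: follow a redirect chain hop by hop to see how many redirects a url
# really goes through before resolving (or failing). The url is a placeholder.
import requests

def trace_redirects(url, max_hops=25):
    """Follow a redirect chain and return (url, status) pairs for each hop."""
    hops = []
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        hops.append((url, resp.status_code))
        location = resp.headers.get("Location")
        if resp.status_code not in (301, 302, 303, 307, 308) or not location:
            return hops  # resolved (or errored) here
        url = requests.compat.urljoin(url, location)
    hops.append((url, "gave up - possible redirect loop"))
    return hops

for hop_url, status in trace_redirects("https://www.example.com/old-page/"):
    print(status, hop_url)
```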

Malformed URLs
There have been some audits where I’ve found an excessive number of malformed urls when reviewing the Excluded reporting. For example, a coding glitch that combines urls by accident, leaves out parts of the url, or worse. And on a large-scale site, that can cause many, many malformed urls to be crawled by Google.

Here is an example of a malformed url pattern I surfaced in a recent audit:
https://www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/www.domain.com/blog/category/some-blog-post-title/

Soft 404s
You might find many soft 404s on a site, which are urls that return 200s, but Google sees them as 404s. There are a number of reasons this could be happening, including having pages that should contain products or listings, but simply return “No products found”, or “No listings found”. These are thin pages without any valuable content for users, and Google is basically seeing them as 404s (Page Not Found).

Note, soft 404s are treated as hard 404s, but why have Google crawl many of them when it doesn’t need to? And if this is due to some glitch, then you could continually force Google to crawl many of these urls.

Example of a soft 404.
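
And if the soft 404s are coming from “no results” pages, the cleanest fix is usually to return a real 404 instead of a thin 200. Here’s a minimal sketch of that idea (Flask and the get_products_for_category function are illustrative placeholders, not any platform’s actual implementation):

```python
# Sketch: return a real 404 when a listings page has no results, instead of a
# 200 "No products found" page that Google will treat as a soft 404.
# Flask and get_products_for_category() are illustrative placeholders.
from flask import Flask, abort, render_template_string

app = Flask(__name__)

def get_products_for_category(category):
    # Placeholder for whatever data layer the site actually uses.
    return []

@app.route("/category/<category>/")
def category_page(category):
    products = get_products_for_category(category)
    if not products:
        # Proper 404 instead of a thin 200 page with no valuable content.
        abort(404)
    return render_template_string(
        "<h1>{{ category }}</h1>"
        "<ul>{% for p in products %}<li>{{ p }}</li>{% endfor %}</ul>",
        category=category,
        products=products,
    )
```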

Like I said, The Mother Lode…
I can keep going here, but this post would be huge. Again, there are many examples of problems you can find by analyzing the Excluded category in the new index coverage reporting. These were just a few examples I have come across when helping clients. The main point I’m trying to make is that you should heavily analyze your reporting to see the true size of your site. And if you find problems that are causing many more urls to be published than should be, then you should root out those problems and make crawling and processing easier for Google.

Make it easy for Google.

Sometimes Excluded can be fine: What you don’t need to worry about…
Not everything listed in the index coverage reporting is bad. There are several categories that are fine as long as you expect that behavior. For example, you might be noindexing many urls across the site since they are fine for users traversing the site, but you don’t want them indexed. That’s totally fine.

Or, you might be blocking 60K pages via robots.txt since there’s no reason for Googlebot to crawl them. That’s totally fine as well.

And how about 404s? There are some site owners that believe that 404s can hurt them SEO-wise. That’s not true. 404s (Page Not Found) are totally normal to have on a site. And for larger-scale sites, you might have tens of thousands, hundreds of thousands, or even a million plus 404s on your site. If those urls should 404, then that’s fine.

Here’s an example from a site I’ve helped that has many 404s at any given time. They do extremely well SEO-wise, but based on the nature of their site, they 404 a lot of urls over time. Again, this is totally fine if the urls should 404:

Massive number of 404s on site doing well in organic search.

There have been several large-scale clients I’ve helped that receive over a million clicks per day from Google that have hundreds of thousands of 404s at any given time. Actually, a few of those clients sometimes have over one million 404s showing in GSC at any given time. And they are doing extremely well SEO-wise. Therefore, don’t worry about 404s if those pages should 404. Google’s John Mueller has covered this a number of times in the past.

Here’s a video of John explaining this (at 39:50 in the video):

There are other categories that might be fine too, based on how your own site works. So, dig into the reporting, identify anything out of the ordinary, and then analyze those situations to ensure that you’re making Google’s life easier, and not harder.

Index Coverage Limits And Running Crawls To Gain More Data
After reading this post, you might be excited to jump into GSC’s index coverage reporting to uncover the true size of your site. That’s great, but there’s something you need to know. Unfortunately, there’s a maximum of one thousand rows per report that can be exported. So, you will be severely limited with the data you can export once you drill into a problem. Sure, you can (and should) identify patterns of issues across your site so you can tackle those problems at scale. But it still would be amazing to export all of the urls per report.

I know Google has mentioned the possibility of providing API access to the index coverage reporting, which would be amazing. Then you could export all of your data, no matter how many rows. In the short term, you can read my post about how to add subdirectories to GSC in order to get more data in your reporting. It works well.

And beyond that, you can use any of the top crawling tools to launch surgical crawls into problematic areas. Or, you could just launch enterprise crawls that tackle most of the site in question. Then you can filter by problematic url type, parameter, etc. to surface more urls per category.

As I’ve mentioned many times in my posts, my three favorite crawling tools are:

  • DeepCrawl (where I’m on the customer advisory board). I’ve been using DeepCrawl for a very long time and it’s especially powerful for larger-scale crawls.
  • Screaming Frog – Another powerful tool that many in the industry use for crawling sites. Dan and his crew of amphibious SEOs do an excellent job at providing a boatload of functionality in a local crawler. I often use both DeepCrawl and Screaming Frog together. 1+1=3
  • Sitebulb – The newest kid on the block of the three, but Patrick and Gareth have created something special with Sitebulb. It’s an excellent local crawling tool that some have described as a beautiful mix between DeepCrawl and Screaming Frog. You should definitely check it out. I use all three extensively.

Summary – The importance of understanding the true size of your site
The next time someone asks you how large your site is, try to avoid answering with just the number of pages indexed. If you’ve gone through the process I’ve documented in this post, the real number might be much larger.

It’s definitely a nuanced answer, since Google might be churning through many additional urls based on your site structure, coding glitches, or other SEO problems. But you should have a solid grasp on what’s going on and how to address those issues if you’ve dug in heavily. I recommend analyzing GSC’s index coverage reporting today, including the powerful Excluded category. There may be some hidden treasures waiting for you. Go find them!

GG

 

Filed Under: google, seo, tools

How To Use Scroll Depth Tracking, Adjusted Bounce Rate, and Average Time On Page As A Proxy For User Engagement and Content Quality

November 28, 2018 By Glenn Gabe Leave a Comment

How to use scroll depth tracking to understand user engagement.

I was helping a company a few months ago that got hit hard by recent algorithm updates. When digging into the audit, a number of problems surfaced, including content quality problems, technical SEO problems, user experience issues, and more. From a content quality perspective, the site had an interesting situation.

Some of the content was clearly lower-quality and needed to be dealt with. But they also had a tricky issue to figure out. They target a younger audience and a lot of the content was long-form. And I mean really long-form… Some of the articles were over three thousand words in length. I asked my client if they had done any user testing in the past to determine whether their target audience enjoyed their long-form content or wanted shorter and tighter articles. It turns out they never ran user testing and didn’t really know their audience’s preference for content length. Like many site owners, they were pretty much guessing that this was the right approach.

The site owner said, “I wish there was a way to determine how far people were getting into the content…” That’s when I responded quickly with “you can do that!” Google Analytics is a powerful tool, but many people just use the standard setup. If you leverage Google Tag Manager, you can set up some pretty interesting things that can be extremely helpful for understanding user engagement.

That’s when I recommended a three-pronged approach for identifying user engagement, content consumption, and more. I told my client we could triangulate the data to help identify potential problems content-wise. And the best part is that it doesn’t take long to set up and relies on one of the most ubiquitous tools on the market – Google Analytics (with the help of Google Tag Manager).

Note, there’s nothing better than running actual user testing. That’s where you can watch users interacting with your site, receive direct feedback about what they liked or didn’t like, and more. It’s extremely powerful for truly understanding user happiness. But that shouldn’t stop you from leveraging other ways to understand user engagement. What I’ll explain below can be set up today and can provide some useful data about how people are engaging with your content (or not engaging).

Triangulating The Data
The three analytics methods I recommend using to help identify problematic content include Scroll Depth Tracking, Adjusted Bounce Rate (ABR), and then Average Time On Page. I’ll go through each of them below so you can get a feel for how the three can work together to surface potential issues.

Note, there isn’t one metric (or even three) that can 100% tell you if some piece of content is problematic. It’s part art and part science. There are times you can easily surface thin or low-quality content (like thousands of pages across a site that were mistakenly published containing one or two lines of text). But then you have other times where full articles need to be boosted since they are out of date or just not relevant anymore.

Therefore, don’t fully rely on one method to do this… Also, it’s not about word count, it’s about value to the user. Google’s John Mueller has explained this several times over the past few years. Here’s a post from Barry Schwartz covering John’s comments where he explains that it’s about value versus word count. Here’s the tweet that Barry is referring to:

I agree with you & Mihai :). Word count is not indicative of quality. Some pages have a lot of words that say nothing. Some pages have very few words that are very important & relevant to queries. You know your content best (hopefully) and can decide whether it needs the details.

— John (@JohnMu) July 24, 2018

And once you identify potential issues content-wise and dig in, you can figure out the best path forward. That might be to enhance or boost that content, you might decide it should be noindexed, or you might even remove the content (404).

Scroll Depth Tracking
In October of 2017, Google Tag Manager rolled out native scroll depth tracking. And the analytics world rejoiced.

Celebrate Scroll Depth Tracking!

Using scroll depth tracking, you can track how far down each page users are going. And you have control over the trigger thresholds percentage-wise. For example, you could track whether users make it 10, 25, 50, 75, and then 100 percent down the page. Then you can easily see those metrics in your Google Analytics reporting. Pretty awesome, right?

Setting up scroll depth tracking in Google Tag Manager

For my client, this alone was amazing to see. Again, they wanted to make sure their core audience was reading each long-form article. If they saw that a good percentage of users stopped 25% down the page, then that wouldn’t be optimal… And if that was the case, then my client could adjust their strategy and potentially break those articles up and craft shorter articles moving forward.

I won’t cover the step-by-step instructions for setting up scroll depth tracking since it’s been covered by many people already. Here’s a great post from Simo Ahava on how to set up scroll depth tracking via Google Tag Manager.  It doesn’t take long and you can start collecting data today.

Here is a screenshot of the tag in Google Tag Manager when I set this up. Just remember to set Non-interaction hit to true so scrolling events don’t impact bounce rate. We’ll use Adjusted Bounce Rate (ABR) to address that instead:

Using Google Tag Manager to set up scroll depth tracking.
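
For context, here’s a rough sketch of the kind of hit that tag ends up sending. This isn’t GTM’s internal code – it’s just a plain analytics.js event call (assuming the classic ga global is loaded) showing where the non-interaction setting fits. The category, action, and 75% threshold below are only example values; GTM’s native trigger and your tag configuration control the real ones:

```typescript
// Rough sketch of the underlying hit: a GA event fired when a scroll threshold is reached.
// Assumes classic analytics.js is loaded (the global ga() function); the category,
// action, and threshold values below are just examples.
declare const ga: (...args: unknown[]) => void;

function sendScrollDepthEvent(thresholdPercent: number, pagePath: string): void {
  ga("send", "event", {
    eventCategory: "Scroll Depth",
    eventAction: `${thresholdPercent}%`,
    eventLabel: pagePath,
    nonInteraction: true, // key setting: don't let scroll events wipe out bounce rate
  });
}

// Example: fired by a scroll listener or GTM trigger when the user passes 75%.
sendScrollDepthEvent(75, window.location.pathname);
```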

And here are two examples of scroll depth tracking in action. The first is a post where many readers are engaged and a good percentage are making their way down the page. Note, the values are events and not sessions or users. That’s important to understand. Also, the screenshots below are from two different sites and each site owner has chosen different scroll depth thresholds:

Engaged users via scroll depth tracking.

And on the flipside, here’s a piece of content where many aren’t making their way down the page. It’s a lower quality page that isn’t seeing much engagement at all. There’s clearly much less traffic as well.

Scroll depth tracking showing unengaged users.

Adjusted Bounce Rate (ABR)
Ah, an oldie but goodie. In 2014 I wrote an article about how to set up Adjusted Bounce Rate (ABR) via Google Tag Manager. You can check out that post to learn more about the setup, but ABR is a great way to get a stronger feel for actual bounce rate. My post explains the problems with standard bounce rate, which doesn’t take time on page into account. So standard bounce rate is skewed. ABR, on the other hand, does take time on page into account and you can set whatever threshold you like based on your own content.

For example, if you write longer-form content, then you might want to set a longer threshold (maybe a minute or more). But if you write shorter articles, then you might want to set a shorter ABR threshold (like 30 seconds or less). Once the time threshold is met, an event fires in Google Analytics causing the session to NOT count as a bounce (even if the person only visits one page).
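
To make the mechanics concrete, here’s a minimal sketch of the classic way ABR is implemented when you’re not using GTM’s timer trigger: a timer fires an interaction event once the visitor has been on the page for your chosen threshold. It assumes classic analytics.js (the ga global), and the 30-second threshold and event names are just examples:

```typescript
// Minimal sketch of Adjusted Bounce Rate with analytics.js (without GTM's timer trigger).
// Assumes the classic ga() global is loaded; the threshold and event names are examples.
declare const ga: (...args: unknown[]) => void;

const ABR_THRESHOLD_MS = 30 * 1000; // 30 seconds – tune this to your content length

setTimeout(() => {
  // This is an interaction hit (nonInteraction defaults to false), so once it fires,
  // the session is no longer counted as a bounce – even on a one-page visit.
  ga("send", "event", "Adjusted Bounce Rate", "Stayed 30+ seconds");
}, ABR_THRESHOLD_MS);
```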

It’s not uncommon to see Bounce Rate in Google Analytics drop off a cliff once you implement ABR. And that makes complete sense. If someone visits your article and stays on the page for six minutes, then that shouldn’t really count as a bounce (even if they leave without visiting any other pages). The person was definitely engaged. Here’s what that drop looked like for a client that implemented adjusted bounce rate this summer:

Bounce rate drops when adjusted bounce rate is implemented.

Here is an example of two highly-engaged posts of mine about major algorithm updates. The adjusted bounce rate is just 12% for one and 13% for the other. Many people visiting these pages spend a lot of time reading them. So even if that’s the only page they read, it shouldn’t be counted as a bounce.

Adjusted Bounce Rate example

So, now we have two of the three metrics set up for helping gauge user happiness. Next, I’ll cover the third, Average Time On Page (a standard metric in Google Analytics). When you combine all three, you can better understand whether visitors are staying on a page past a set time threshold, how far they are scrolling down the page, and how long they are staying on that page overall.

Average Time On Page
I find there’s a lot of confusion about time metrics in Google Analytics. For example, why might Average Time On Page show one value while Average Session Duration is shorter than that? How can that be? Well, Mike Sullivan from Analytics Edge wrote a post about this a while ago. I recommend reading that article to get a feel for how the metrics work. In a nutshell, Average Time On Page excludes bounces (one-page visits). That’s because Google Analytics needs a subsequent pageview to calculate how long somebody remained on a page. Remember, we are setting up scroll depth tracking as a non-interaction hit, so it won’t impact bounce rate or time metrics.

Therefore, Average Time On Page will tell you how long users are staying on a piece of content when they visit another page on the site. Sure, it excludes bounces (so it’s not perfect), but it’s still smart to understand time on page for users that click through to another page on the site.
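
If it helps, here’s a quick worked example of the math, assuming Google’s documented formula for the metric (time on page divided by pageviews minus exits). The numbers are made up:

```typescript
// Small worked example of Avg. Time On Page, assuming GA's documented formula:
// total time on page / (pageviews - exits). Time can only be measured when a later
// hit exists, so exit pageviews (including bounces) don't contribute to the average.
// All numbers below are made up for illustration.
const totalTimeOnPageSeconds = 3600; // measured across non-exit pageviews
const pageviews = 100;
const exits = 40; // includes the one-page (bounce) visits

const avgTimeOnPageSeconds = totalTimeOnPageSeconds / (pageviews - exits);
console.log(`Avg. Time On Page: ${avgTimeOnPageSeconds} seconds`); // 60 seconds
```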

As an example, here’s my post about the August 1, 2018 algorithm update. The Average Time On Page is 12:48 for the responsive page and 17:52 for the AMP version. In web time, that’s an eternity.

Average time on page high for engaged posts.

And on the flip side, here’s a page from a different site that needs some help. It hasn’t been updated in a while and users are identifying that pretty darn quickly. Even for people that click through to another page, the Average Time On Page is just 0:55. That’s a major red flag for the site owner.

Low avg time on page.

Some final tips:
Now that I’ve covered three methods for better understanding user happiness and engagement, I wanted to share some final tips. Once you are collecting data, you can slice and dice the information in Google Analytics in several ways.

  • First, review all three metrics for each piece of content you are analyzing. If you find high adjusted bounce rate, low scroll depth, and low average time on page, then there’s clearly an issue. Dig in to find out what’s going on. Is the content old, is there a relevancy problem based on query, etc.?
  • You might find content where scroll depth looks strong (people are scrolling all the way down the page), but adjusted bounce rate is high. That could mean people are quickly visiting the page, scrolling down to scan what’s there, and then leaving before your ABR time threshold is met. That could signal a big relevancy problem.
  • You can use segments to isolate organic search traffic to see how users from Google organic are engaging with your content. Then you can compare that to other traffic sources if you want.
  • You can also segment mobile users and view that data against desktop. There may be some interesting findings there from a mobile perspective.
  • Heck, you could even create very specific segments to understand how each one is engaging with your content. For example, you could create a segment of female visitors ages 18-34 and compare that to male users. Segments in Google Analytics can be extremely powerful. I recommend reviewing that topic in detail (beyond just what I’m covering today with scroll depth tracking, ABR, etc.).
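
And if you’d rather pull these metrics programmatically instead of clicking through the UI, the Google Analytics Reporting API (v4) can return the same data per page. Here’s a rough sketch – the view id and date range are placeholders, and you’d still need a valid OAuth access token, which I’m glossing over here:

```typescript
// Rough sketch: pull bounce rate and avg time on page per landing page via the
// Google Analytics Reporting API v4. The view id and date range are placeholders,
// and ACCESS_TOKEN stands in for a real OAuth 2.0 token (not shown here).
const ACCESS_TOKEN = "ya29.placeholder";

async function fetchEngagementReport(): Promise<void> {
  const response = await fetch("https://analyticsreporting.googleapis.com/v4/reports:batchGet", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      reportRequests: [
        {
          viewId: "123456789", // placeholder view id
          dateRanges: [{ startDate: "30daysAgo", endDate: "today" }],
          dimensions: [{ name: "ga:landingPagePath" }],
          metrics: [
            { expression: "ga:bounceRate" },
            { expression: "ga:avgTimeOnPage" },
          ],
        },
      ],
    }),
  });
  console.log(JSON.stringify(await response.json(), null, 2));
}

fetchEngagementReport().catch(console.error);
```

Note that once ABR is in place, the bounce rate returned here already reflects the adjusted numbers, since the timer events are interaction hits.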

Summary – Using Analytics As A Proxy For User Engagement, User Happiness, and Content Quality
I always recommend conducting user studies to truly find out what your target audience thinks about your content. You can watch how they engage with your content while also receiving direct feedback about what they liked or didn’t like as they browse your site. But you can also use analytics as a proxy for engagement and user happiness (which can help you identify content quality problems or relevancy issues).

By combining the three methods listed above, you can better understand how users are engaging with your content. You’ll know if they are staying for a certain amount of time, how far they are scrolling down each page, and then you’ll see average time on page (excluding bounces). It’s not perfect, but it’s better than guessing.

And once you collect the data, you may very well choose to refine your content strategy. And the beautiful part is that you can start collecting data today. So go ahead and set up scroll depth tracking and adjusted bounce rate. Then combine that with average time on page so you can use the three-pronged approach I covered in this post. Leverage the power of Google Analytics and Google Tag Manager (GTM). You never know what you’re going to find.

GG

 

Filed Under: google, google-analytics, seo, tools, web-analytics
