The Internet Marketing Driver


Exit The Black Hole Of Web Story Tracking – How To Track User Progress In Web Stories Via Event Tracking In Google Analytics

November 2, 2020 By Glenn Gabe

How to track user progress in Web Stories via event tracking in Google Analytics.

Google’s Web Stories, previously called AMP Stories, can provide an immersive AMP experience across both desktop and mobile. Google has been pushing them hard recently and stories can rank in Search, Google Images, and in Google Discover. On that front, Google recently rolled out a Web Story carousel in Discover, which can definitely attract a lot of eyeballs in the Discover feed. And those eyeballs can translate into a lot of traffic for publishers.

I’ve covered Web Stories heavily over the past year or so and I’ve written a blog post covering a number of tips for building your own stories. I have also developed several of my own stories covering Google’s Disqus indexing bug and the upcoming Page Experience Signal.

Building those stories by hand was a great way to learn the ins and outs of developing a story, understanding the functionality available to creators, the limitations of stories, and how to best go through the life cycle of developing a story. As I explained in my post covering various tips, it’s definitely a process. Planning, creativity, and some technical know-how go a long way in developing an engaging and powerful Web Story.

From a feedback perspective, analytics can help creators understand how well their story is being received, if users are engaged, and how far they are progressing through a story. Unfortunately, that has been challenging to understand and accomplish for many publishers starting off with Web Stories. And that situation has led me to research a better way to track stories via Google Analytics. That’s what I’ll be covering in this post. By the end, you’ll be tracking Web Stories in a more powerful and granular way. I think you’ll dig it.

Analytics for Web Stories – Confusing For Many Creators
From the start, it seemed like analytics took a back seat for stories. There wasn’t great documentation about how to add analytics tracking, and the WordPress plugin originally didn’t even have an option for including tracking codes. That changed recently, which was great to see, but questions still remained about how to best track Web Stories. For example, can you use Google Tag Manager? Can you add advanced tracking to understand more about how users are engaging with your story? Can you track specific elements in your story?

Basic page-level tracking in Web Stories.

After looking at basic metrics for my stories in Google Analytics (yawn), I went on a mission to enhance my story tracking setup. Unfortunately, there’s still not a one-stop resource from Google for tracking Web Stories (hint-hint Paul Bakaus), but I was able to dig into various documents and articles and figure out a pretty cool solution that’s easy to set up. I’ll provide that setup below so you can start tracking your own stories in a more powerful and granular way.

Tracking User Progress Through A Web Story: A Simple Goal
If you just add a basic tracking code to your story, you will at least know how many people are viewing the story and gain basic metrics for the page (just like any other page in Google Analytics). But that doesn’t really do Web Stories justice…

Web Stories are a unique combination of pages within a story. In other words, Web Stories string together multiple pages, which make up the larger story. Users can click back and forth to view each page within the story. You can also automatically advance the user to the next page after a specific amount of time. And once a user completes a story, they are presented with a “bookend”, which is a final page that contains information selected by the creator.
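
To make this concrete, here is a minimal sketch of a Web Story’s skeleton. The title, IDs, file paths, and timing are placeholders, and the standard AMP boilerplate and script includes are omitted:

<amp-story standalone title="Example Story" publisher="Example Publisher"
    publisher-logo-src="/images/logo.png" poster-portrait-src="/images/poster.jpg">

  <!-- Each amp-story-page is one page within the larger story -->
  <amp-story-page id="cover">
    <amp-story-grid-layer template="fill">
      <amp-img src="/images/cover.jpg" width="720" height="1280" layout="responsive"></amp-img>
    </amp-story-grid-layer>
  </amp-story-page>

  <!-- auto-advance-after automatically advances users after a set time -->
  <amp-story-page id="page-2" auto-advance-after="7s">
    <amp-story-grid-layer template="vertical">
      <h1>Page two of the story</h1>
    </amp-story-grid-layer>
  </amp-story-page>

  <!-- The bookend is presented once a user completes the story -->
  <amp-story-bookend src="/bookend.json" layout="nodisplay"></amp-story-bookend>

</amp-story>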

With a basic tracking setup, Web Stories are like a black hole. People enter, and you have no idea what’s going on within the story. For example, how many pages have they viewed, how far are users progressing through the story, did they reach the bookend, how long did it take to get to the end, etc.?

Wouldn’t it be awesome to be able to track that information?

The good news is that you can, and it’s pretty easy to set up. Below, I’ll cover how to add the necessary tracking to your Web Stories so you can gain more information about how users are engaging with your stories. And beyond just setting up this level of tracking, I wanted to provide more information about how events and triggers work in stories so you can start testing your own advanced tracking setup. Let’s jump in.

Web Story Tracking: A Top-level View of What We Are Trying To Accomplish
Before I cover the tracking framework you can use today to better track your Web Stories, let’s cover the basic bullet points of what we are trying to achieve:

  • Track user progress through each Web Story you have published, i.e., track each page within the story to understand how far users are progressing.
  • Document the Web Story title and organize each page within the story so they can be tracked properly.
  • Track when users reach the final page in your Web Story so you can identify how many users actually reach the end.
  • Track when users enter your bookend, which is a special page at the end of your Web Story that contains social sharing and related links. It’s just another way to understand when users have reached the final part of your story.

For example, wouldn’t it be incredible to see the following? That’s a sample Web Story and data for each page within the story. Yep, this is what we want… let’s go get it:

Event reporting in Google Analytics for Web Stories.

The Inner Workings: Events, Triggers, and Variables
Every Web Story issues events as a user progresses through a story. For example, when a user progresses from one page to another within a story, the “story-page-visible” trigger fires each time a new page is loaded. You can capture a trigger like that and report it in Google Analytics using event tracking.

When sending those events to Google Analytics from within your story, you can provide the typical event parameters like event_action, event_category, and event_label so you can track your Web Story data in your GA reporting.

Event tracking in Google Analytics.

List of Web Story Triggers and Variables:
There are several triggers you can capture, and a list of them can be found on GitHub in the AMP Project’s AMP HTML repository. In addition, you can view the variables available to you via AMP by checking out the AMP HTML Variable Substitutions page. Between the two documents, you can learn how to combine triggers and variables to set up advanced tracking.

Web Story triggers.
AMP variables.

A Template From The AMP Project!
During my research, I was excited to see that the AMP blog published a post about tracking Web Stories and it contained a skeleton structure for advanced tracking! For example, the post listed a code snippet for firing an event every time a user progresses from one page to another within a web story (to track user progress). We could use this snippet, and expand on it, to customize our tracking.

Here is an image from the AMP project’s blog post about tracking user progress. Again, this is exactly what we are looking to do.

Analytics setup for web stories.

By using the right triggers and variables and then firing events from our Web Story, we can get a much stronger picture of user engagement. Below, I’ll provide the triggers and events we’ll use and then I’ll provide the final code later in the post.

Note, there are three triggers we’ll be capturing in our Web Story and we’ll fire an event when those triggers are captured.

  • Trigger: story-page-visible. When each new page in the story loads, story-page-visible fires. When that fires, you will send an event to Google Analytics with the following variables.
  • event_name: You can name this whatever you want. Event tracking in Google Analytics focuses on the following three fields.
  • event_action: Name this something descriptive. For this example, we’ll use “story_progress”, which is what the original blog post covering this approach used.
  • event_category: For this field, I’m going to use a variable for Web Stories, which is the title of the Web Story. The variable is ${title}, which is what’s present in your title tag. I linked to the variables available to you earlier in this post.
  • event_label: For the final field, we’ll use both the page index value (page number) and the ID for the page within the Web Story (which is the descriptive name for the page you provide in your story). This will enable us to see how many times a specific page within the Web Story is loaded by users. The variables are ${storyPageIndex} and ${storyPageId} and you can combine the two in your code. I added “Page: ${storyPageIndex} ID: ${storyPageId}” to combine both in your event reporting. It makes it easier to see the page number and then the ID associated with that page. BTW, thank you to Bernie Torras who pinged me on Twitter about storyPageIndex, which is a great way to capture the page number within your story.

Next, we want to know when users visit the final page of each Web Story. That can help us understand how many people are actually reaching the end. To accomplish that, we can add another trigger:

  • Trigger: story-last-page-visible. Note, this is not the bookend. Instead, this is the last page in your story before the bookend is displayed. Story-last-page-visible fires when a user reaches that final page in your story before the bookend.
  • event_name: You can name this whatever you want. Just like earlier, the reporting in Google Analytics focuses on the following fields.
  • event_action: Name this something descriptive. For this example, we’ll use “story_complete” since the original blog post covering this tracking framework used that action name.
  • event_category: Make sure to use the same event_category for this trigger as you did earlier to keep the various triggers organized by Web Story. The variable is ${title}. Then you can drill into a specific story in Google Analytics and view the actions and labels associated with that one story.  

And finally, let’s add one more trigger to understand when users reach the bookend in your Web Story, which is a special page at the end that contains social sharing and related links. It’s just another way to understand that users made it to the very end of your Web Story. You’ll need to add one more section of code to your tracking script:

  • Trigger: story-bookend-enter
  • event_name: You can name this whatever you want. As I mentioned earlier, the reporting in Google Analytics focuses on the following fields.
  • event_action: You can also name this whatever you want. For this example, let’s use story_bookend_enter.
  • event_category: Like earlier, I’m going to use a variable for Web Stories, which is the title of the Web Story. The variable is ${title}. Remember to keep the category consistent with each trigger so you can view all events within a single web story in your reporting.

By adding this setup, the event tracking reporting in Google Analytics will enable you to drill into specific stories, see the number of “pageviews” for each page within a story, know how many users are reaching the final page in a story, and then how many are viewing the story bookend. It’s a much stronger setup than just seeing a single pageview for your Web Story (AKA, the black hole of Web Stories).

Here is the final code based on what I mapped out above. Make sure you replace the placeholder GA account ID with your own:

<amp-analytics type="gtag" data-credentials="include">
  <script type="application/json">
	{
	  "vars": {
		"gtag_id": "UA-XXXXXX-X",
		"config": {
		  "UA-XXXXXX-X": {
			"groups": "default"
		  }
		}
	  },
	  "triggers": {
		"storyProgress": {
		  "on": "story-page-visible",
		  "vars": {
			"event_name": "custom",
			"event_action": "story_progress",
			"event_category": "${title}",
			"event_label": "Page: ${storyPageIndex} ID: ${storyPageId}",
			"send_to": ["UA-XXXXXX-X"]
		  }
		},
		"storyEnd": {
		  "on": "story-last-page-visible",
		  "vars": {
			"event_name": "custom",
			"event_action": "story_complete",
			"event_category": "${title}",
			"event_label": "${totalEngagedTime}",
			"send_to": ["UA-XXXXXX-X"]
		  }
		},
		"storyBookendStart": {
		  "on": "story-bookend-enter",
		  "vars": {
			"event_name": "custom",
			"event_action": "story_bookend_enter",
			"event_category": "${title}",
			"send_to": ["UA-XXXXXX-X"]
		  }
		}
	  }
	}
  </script>
</amp-analytics>

How To Add Your GA Tracking Script To A Web Story:
Once you have your tracking script ready, you need to add that to your Web Story code. I’ve been hand-coding my stories so it’s easy to have control over where the amp analytics tag is placed. In my stories, I place the amp analytics tag after the final page in my story, but before the bookend tag and the closing <amp-story> tag. If you place the amp analytics tag outside of your <amp-story> tag, the Web Story will not be valid. You can see the placement of my amp analytics tag in the screenshot below.

Amp analytics placement in Web Story code.

Make sure your amp-analytics tag is placed before the closing amp-story tag.

Placing amp analytics tag before bookend and closing amp story tag.
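
Here is a condensed sketch of that placement (truncated, with placeholder content, to show the ordering only):

<amp-story standalone title="Example Story" publisher="Example Publisher"
    publisher-logo-src="/images/logo.png" poster-portrait-src="/images/poster.jpg">

  <amp-story-page id="final-page">
    <amp-story-grid-layer template="vertical">
      <h1>The last page in the story</h1>
    </amp-story-grid-layer>
  </amp-story-page>

  <!-- The tracking script goes here: after the final page... -->
  <amp-analytics type="gtag" data-credentials="include">
    <!-- the JSON config from the final code above -->
  </amp-analytics>

  <!-- ...but before the bookend and the closing amp-story tag -->
  <amp-story-bookend src="/bookend.json" layout="nodisplay"></amp-story-bookend>

</amp-story>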

A Note About The Web Stories WordPress Plugin:
Again, I have been hand-coding my stories and haven’t dug too much into the WordPress plugin. I’ve heard good things about it, but I really wanted to learn the ins and outs of building a story, so I stuck with hand-coding my stories.

The WordPress plugin finally added the ability to easily include your Google Analytics tracking ID, but it doesn’t look like you can add advanced-level tracking easily (like what I’m mapping out in this post). I’ll reach out to the Web Story team to see if they will add the ability to accomplish this in the future, but for now I think you’ll be limited to the basic tracking I mentioned earlier.

{Update: WordPress Plugin Automatically Firing Events}
I have very good news for you if you are using the Web Stories WordPress plugin. Brodie Clark pinged me today after going through my post. He is using the Web Stories plugin, checked the Events reporting in Google Analytics, and noticed the plugin is automatically firing those events! That’s amazing news for any plugin users!

Again, I’ve been hand-coding my stories so I haven’t played around too much with the plugin. But that’s outstanding news, since plugin users can view user progress and a host of other events being fired within their stories.

Once you add your GA tracking ID, it seems the plugin is automatically capturing the data and firing events:

Adding a Google Analytics tracking ID to the WordPress Web Story plugin.

Here are the triggers being captured based on what Brodie sent me:

WordPress Web Story Plugin automatically firing events.

And here is what it looks like once you click into story_progress. The plugin is using storyPageIndex rather than storyPageId, so you can see the page number in the reporting. I’m actually thinking about combining the two.

Tracking user progress through Web Stories via the WordPress plugin.

How To Test Your Tracking Via Google Analytics Real-time Reporting
The easiest way to test your new tracking setup is to upload your story to your site and view real-time reporting in Google Analytics. There’s a tab for Events, where you can see all of the events being triggered. Just visit your story and look for the various events, actions, and labels.

Viewing real-time reporting in Google Analytics for Web Story events.

Viewing Web Story Tracking In Google Analytics:
Once your story containing the new tracking setup is live, and users are going through it, you can check the Events reporting within the Behavior tab in Google Analytics to view all of the events that have been captured. This is where the naming convention we used comes in handy. When you click “Top Events”, you will see the event categories listed. We used the story title as the category, so you will see a list of all stories where events were captured.

When you click into a story, you will see each action that was captured (each trigger that was captured as an event).

Viewing Web Story triggers in the Events reporting in Google Analytics.

And by clicking an action, you can see the labels associated with that action. For example, story_progress enables you to see the list of pages that users viewed within your story and how many events were triggered for each page (helping you understand how far users are progressing through each story).

Viewing user progress through a Web Story in Google Analytics.

And there you have it! You can now track your Web Stories in a more powerful and granular way. It’s not perfect, but much stronger than the “black hole” approach of just basic page-level metrics. And remember, you can totally expand on this setup by adding more triggers and using the variables available to you.

Summary – Creep out of the black hole of Web Story tracking.
I hope you are excited to add stronger tracking to your Web Stories. As I documented in this post, you can creep out of the black hole of story tracking and analyze user progress through your story. By doing so, you can better understand user engagement and then refine your stories to increase engagement and user happiness.

So don’t settle for black hole reporting. With a basic level of understanding of event tracking, triggers, and variables, you can set up some very interesting tracking scenarios. Good luck.

GG

Filed Under: google, google-analytics, seo, tools, web-analytics

Web Stories Powered by AMP – 12 Tips and Recommendations For Creating Your First Story

July 15, 2020 By Glenn Gabe

I’ve been following the progression of Web Stories (formerly called AMP stories) since 2018 when the developer preview launched, and they have been hard to ignore recently. Google has been pushing the Story format pretty hard over the past year and I’ve seen stories show up more and more both in Search and in Discover.

If you aren’t familiar with Web Stories, they are immersive AMP experiences that let publishers cover a topic over multiple pages or screens. It’s similar to the stories format you’re familiar with across social networks like Instagram, Snapchat, and Facebook.

Here are screenshots of the Web Story I created. The first image is the story on desktop and the second image shows the story on mobile:

From a visibility standpoint, Web Stories can rank just like any other web pages and you will also get a nifty story icon in the mobile search results. I’ll cover more about the ranking effect of stories later in the post.

Jumping in and building my first Web Story, powered by AMP:
Based on Google’s recent focus on Web Stories, I decided to learn more about the format and publish one of my own. So, last week I chose a topic I had researched heavily (Google’s Disqus indexing bug) and dove in head-first. I published that story and shared it across social media accounts. It definitely piqued the curiosity of many SEOs and ended up driving quite a bit of traffic (it still is actually). More about that soon.

If you’re interested in seeing what a Web Story looks like now, you can check out my story about the Disqus indexing problem. If you’re already familiar with how Web Stories work, then you can continue below.

After developing and publishing the story, there were some important points, tips, and recommendations I thought could help others looking to get involved with Web Stories. So I compiled a list of 12 tips and recommendations in this post.

The information below spans a number of topics from creative strategy to technical execution to tracking to Search visibility. I do believe we’ll start seeing more and more Web Stories across surfaces and I hope my tips are helpful as you start to explore stories for your own projects!

Web Stories: 12 Tips for building your first story.

1. Story Creation: It’s a process…
When I sat down to map out my first Web Story, it was clear it wouldn’t be a quick process. It reminded me of developing multimedia applications, or a video production, since there was a lot involved during the planning stages. For example, storyboarding the various screens, mapping out visual assets that would be needed, determining audio and video components that would support the story, and more.

I love working on that type of project (I worked heavily in that niche earlier in my career), but it’s important to understand that building a Web Story is not a ten-minute project. It’s a process that will take some time if you’re going to do it right.

Planning is critically important or you can end up spending enormous amounts of time on areas that aren’t impactful.  Topic-wise, my recommendation is to cover a very specific topic with each story (and one you are extremely familiar with).

2. Storyboard First: Don’t skip this critically important step.
I mentioned earlier that a storyboard was necessary. If you haven’t storyboarded a video or multimedia application, it’s essentially a plan of what will be covered step-by-step (or page by page for Web Stories). That includes the copy, visuals, animation, audio, and video for each page in the story.

Once you map out a solid storyboard, you can start designing and developing based on that plan. Having a strong storyboard can make it much easier to execute than “riffing” through each page (designing, writing, and building on the fly).

When building a storyboard, you can sketch out the screens by hand or use software to help you map out each page in your story. You can even use PowerPoint or Google Slides to storyboard your Web Story if you want. It’s not about the technology, it’s more about having each page planned before digging in. I find sketching out the screens works well for me.

Here is what a storyboard looks like. And note, my sketches are definitely not as good as what you are seeing in the photo! It’s more about mapping out your production and not being an amazing artist. :)

BTW, Masterclass has a blog post featuring Jodie Foster that covers storyboards (for video). You can check that out for more information and tips about creating them.

3. Photoshop is your friend. No… it’s your core weapon in the game of Web Stories.
The visual nature of Web Stories means you’ll need killer visuals. And that typically means you’ll be working in Photoshop heavily (or some type of image editing application). While developing my story, I found myself combining, masking, layering, and adjusting images to get the right visuals.

In Web Stories, you can use background images and then layer other images (with transparency if needed). You can also use animated gifs to provide more effect. Regardless, you’ll be working with images heavily.

The core point is that you should be comfortable editing images in an application like Photoshop. It will be your best friend throughout the process of building your story. And it can make a huge difference in the impact that your story has with users.

4. Multimedia for supporting your Web Story, including video, audio, and animation.
Web Stories provide an immersive AMP experience and support video, audio, and animation. Multimedia elements like this can add a powerful component to your stories that enable users to experience your topic on another level. You can check the AMP documentation for the various AMP components that are supported.

But, there’s a fine balance between providing supporting multimedia components and overwhelming your users with unnecessary distractions. I recommend testing various multimedia assets locally and in your staging environment. Have other people go through your story while you’re developing it. You might find certain elements are over-the-top, while others really pack an exciting punch.

Remember, multimedia should support your story and not distract users from the story. In my first story, I added a subtle video of typing a comment in Disqus (that questions whether the comment will be indexed):

5. Know Your (HTML) Tags
Web Stories (like other AMP urls) support a subset of html tags. This is important to understand as you plan and build your story. You can review those tags, and more, in the AMP html specification in the developer documentation. Also, some html tags should be replaced with specific amp tags (like image, video, and audio).
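
For example, here is roughly how those amp replacements look (the paths, dimensions, and alt text are placeholders):

<!-- amp-img replaces the standard img tag -->
<amp-img src="/images/example-chart.png" width="720" height="480"
    layout="responsive" alt="An example chart"></amp-img>

<!-- amp-video replaces the video tag (note that stories require a poster image) -->
<amp-video src="/video/example.mp4" poster="/images/video-poster.jpg"
    width="720" height="1280" layout="responsive" autoplay loop></amp-video>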

Also, you can style your Web Story elements via CSS (up to a 75K limit for your CSS, whether the styles are provided inline or via linked stylesheets).

And from a linking standpoint, I’ve received a lot of questions about adding links to external content from within a story. Yes, you can definitely do that and I included two links in my first story to supporting content.

5a. Deep-linking to specific pages in your Web Story is possible.
Since a Web Story is a collection of pages, there are times you might want to drive users to a specific page in that story. In other words, deep link users to page 5 in your story. Saijo George just pinged me about finding deep linking in the documentation. It’s called Branching and enables you to add a hash to a url and drive users to a specific page based on that page’s ID.

It’s very easy to implement and uses the following format:
https://www.domain.com/story-url.html#page=<page-id>
Just replace <page-id> with the actual page ID you are using in your Web Story and the link will drive users directly to that deeper page.
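
For instance, a hypothetical link targeting a page with the ID “page-5” would look like this:

<!-- example.com, my-web-story.html, and page-5 are placeholder values -->
<a href="https://www.example.com/my-web-story.html#page=page-5">Jump straight to page 5 of the story</a>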

For example, here is a link that will take you to a deeper page in my Web Story about the Page Experience Signal (the page about speed).

Deep linking in Web Stories.

6. Story Execution: Code first, use tools later.
There are several story creation tools on the market, but I recommend hand coding your first Web Story. By coding your story, you can learn a lot about how Web Stories work, the code running each story, how the setup works, and more. This will also help you debug stories in the future that aren’t behaving correctly in production.

After you learn more about how stories are actually coded, you might check out various story builders. I haven’t tried many third-party tools for Web Stories yet, but there is an official WordPress plugin in beta that looks pretty cool. The AMP team announced that a few weeks ago. If you’re brave, you can install it now and play around. That said, I still recommend hand-coding your first Web Story! :)

7. Web Stories Are Device Agnostic: They work on both desktop and mobile.
Since Stories are AMP-based, many people think they are just for mobile. That’s not true. Web Stories run on desktop as well! You can upload Stories to your server just like any other webpage and they can be accessed directly by users and by Google.

You will definitely want to test out your Web Story across both desktop and mobile to make sure everything looks good and works properly. You can’t guarantee that users will just be arriving via mobile, so thoroughly test your story across browsers, operating systems, and devices.

Here is a screenshot of my Disqus Web Story running on desktop:

8. Speed: Web Stories are served from the AMP cache.
Since Stories are AMP-based, you can leverage the AMP cache for lightning-fast delivery. As long as they are valid AMP urls, Google can add your Story to the AMP cache. Once it’s cached, that version of your story will be served via the AMP cache in the mobile search results.

Google’s AMP cache enables your AMP urls and Stories to load near-instantly (an important benefit of running AMP).

9. Tracking Stories: Don’t forget analytics!
You will definitely want to track your Web Stories, so make sure to include the necessary tracking scripts. To start, I would focus on just getting basic Google Analytics tracking implemented and then you can expand from there (to include advanced tracking throughout your Web Story).

For example, I wrote a post explaining how to track user progress through your Web Stories, how to know if users are completing the story, etc. That post covers both hand-coding stories and using the Web Stories WordPress plugin.

Note, I definitely ran into some initial challenges getting GA tracking to work, but ultimately got it working. I also used Google Tag Manager, which was a little tricky to set up. But again, it is tracking well now (at least basic tracking). I plan to expand that soon.

10. Rankings: Can Web Stories rank? Yes, they can, and across surfaces.
Over the past year, I’ve seen Web Stories show up more and more in the search results. Also, Google can display a Visual Stories carousel in the search results. So yes, Web Stories can rank.

And when your stories rank, your listing receives an image thumbnail in the mobile SERPs, along with a nifty Web Story icon. Both can help your story gain attention and can help drive stronger click through rates from the mobile SERPs.

Here is my web story ranking in the desktop search results and you can check previous screenshots in this post to see it ranking on mobile.

In Google Search Console (GSC), you can track stories just like any other urls in the performance reporting. You can see queries that users searched for that yielded your story in the SERPs, and you can see impressions, clicks, position, and click through rate. Also, there is an AMP Story filter in GSC where you can isolate your Web Stories.

Also, Web Stories can show up in Google Discover and there is dedicated reporting for Discover in GSC. That’s a good segue to my next tip…

11. Web Stories in Google Discover: A hidden weapon for publishers (at least for now).
Over the past year, I have noticed more Web Stories showing up in Google Discover. Actually, I shared that on Twitter several times and explained it was a huge opportunity for publishers that are early adopters of the format. Here’s my tweet from February after seeing Web Stories show up in Discover:

Interested in Google Discover? Here's an opportunity that many aren't thinking about. I just saw an AMP story in my Discover feed. They can take up a lot of real estate… Def. caught my attention. It didn't show the typical Stories icon for some reason. Worth checking out: pic.twitter.com/lcABOEgLAr

— Glenn Gabe (@glenngabe) February 20, 2020

And here is what a Web Story looks like in the Discover feed. Notice the larger image in the feed (with a 3×4 aspect ratio):

Well, after publishing my own story, I think I was right. My Web Story has gained over 304K impressions since being published last week. So, Google is pushing out stories hard in the Discover feed (for sites that have earned their spot in the feed).

Although the story has driven over 800 clicks from Discover in the last week, the click through rate has been relatively low (compared to Discover stats across publishers I’m helping). I’m not sure if that’s the narrow focus of the story, how it looks in the feed, where it’s placed in the feed, etc. That’s tough to figure out since you can’t see exactly how others are seeing your story in their own Discover feed. For example, the story could be in a carousel of stories, stand-alone in the feed (which can yield larger images), it could be ranking lower in their feed, etc.

I’ll be testing out more stories soon to see how they perform over time. Regardless, if you’re a publisher looking to gain more Discover visibility, you should definitely check out Web Stories.

Note, Google just updated its help document for Google Discover and provided more information about how content appears in the feed. Kenichi Suzuki found that the other day and tweeted the update.

Google's Discover documentation now mentions E-A-T.

As Kenichi pointed out, Google now mentions expertise, authoritativeness, and trust (E-A-T) in that document. For example, “Our automated systems surface content in Discover from sites that have many individual pages that demonstrate expertise, authoritativeness and trustworthiness (E-A-T).” That’s important to understand for Discover (and for Search in general).

So again, if you’re eligible to show in Discover, and you have earned your spot in someone’s feed, then Web Stories can provide a powerful Discover listing. And it seems Google is pushing more stories in the Discover feed based on what I’ve seen over the past year.

In addition, Google recently began testing a Web Stories carousel in Discover. I haven’t seen that yet in my feed, but it shows that Google is experimenting heavily with stories in Discover. This is all important for publishers looking to gain more visibility.

12. Web Story Content Strategy
I could write an entire post on this topic, but I wanted to quickly bring it up now. When planning your Web Stories, it’s important to figure out your content strategy per story. There are several paths you can take. For example, will your Stories be stand-alone projects, will they be combined with another post, will you repurpose other articles into stories, will you be covering events as Web Stories, etc.?

On that note, Web Stories support live stories, which enable you to cover live events. It’s just an example of the various ways that Web Stories can be utilized.

The sky’s the limit creativity-wise, but that can also be overwhelming. I recommend testing various approaches and seeing what works best for your situation. Experimentation is key, and you might be surprised by the results.

Summary: Web Stories, a new immersive format powered by AMP
If you are interested in testing the waters with Web Stories, then I hope the various tips and recommendations I provided were helpful. Again, Google seems to be pushing Web Stories hard and they can rank well in both Search and Discover.

In my opinion, it’s definitely worth testing out stories for your own site. But as I explained earlier, it’s not a ten-minute project… it’s definitely a process. I recommend taking a focused approach to building your first Web Story. Map it out, storyboard it, and hand code it. You never know, when it’s all said and done, you might have a good story to tell. :)

GG

Filed Under: google, seo, tools, web-analytics

Visualizing The SEO Engagement Trap – How To Use Behavior Flow In Google Analytics To View User Frustration [Case Study]

June 15, 2020 By Glenn Gabe

When thinking about content quality, it’s incredibly important to meet or exceed user expectations based on query. If your content cannot do that, then users can end up extremely frustrated as they search Google, click through your listing in the search results, scan your page, and not find what they were looking for. And that can lead to all sorts of problems SEO-wise.

When you don’t meet or exceed user expectations, one possible result is a user jumping back to the search results to find another solution (low dwell time). That’s clearly not what you want, but beyond the jump back to the SERPs, there are also times users stay on the site and keep digging believing they can ultimately find what they were looking for. That’s a classic engagement trap where users can end up running around a site visiting page after page without finding what they were looking for.

From a broad core algorithm update standpoint, Google is not going to react well over time when sites frustrate users and cannot provide answers that meet or exceed user expectations. We’ve seen that time and time again when broad core ranking updates roll out. In addition, I’ve seen this for a very long time with major algorithm updates overall, including during medieval Panda days.

As I’ve said many times before while writing about major algorithm updates, “Hell hath no fury like a user scorned”. In other words, don’t frustrate your users or bad things can happen during major Google algorithm updates.

This sounds logical, but how can you show a site owner user frustration like I explained above? For example, real data showing that users are frustrated, versus just providing your opinion. Well, I surfaced a great example recently and I’m going to share more about that below. It’s something you can do for your own site, and you can accomplish it using a tool that’s freely available and right under your nose – Google Analytics.

Backing Up SEO Intuition With Data – Visualizing User Frustration
I’m helping a site now that has been negatively impacted by two broad core updates in a row. I started helping them in April after they reached out to me about being hit by the January 2020 Core Update. Then as I was auditing the site, and they were making changes, the site was hit a second time during the May 2020 Core Update.

Remember, recent changes will typically not be taken into account with broad core updates and the site was just in the beginning stages of implementing changes. You can read my latest post about the May 2020 Core Update to learn more about that point (and other important points that site owners should know about Google’s broad core updates).  

During the audit, I surfaced a number of key problems and the site owner has been moving as quickly as possible to implement those changes. One of the issues I surfaced involved user frustration and not meeting expectations based on query.

The site had a number of pages that were ranking well for queries where users had a very specific intent (and I would go as far as saying it was an extremely strong intent). Unfortunately, the pages could not help users achieve that goal. They tangentially covered the topic, but couldn’t provide exactly what users were looking for.

And beyond that, the pages contained information and links that might lead some users to believe they could still find a solution deeper on the site. This could lead to the engagement trap situation I mentioned earlier where users end up digging and digging with no possible way to find gold.

To show the impact during a broad core update, here are two pages like this on the site and their respective drops during the January 2020 core update:

As I was auditing the site, I began to experience user frustration like I mentioned earlier… I was running in circles when I put myself in the shoes of a typical user. It was extremely frustrating… and although I could see many clicks leading to these pages, GSC alone couldn’t provide the full story. So I switched to Google Analytics and a feature that I believe is completely underutilized – Behavior Flow.

Behavior flow enables you to visualize the journey users take through your site based on a dimension you select as a starting point. For example, you can view the path through your site based on traffic source, medium, campaign, landing page, and more. I bolded landing page since that’s exactly what I’m going to show you below for the page type I covered earlier.

Here is what Behavior Flow reporting looks like in GA:

A quick note about sampling:
It’s important to know that Flow Visualization reports in Google Analytics are sampled. Behavior Flow is based on 100K sessions within the selected timeframe. To get around sampling, you can always shorten the time range to get the number of sessions under the sampling threshold. Just keep this in mind while reviewing the reporting.

When “Engagement” Could Mean Frustration: How To Use Behavior Flow To Visualize Frustrated Users
During the engagement, I ran a traffic drop report (previously called Panda report) to surface the largest drops by landing page after the January and May core updates. Again, the site had a number of key problems across various page types that were problematic. One was the page type I explained earlier, which not only couldn’t meet or exceed user expectations, but it also had the potential to lead users to more pages thinking they could find a solution.

I checked Google Analytics for landing page metrics and noticed one of the problematic urls had a 65% standard bounce rate. That means 35% of users visited another page on the site (which accounts for nearly 20K users during the timeframe). I wanted to see where those users were going after the landing page. In other words, were they happy with their visit or were they banging their heads against their monitors while endlessly roaming around the site in search of a solution?

So I jumped to Behavior Flow in Google Analytics and set my starting point as that landing page. And once I did, I immediately noticed a problem. And that’s pretty much what I thought I would see!

For this page, 65% dropped off after visiting the landing page during the timeframe I selected (pre-core update). That’s not surprising, but the 35% remaining on the site displayed a path of frustration that was hard to ignore.

The next step for almost all of those users, labeled the “first interaction” after the landing page, was a visit to pages that seemed like they could provide a solution, but really couldn’t. So, the landing page couldn’t meet user expectations, and then 35% of the users landing on that page ended up visiting another page that couldn’t meet expectations either. Not good, but we’re just getting started.

And then the second interaction was eye-opening. 68% of the users that clicked to a second page went back to the original page they visited from the search results! Yep, I was beginning to see user frustration in the visuals. They visited the landing page from the SERPs, couldn’t find what they were looking for, visited another page thinking they could find a solution, but couldn’t. So they went right back to the original landing page still looking for a solution!

But the fun doesn’t stop there. The third interaction reveals users visiting even more pages from that original landing page believing they could find a solution. 20% of the users visiting the original landing page were now three interactions into their visit, with no hope of finding a solution.

But we’re not done yet. I keep using the word “trap” in this post and the fourth interaction underscores why. To quickly recap, we have users landing on a page (mostly from the SERPs), visiting another page to find a solution, then jumping back to the original landing page, and then desperately visiting more pages to find a solution.

What do you think happened next??

Yep, they went back to the original landing page again! 14% of users were now four interactions into the site (but back at the original landing page). Ugh.

And as if they hadn’t had enough, some users were still jumping to other pages from the original landing page to find a solution. Oh, those poor people… :)

As you can imagine, this led to a horribly frustrating experience for those users. And over time, hundreds of thousands of users were going through this process. I’m not saying this is the sole reason the site got pummeled by two core updates (remember, there are “millions of baby algorithms” at play with core updates), but it sure doesn’t help matters.

Using behavior flow in Google Analytics enabled me to visualize the frustration that users were experiencing while entering specific landing pages (and most visitors were from Google organic). And that visualization can go a long way when explaining to site owners how certain things need to change on a site. I know this example did.

I’ll end this post with some final tips and recommendations based on going through the process of visualizing user frustration using Behavior Flow in Google Analytics. Again, the great thing is you can go through this process today, and it’s free.

Using Behavior Flow To Visualize Frustration: Tips and Recommendations

  • Identify high volume pages and dig into the data. Make sure you understand what’s going on user engagement-wise. Don’t assume all visits end with happy users.
  • You can also run a traffic drop report to identify the biggest drops to landing pages across the site based on a core update, and then dig into those pages.
  • Don’t just check a metric like standard bounce rate and make decisions. Dig deeper with other user engagement metrics and with additional functionality like Behavior Flow in Google Analytics.
  • Use specific landing pages as the starting point in Behavior Flow and visualize the journey through your site. This is how you can identify engagement traps like I covered in this post.
  • If you identify engagement traps, take action before Google takes action. Always try to meet or exceed user expectations based on query. Don’t wait until you drop based on a broad core update to fix user engagement problems, content quality, and more. Identify key problems with those pages and fix them as quickly as you can.
  • Track Behavior Flow over time for the pages you identify as engagement traps. You should see improvements in the data and through the visualizations in the reporting after you fix those problems.

Summary – “Hell hath no fury like a user scorned”
This case study demonstrated how you can back up SEO intuition by using Behavior Flow in Google Analytics. By using this underutilized feature in GA, you can view the journey users take through your site. And you can use a number of dimensions as the starting point, including landing page.

Behavior Flow can absolutely help you identify problematic situations and then effectively communicate that story to site owners (backed by data). And that can help drive change. Like I said earlier, the amazing part is that it’s freely available in Google Analytics. You can literally start checking this now. So fire up Google Analytics, find those engagement traps, and fix them soon. Take action before Google takes action.

GG

Filed Under: google, google-analytics, seo, tools, web-analytics

NOT taking the (canonical) hint: How to estimate a low-quality indexing problem by page type using Google Analytics, Search Console, SEMrush, and Advanced Query Operators

August 5, 2019 By Glenn Gabe

During a recent crawl analysis and audit of a large-scale site that was negatively impacted by Google’s core updates, I surfaced an interesting SEO problem. I found many thinner and low-quality pages that were being canonicalized to other stronger pages, but the pages didn’t contain equivalent content. As soon as I saw that, I had a feeling what I would see next… since it’s something I have witnessed a number of times before.

Many of the lower-quality pages that were being canonicalized were actually being indexed. Google was simply ignoring rel canonical since the pages didn’t contain equivalent content. That can absolutely happen and I documented that in a case study a few years ago. And when that happens on a large-scale site, thousands of lower-quality pages can get indexed (without the site owner even knowing that’s happening).

For example, imagine a problematic page type that might account for 50K, 100K, or more pages indexed. And when Google takes every page indexed into account when evaluating quality, you can have a big problem on your hands.

In the screenshot below, you can see that Google was ignoring the user-declared canonical and selecting the inspected url instead. Not good:

Google ignoring rel canonical.

In addition to just getting indexed, those lower-quality pages might even be ranking in the search results for queries and users could be visiting those pages by mistake. Imagine they are thin, lower-quality pages that can’t meet or exceed user expectations. Or maybe they are ranking instead of the pages you intend to rank for those queries. In a case like that, the problematic pages are the ones winning in the SERPs for some reason, which is leading to a poor user experience. That’s a double whammy SEO-wise.

Quickly Estimating The Severity Of The Problem
After uncovering the problem I mentioned above, I wanted to quickly gauge how bad of a situation my client was facing. To do that, I wanted to estimate the number of problematic pages indexed, including how many were ranking and driving traffic from Google. This would help build a case for handling the issue sooner than later.

Unfortunately, the problematic pages weren’t all in one directory, so I needed to get creative in order to drill into that data (via filtering, regex, etc.). This can be the case when the urls contain certain parameters or patterns of characters, like numerical sequences or some other identifying pattern.

In this post, I’ll cover a process I used for roughly discovering how big of an indexing problem a site had with problematic page types (even when it’s hard to isolate the page type by directory). The process will also reveal how many pages are currently ranking and driving traffic from Google organic. By the end, you’ll have enough data to tell an interesting SEO story, which can help make your case for prioritizing the problem.

A Note About Rel Canonical – IT’S A HINT, NOT A DIRECTIVE
If you’re a site owner that’s mass-canonicalizing lower-quality pages to non-equivalent pages, then this section of my post is extremely important. For example, if you think you have a similar situation to what I mentioned earlier and you’re saying, “we’re fine since we’re handling the lower-quality pages via rel canonical…”, then I’ve got some potentially bad news for you.

As mentioned earlier, rel canonical is just a hint, and not a directive. Google can, and will, ignore rel canonical if the pages don’t contain equivalent content (or extremely similar content). Again, I wrote a case study about that exact situation where Google was simply ignoring rel canonical and indexing many of those pages. Google’s John Mueller has explained this many times as well during webmaster hangout videos and on Twitter.

And if Google is ignoring rel canonical on a large-scale site, then you can absolutely run into a situation where many lower-quality or thin pages are getting indexed. And remember, Google takes all pages indexed into account when evaluating quality on a site. Therefore, don’t just blindly rely on rel canonical. It might not work out well for you.

Walking through an example (based on a real-world situation I just dealt with):
To quickly summarize the situation I surfaced recently, there are tens of thousands of pages being canonicalized to other pages on the site that aren’t equivalent content-wise. Many were being indexed since Google was ignoring rel canonical. Unfortunately, the pages weren’t located in a specific directory, so it was hard to isolate them without getting creative. The urls did contain a pattern, which I’ll cover soon.

My goal was to estimate how many pages were indexed and how many were ranking and driving traffic from Google organic. Remember, just finding pages ranking and driving traffic isn’t enough, since there could be many pages indexed that aren’t ranking in the SERPs. Those are still problematic from an SEO standpoint.

The data would help build a case for prioritizing the situation (so my client could fix the problem sooner than later). It’s a large-scale site with many moving parts, so it’s not like you can just take action without making a strong case. Volatility-wise, the site was impacted by a recent core update and there could be thousands (or more) of lower-quality or thin pages indexed that shouldn’t be.

With that out of the way, let’s dig in.

Gauging The Situation & The Limits Of GSC
To gauge the situation, it’s important to understand how big of a problem there is currently and then form a plan of attack for properly tackling the issue. In order to do this, we’ll need to rely on several tools and methods. If you have a smaller site, you can get away with just using Google Search Console (GSC) and Google Analytics (GA). But for larger-scale sites, you might need to get creative in order to surface the data. I’ll explain more about why in the following sections.

Index Coverage in GSC – The Diet Coke of indexing data.
The index coverage reporting in GSC is awesome and enables you to view a number of important reports directly from Google. You can view errors, warnings, pages indexed, and then a list of reports covering pages that are being excluded from indexing. You can often find glaring issues in those reports based on Google crawling your site.

Based on what we’re trying to surface, you might think you can go directly to the Valid (and Indexed) report, export all pages indexed, then filter by problematic page type, and that would do the trick. Well, if you have a site with less than 1,000 pages indexed, you’re in luck. You can do just that. But if you have more than 1,000 pages, then you’re out of luck.

GSC’s Index Coverage reporting only provides 1,000 urls per report and there’s no API (yet) for bulk exporting data. Needless to say, this is extremely limiting for large-scale sites. To quote Dr. Evil, it’s like the Diet Coke of indexing data. Just one calorie… not thorough enough.

Search Console API & Analytics Edge
Even though exporting urls from the Valid (and Indexed) category of index coverage isn’t sufficient for larger-scale sites, you can still tap into the Search Console API to bulk export Search Analytics data. That will enable you to export all landing pages from organic search over the past 16 months that have impressions or clicks (basically pages that were ranking and driving traffic from Google). That’s a good start since if a page is ranking in Google, it must be indexed. We still want data about pages indexed that aren’t ranking, but again, it’s a start.  

My favorite tool for bulk exporting data from GSC is Analytics Edge. I’ve written about Analytics Edge multiple times and you should definitely check out those posts to get familiar with the Excel plugin. It’s powerful, quick, reasonably priced, and works like a charm.

For our situation, it would be great to find out how many of those problematic pages are gaining impressions and clicks in Google organic. Since the pages are hard to isolate by directory or site section, we can export all landing pages from GSC and then use Excel to slice and dice the data via filtering. You can also use the Analytics Edge Core Add-in to use regex while you’re in the process of exporting data (all in one shot). More about that soon.

Exporting landing page data from GSC via Analytics Edge

A Note About Regex For Slicing And Dicing The Data
Since the pages aren’t in one directory, using regular expressions (regex) is killer here. You can filter using regular expressions that target certain url patterns (like isolating parameters or a sequence of characters). To do this, you can use the Analytics Edge Core Add-in in conjunction with the Search Console connector so you can export the list of urls AND filter by a regular expression all in one macro.

I won’t cover how to do that in this post, since that can be its own post… but I wanted to make sure you understood using regex was possible.

You can also use Data Studio and filter based on regular expressions (if you are exporting GSC data via Data Studio). The core point is that you want to export all pages from GSC that match the problematic page type. That will give you an understanding of any lower-quality pages ranking and driving traffic (that match the page type we are targeting).

Now let’s get more data about landing pages driving traffic from Google organic via Google Analytics.

Google Analytics with Native Regex
In order to find all problematic page types that are driving traffic from Google organic, fire up GA and head to Acquisition, All Traffic, and then Source/Medium. This will list all traffic sources driving traffic to the site in the timeframe you selected. Choose a timeframe that makes sense based on your specific situation. For this example, we’ll select the past six months.

Then click Google/Organic to view all traffic from Google organic search during the timeframe. Now we need to dimension the report by landing page to view all pages receiving traffic from Google organic. Under Primary Dimension, click Other, then Commonly Used, and then select Landing Page. Boom, you will see all landing pages from Google organic.

Dimension by landing page.

But remember, we’re trying to isolate problematic page types. This is where the power of regular expressions comes in handy (as mentioned earlier). Unlike GSC, Google Analytics natively supports regex in the advanced search box, so dust off those regex skills and go to town.

Let’s say all of the problematic page types have two sets of five-digit numbers in the url. They aren’t in a specific directory, but both five-digit sequences do show up in all of the urls for the problematic page type separated by a slash. By entering a regular expression that captures that formula, you can filter your report to return just those pages.

For this example, you could use a regex like:
\d{5}.\d{5}

That will capture any url that contains five digits, any single character after that (like a slash), and then five more digits. (If you want to match only a slash between the two sequences, you can use \d{5}/\d{5} instead, since the dot matches any character.) Now all I need to do is export the urls from GA (or just document the number of urls that were returned). Remember, we’re just trying to estimate how many of those pages are indexed, ranking, and/or driving traffic from Google. The benefit of exporting is that you can send the urls to your dev team so they can further investigate the pages that are getting indexed by mistake.

Filtering landing pages by regex in Google Analytics
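If you export the urls, you can also double-check the filtering outside of GA with a tiny Python script using the same pattern (the csv file name is a placeholder for your export):

    import csv
    import re

    # Five digits, any single character (like a slash), then five more digits.
    pattern = re.compile(r"\d{5}.\d{5}")

    # Assumes a simple csv with urls in the first column.
    with open("landing-pages.csv", newline="") as f:
        urls = [row[0] for row in csv.reader(f)]

    matches = [u for u in urls if pattern.search(u)]
    print(len(matches), "of", len(urls), "urls match the problematic page type")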

Note, you can also use Analytics Edge to bulk export all of your landing pages from Google Analytics (if it’s a large-scale site with tens of thousands of pages, or more, in the report). And again, you can combine the Analytics Edge Core Add-in with the GA connector to filter by regex while you are exporting (all in one shot).
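And if you’d rather script the GA side as well, the Google Analytics Reporting API (v4) supports regex dimension filters natively. Here’s a minimal sketch assuming OAuth is set up; the view ID is a placeholder. It pulls landing pages from google / organic that match our pattern:

    from googleapiclient.discovery import build
    from google.oauth2.credentials import Credentials

    creds = Credentials.from_authorized_user_file(
        "token.json", scopes=["https://www.googleapis.com/auth/analytics.readonly"])
    analytics = build("analyticsreporting", "v4", credentials=creds)

    response = analytics.reports().batchGet(body={
        "reportRequests": [{
            "viewId": "XXXXXXXX",  # placeholder GA view ID
            "dateRanges": [{"startDate": "180daysAgo", "endDate": "today"}],
            "metrics": [{"expression": "ga:sessions"}],
            "dimensions": [{"name": "ga:landingPagePath"}],
            # Restrict the report to Google organic traffic.
            "filtersExpression": "ga:sourceMedium==google / organic",
            # Regex filter to isolate the problematic page type.
            "dimensionFilterClauses": [{
                "filters": [{
                    "dimensionName": "ga:landingPagePath",
                    "operator": "REGEXP",
                    "expressions": [r"\d{5}.\d{5}"],
                }]
            }],
            "pageSize": 10000,
        }]
    }).execute()

    report = response["reports"][0]
    print(report["data"].get("rowCount", 0), "matching landing pages")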

Third-party tools like SEMrush
OK, now our case is taking shape. We have the number of pages ranking and driving traffic from Google organic via GSC and GA. Now let’s layer on even more data.

Third-party search visibility tools provide many of the queries and landing pages for each domain that are ranking in Google organic. It’s another great data source for finding pages indexed (since if the pages are ranking, they must be indexed).

You can also surface problematic pages that are ranking well, which can bolster your case. Imagine a thin and/or lower-quality page ranking at the top of the search results for a query, when another page on your site should be there instead. Examples like this can drive change quickly internally. And you can also see rankings over time to isolate when those pages started ranking, which can be helpful when conveying the situation to your dev team, marketing team, CMO, etc.

For example, here’s a query where several pages from the same site are competing in the SERPs. You would definitely want to know this, especially if some of those urls were lower-quality and shouldn’t be indexed. You can also view the change in position during specific times.

Viewing lower-quality pages ranking in the SERPs via SEMrush

For this example, we’ll use one of my favorite SEO tools, SEMrush. Once you fire up SEMrush, just type in the domain name and head to the Organic Research section. Once there, click the Pages report and you’ll see all of the pages that are ranking in Google from that domain (that SEMrush has picked up).

Note, you can only export a limited set of pages based on your account level unless you purchase a custom report. For example, I can export up to 30K urls per report. That may be sufficient for some sites, while larger-scale sites might need more data. Regardless, you’ll gain additional data to play with, including the number of pages ranking in Google for documentation purposes (which is really what we want at this stage).

You can also filter urls directly in SEMrush to cut down the number of pages to export, but you can’t use regex in the tool itself. Once you export the landing pages, you can slice and dice in Excel or other tools to isolate the problematic page type.
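Note, SEMrush also offers an API (a paid add-on) if you want to pull ranking urls programmatically. Here’s a rough sketch assuming the classic SEMrush analytics API format with the domain_organic report type; double-check their docs for the exact report types and column codes before relying on this:

    import requests

    # Placeholder API key; domain_organic returns organic keywords and ranking urls.
    params = {
        "type": "domain_organic",
        "key": "YOUR_API_KEY",
        "domain": "example.com",
        "database": "us",
        "display_limit": 10000,
        "export_columns": "Ph,Po,Ur",  # phrase, position, url
    }
    resp = requests.get("https://api.semrush.com/", params=params)

    # The API returns semicolon-delimited text; collect the unique ranking urls.
    lines = resp.text.splitlines()[1:]  # skip the header row
    urls = {line.split(";")[2] for line in lines if line.count(";") >= 2}
    print(len(urls), "ranking urls returned for the domain")

From there, you can apply the same regex filtering we used earlier to isolate the problematic page type.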

Query Recipes – Hunting down rough indexing levels via advanced search operators
OK, so far we’ve counted pages that are ranking or receiving traffic from Google (which must be indexed). But that doesn’t tell us the number of pages indexed that aren’t ranking or driving traffic. Remember, Google takes every page indexed into account when evaluating quality, so it’s important to understand that number.

Advanced query operators can be powerful for roughly surfacing the number of pages indexed that match certain criteria. Depending on your situation, you can use a number of advanced search query operators together to gauge the number of pages indexed. For example, you can create a “query recipe” that surfaces specific types of pages that are indexed.

It’s important to understand that site commands are not perfectly accurate… so you are just trying to get a rough number of the pages indexed by page type. I’ve found advanced search queries like this very helpful when digging into an indexing problem.

So, you might combine a site command with an inurl command to surface pages with a certain parameter or character sequences that are indexed. Or maybe you combine that with an intitle command to include only pages with a certain word or phrase in the title. And you can even combine all of that with text in quotes if you know a page type contains a heading or text segment in the page content. You can definitely get creative here.

If you repeat this process to surface more urls that match a problematic page type, then you can get a rough number of pages indexed. You can’t export the data, but you can get a rough number to add to your total. Again, you are building a case. You don’t need every bit of data.

Here are some examples of what you can do with advanced query operators:

Site command + inurl:
site:domain.com inurl:12/2017
site:domain.com inurl:pid=

Site command + inurl + intitle
site:domain.com inurl:a5000 intitle:archive
site:domain.com inurl:tbio intitle:author

Site command + inurl + intitle + text in quotes
site:domain.com inurl:c700 intitle:archive “celebrity news”

Using advanced query operators enables you to gain a rough estimate of the number of pages indexed. You can jot down the number of pages returned for each query as you run multiple searches. Note, you might need to run several advanced queries to hunt down problematic page types across a site. It can be a bit time-consuming, and you might get flagged by Google a few times (by being put in a “search timeout”), but the queries can be helpful:

Using advanced query operators to hunt down low-quality pages that are indexed.

Summary – Using The Data To Build Your Case
We started by surfacing a problematic page type that was supposed to be canonicalized to other pages, but was being indexed instead (since the pages didn’t contain equivalent content). Google just wasn’t taking the hint. So, we decided to hunt down that page type to estimate how many of those urls were indexed to make a case for prioritizing the problem.

Between GSC, GA, SEMrush, and advanced query operators, we can roughly understand the number of pages that are indexed, while also knowing if some are ranking well in Google and driving traffic. In the real-world case I just worked on, we found over 35K pages that were lower-quality and indexed. Now my client is addressing the situation.

By collecting the necessary data (even if some of it is rough), you can tell a compelling story about how a certain page type could be impacting a site quality-wise. Then it’s important to address that situation correctly over the long-term.

I’m sure there are several other ways and tools to help with understanding an indexing problem, but this process has worked well for me (especially when you want to quickly estimate the numbers). So, if you ever run into a similar situation, I hope you find this process helpful. Remember, rel canonical is just a hint… and Google can make its own decisions. And that can lead to some interesting situations SEO-wise. It’s important to keep that in mind.

GG

Filed Under: google, google-analytics, seo, tools, web-analytics

Beyond The 1K Limit – How To Bulk Export Data From GSC By Search Appearance Via Analytics Edge (including How-to, Q&A, and FAQ)

June 11, 2019 By Glenn Gabe Leave a Comment

Google has been releasing new features in the search results more and more recently that can have a big impact on SERP treatment, click-through rate, and potentially traffic. Three of those features are part of Google’s “best answer carousels” and include Q&A, How-to, and FAQ snippets. Q&A has been live for a while already, while How-to and FAQ were just rolled out during Google I/O. Note, you can read my post about How-to snippets to learn more about how they work, what they look like, etc.

I have several clients heavily using these formats, so I’ve been analyzing their performance via Google Search Console (GSC) recently — via the new enhancement reports and the Performance reporting. For example, once you start marking up pages using Q&A, How-to, or FAQ markup, you will see new reports show up under the Enhancements tab in GSC. And those reports can be very helpful for understanding errors, warnings, and valid pages.

How To Analyze Performance In GSC By Search Feature:
From a clicks, impressions, and CTR standpoint, you can check the Performance report to view your data over time. Note, if you have Discover data, then there will be two reports under Performance. The first will say Search Results and the second will be titled Discover.

Once in the Performance report (or the “Performance in search results” report), the Search Appearance tab enables you to drill into data by specific feature. You can see that the site from earlier has both How-to and Q&A results. If you click each category title, then you will be isolating that search feature in the reporting (i.e. the reports will be filtered by that search feature). So, you can view queries, landing pages, etc. for just Q&A or How-to results. This applies to AMP as well.

The 1K Row Limit In GSC – The Bane Of A Site Owner’s Existence
Filtering the reporting by search feature is powerful, but remember, GSC only provides one thousand rows per report in the web UI, and you can only export those one thousand rows. For smaller sites, that should be fine. But for larger-scale sites with thousands, tens of thousands, or more listings, the reporting can be extremely limiting.

For situations like that, what’s a site owner to do??

Analytics Edge To The Rescue Again – Exporting Beyond 1K Results By Search Feature:
I’ve written several posts about Analytics Edge before, and it’s still my go-to tool for exporting data in bulk from GSC. It’s a powerful Excel plugin that enables you to quickly and efficiently export bulk data from GSC, GA, and more.

Below, I’ll take you step-by-step through exporting your data by Search Appearance from GSC. If you’re a large-scale site that’s using Q&A, How-to, FAQ, and even AMP, then you’re going to dig this. Let’s jump in.

How to use Analytics Edge to bulk export data by Search Appearance:
Note, this will be a two-phase approach. The first run will enable us to pull all Search Appearance codes for the specific property in GSC. Then the second run will enable us to pull all data by that Search Appearance code.

Phase One:

  1. Download and install the Analytics Edge free or core add-in. There’s a free trial for the core add-in if you want to simply test it out. But the free add-in will work as well (just with less functionality). After installing the add-in, you should register it in Excel.
  2. Next, install the Search Console connector by clicking the License/Update button in the menu. You can watch this short video to learn how to install connectors. You can click the Google Search row to pull up the connector details (where you can choose to install that connector).
  3. Once you install Analytics Edge and the Search Console connector, access the options in the Analytics Edge menu at the top of Excel. Click the Google Search drop-down and select Accounts. This is where you will connect Analytics Edge with the Google account(s) you want to download data from. Go through the process of connecting the Google account you want to work with. You can also make one account the default, which will save you time in the future.
  4. Once you connect your account, click Google Search in the Connectors section, and then Search Analytics.
  5. Name your Macro and click OK.
  6. Select an account and then a property from GSC.
  7. We will use a two-phase approach for exporting data by Search Appearance. First, we are going to view the various options we have under Search Appearance (there will be codes that show up representing each SERP feature available to a property). We will use these codes during our second run to pull all data for each specific search feature (like How-to, Q&A, FAQ, AMP, etc.)
  8. Under the Fields tab, select searchAppearance, which will move that option to the Dimensions window.
  9. For the Dates tab, you can leave “Last 3 Months” active (which is the default).
  10. Leave everything else the same and click “Finish”.
  11. Analytics Edge will return all of the possible Search Appearance codes for the site for the time period you selected. For example, in the screenshot below, there were impressions and/or clicks for AMP (article and non-rich results), Q&A, How-To, and others for the property I selected.
  12. Copy the codes from the searchAppearance column to a text file so you can reference them in phase two of our tutorial. You will need these codes to export all data by that specific search feature.

Phase Two:

  1. Now we are going to use the searchAppearance codes to export data in bulk for a specific search feature. Click Google Search again, and then Search Analytics. Choose an account and a property again. When you get to the options screen, select the dimensions you want to export (in the Fields tab). For this example, let’s select query (to see the queries yielding How-to snippets in the search results).
  2. Next, go to the Filters tab and find the Appearance field. In that field, enter TPF_HOWTO, which is the code for How-to snippets. If you want to export data for another search feature, just use that code instead.
     Exporting how-to snippets from GSC.
  3. Next, select the dates you want to run the report for. For this example, I’ll select “Last 28 days”.
  4. Then under Sort/Count, select clicks and then descending in the “sort by” dropdown. This will sort the table by the queries with the most clicks over the past 28 days (that yield How-to snippets).
  5. Then click “Finish”.
  6. Analytics Edge will run and export all of the queries yielding How-to snippets. This can take a bit of time (from a few seconds to a minute or longer) depending on how large the site is and how much data needs to be exported. Note, just a sample of data will be presented in memory in the worksheet (and highlighted in green). You need to “write to worksheet” to show all of the data.
  7. To do that, click File and then “Write Worksheet”. Name the worksheet and click OK. You will now see a new worksheet containing all of your data. For this example, I see 26K+ queries that have yielded How-to snippets over the past 28 days. Yep, over 26K!
  8. Congratulations! You just exported search feature data from GSC and blew by the 1K row limit! And if you would rather script this without Excel, see the API sketch below.
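As promised in the last step, here’s a minimal sketch of the same two-phase approach in Python via the Search Analytics API (assuming OAuth credentials are set up; the property url and dates are placeholders). Note, the API won’t let you combine searchAppearance with other dimensions in a single request, which is exactly why two phases are needed:

    from googleapiclient.discovery import build
    from google.oauth2.credentials import Credentials

    creds = Credentials.from_authorized_user_file(
        "token.json", scopes=["https://www.googleapis.com/auth/webmasters.readonly"])
    service = build("webmasters", "v3", credentials=creds)
    site = "https://www.example.com/"  # placeholder property

    # Phase one: surface the searchAppearance codes available to the property.
    codes = service.searchanalytics().query(siteUrl=site, body={
        "startDate": "2019-03-11",
        "endDate": "2019-06-11",
        "dimensions": ["searchAppearance"],
    }).execute()
    for row in codes.get("rows", []):
        print(row["keys"][0], "clicks:", row["clicks"], "impressions:", row["impressions"])

    # Phase two: pull all queries yielding How-to snippets (code TPF_HOWTO).
    # Rows come back sorted by clicks descending by default.
    resp = service.searchanalytics().query(siteUrl=site, body={
        "startDate": "2019-05-14",  # last 28 days for this example
        "endDate": "2019-06-11",
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "searchAppearance",
                "operator": "equals",
                "expression": "TPF_HOWTO",
            }]
        }],
        "rowLimit": 25000,
    }).execute()
    print(len(resp.get("rows", [])), "queries yielding How-to snippets")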

Tracking, Using, and Learning From The Data:
Once you export your data by Search Appearance in bulk, you will have full access to all of the queries yielding Q&A, How-To, FAQ snippets, AMP, and more. You can track their position, double check the SERPs to understand the SERP treatment for each feature, understand the click-through rate for each snippet, and more.

For example, you might find that “list treatment” for How-tos is yielding a higher click-through rate than the carousel treatment. Or you might find that a certain How-to has a featured snippet in addition to the How-to snippet (a How-to/featured snippet combo). And then you can check metrics based on that situation. You get the picture!

You can see an example of a How-to/featured snippet combo below:

A How-to/featured snippet combo.

My recommendation is to export data by each search feature for the past 28 days and start digging into the data. Then regularly export the data (weekly, for example) to understand changes over time: changes in metrics, SERP treatment, and more.

Summary – Free Yourself From the 1K Row Limit In GSC By Exporting SERP Feature Data Via Analytics Edge
Now that more and more features are hitting the SERPs, using a tool like Analytics Edge can help you export all of your data, versus just one thousand rows per report. And when you export all of the queries and landing pages per SERP feature, you can glean more insights from the data. If you are using AMP, How-to, Q&A, or FAQs, then I highly recommend exporting your data via a tool like Analytics Edge. I think you’ll dig it.

GG

Filed Under: google, seo, tools, web-analytics
