Archives for November 2013

How To Track Unconfirmed Panda Updates in a Quasi Real-time World

November 21, 2013 By Glenn Gabe


In June I wrote an important blog post, based on the amount of Panda work I do with companies.  The post was about the maturing of Panda, and how Google planned to roll out the algorithm update once per month, with each rollout taking up to ten days to complete.  Google also explained that it would not confirm future Panda updates.  After hearing both statements, I couldn’t help but think the new Panda could lead to serious confusion for many webmasters.  And I was right.

Let’s face it, Panda was already confusing enough for the average business owner.  Whenever I speak with Panda victims (which is often), I joke that Panda should have been titled “Octopus” instead.  That’s because there are many tentacles to Panda.  There are a number of reasons a site could get hit, and a deep analysis is often needed to determine what happened, why, and how to rectify the situation.  Sure, Panda focuses on “content quality”, but that could mean several things based on the nature of the website.

For example, I’ve seen sites get hit for affiliate content, doorway pages, scraped content, heavy cross-linking of company-owned domains, duplicate content, thin content with over-optimized titles and metadata, and more.  And then you have technical problems that can cause content problems, like code glitches that replicate content across large sites.  Those technical problems could impact thousands of pages (or more), and that’s one of the reasons I start every Panda engagement with a deep technical SEO audit.  What I find often helps me track down Panda problems, while having the added benefit of identifying other technical problems that can be fixed (and sometimes quickly).

Webmaster Confusion – My Prediction About The New Panda Was Unfortunately Spot-on
Believe me, I don’t want to pat myself on the back about my prediction, because I wish I were wrong.  But I have received many emails from webmasters since June that signal serious confusion about the new algorithm update.  And I totally get it.

An example of a Panda hit:
[Screenshot: A typical Panda hit]

For example, if you wake up one morning and see a big drop in Google organic traffic, but have no idea why, then you might start researching the problem.  And when Google doesn’t confirm a major update like Panda, stress and tension increase.  That leads you to various blog posts about Google traffic drops, which only cause more confusion.  Then you grab your SEO, your marketing team, and developers, and hit a war room in your office.  Diagrams are drawn on large whiteboards, finger pointing begins, and before you know it, you’re in the Panda Zone, a state of extreme volatility that can drive even the toughest CEO mad.  I know this because I have seen it first-hand many times with companies hit by Panda, both large and small.


Have There Been Additional Panda Updates As Expected?
Yes, there have been.  They’re just not easy to spot unless you have access to a lot of data.  It’s much easier to see the pattern of Panda updates when you are helping a number of companies that were impacted by our cute, black and white friend: if those companies have been working hard on rectifying their Panda problems, some may recover during the new, stealthy Panda updates.  Fortunately, I’m in exactly that position, so I’ve been able to catch a glimpse of the cloaked Panda.

[Screenshot: The last confirmed Panda update in July 2013]

The last confirmed Panda update was July 18, 2013, even though that came after Google said it wouldn’t confirm any more Panda updates.  Go figure.  And by the way, I have data showing the update began closer to July 15.  Regardless, that was the last confirmed date that Panda rolled out.  But we know that wasn’t the last Panda update, as the algorithm graduated to quasi real-time.  I say “quasi” real-time because some people incorrectly believe that Panda is continually running as part of the real-time algorithm.  That’s not true, and a recent webmaster hangout video addressed this directly.  Check 22:58 through 25:20 in the video to watch John Mueller from Google explain how Panda is currently handled.

In the video, John explains that Panda is not real-time.  Yes, read that again.  It is not real-time, but it has progressed far enough that Google trusts the algorithm more.  That means the typical, heavier testing prior to rollout isn’t necessary the way it once was.  Therefore, Google feels comfortable unleashing Panda on a more regular basis (once per month), and Matt Cutts explained that it could take ten days to fully roll out.

This is important to understand, since you cannot be hit by Panda at just any point during the month.  But you can be impacted each month (positively or negatively) during the ten-day rollout.  Based on what I have seen, Panda seems to roll out in the second half of each month.  July was closer to the middle of the month, while the August update was closer to the end of the month.  Here’s a quick timeline, based on Panda clients I have been helping.

Recent Undocumented Panda Sightings:

Cloaked Panda 1 – Monday, August 26
[Screenshot: Unconfirmed Panda update in August 2013]

Cloaked Panda 2 – Monday, September 16
[Screenshot: Unconfirmed Panda update in September 2013]

Those are two dates on which I saw recoveries across several Panda clients.  Also, if we start with the confirmed July update (which I saw beginning on the 15th), all three updates fell during the second half of the month.  That could be random, but it might not be.

When trying to identify impact from the new Panda on your own website, remember the details of our new, stealthy friend.  If Google is telling us the truth, it could take ten days for sites to see the impact from Panda.  So if you took a major hit near one of those dates, you very well could have been hit by Panda.  And again, someone reviewing your site through the lens of Panda would be able to confirm whether any content factors were at play (like the ones I mentioned earlier).  That’s why a thorough Panda audit is so important.

Also, Panda hits are typically very apparent.  They aren’t usually slight increases or decreases.  Remember, Google is continually pushing smaller updates to its real-time algorithm, so it’s natural to see slight increases or decreases over time.  But significant changes on a specific date could signal a major algorithm update like Panda or Penguin.

Tips for Tracking Cloaked Panda Updates:
Now, you might be reading this post and saying, “Thanks for the dates, Glenn, but how can this help me in the future?”  Great question, and there’s no easy answer.  Remember, the new Panda is hard to spot, and the Panda gatekeepers (Google) won’t tip you off about when it’s released.  But there are some things you can do to monitor the situation, and to hopefully understand when Panda rolls out.  I have provided some tips below.

1. Know Your Dates
First and foremost, identify the exact date of a drop or boost in traffic so you can tie that date to potential algorithm updates.  This goes for Panda and other algorithm updates like Penguin.  It’s critically important to know which algorithm update hit you, so you can target the correct path to recovery.  For example, Panda targets content quality issues, while Penguin targets unnatural inbound links.  And then there was Phantom, which also targeted low-quality content.

Moz maintains an algorithm change history, which can be very helpful for webmasters.  But it’s hard for Moz to add the stealthy Panda updates, since Google isn’t confirming them.  Just keep that in mind while reviewing the timeline.

[Screenshot: Moz algorithm update history]
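And if you’d rather script the date-matching than eyeball charts, it only takes a few lines.  Here’s a minimal sketch in Python, assuming you’ve exported daily Google organic visits to a CSV with “date” and “visits” columns (the file name, the 25% threshold, and the helper functions are just placeholders; the update dates are the ones covered in this post):

# flag_drops.py - tie big traffic swings to known (or suspected) update dates.
import csv
from datetime import datetime, timedelta

# Known/suspected dates: the July update as I saw it begin, plus the
# two cloaked Panda dates discussed above.
UPDATE_DATES = ["2013-07-15", "2013-08-26", "2013-09-16"]

def load_daily_visits(path):
    with open(path, newline="") as f:
        rows = [(datetime.strptime(r["date"], "%Y-%m-%d").date(), int(r["visits"]))
                for r in csv.DictReader(f)]
    return sorted(rows)

def flag_big_changes(rows, threshold=0.25):
    # Flag days that moved more than `threshold` vs. the prior week's average.
    flagged = []
    for i in range(7, len(rows)):
        day, visits = rows[i]
        baseline = sum(v for _, v in rows[i - 7:i]) / 7.0
        if baseline and abs(visits - baseline) / baseline >= threshold:
            flagged.append((day, visits, baseline))
    return flagged

def match_updates(flagged, window_days=10):
    # Panda can take up to ten days to fully roll out, so match within a window.
    updates = [datetime.strptime(d, "%Y-%m-%d").date() for d in UPDATE_DATES]
    for day, visits, baseline in flagged:
        near = [str(u) for u in updates
                if timedelta(0) <= day - u <= timedelta(days=window_days)]
        if near:
            print(f"{day}: {visits} visits vs ~{baseline:.0f}/day baseline -> near update(s): {near}")

if __name__ == "__main__":
    match_updates(flag_big_changes(load_daily_visits("google_organic_daily.csv")))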

2. Visit Webmaster Forums
Monitor the Google Webmaster Forums to see if others experienced similar effects on the same date.  For example, when Penguin hits, you can see many webmasters describing the same situation in the forums.  That’s a clear sign the algorithm update was indeed rolled out.  Now, Google is continually updating its algorithm, and sites can be impacted throughout the month or year.  So you must match both the date and the type of update.  It’s not foolproof, but it might help you track down the Loch Ness Panda.

[Screenshot: Google Webmaster Forums]

3. Connect With SEOs Well-Versed in Panda
Follow and engage with SEOs who focus on algorithm updates and have access to a lot of data.  And keep an eye on the major industry blogs and websites.  SEOs that are well-versed in algorithm work have an opportunity to analyze various updates across industries and geographic locations.  They can often see changes quickly, and confirm those changes with data from similar websites and situations.

4. Take Action with an SEO Audit
Have an SEO audit conducted through the lens of a specific algorithm update.  The audit can help you confirm content quality problems that Panda could have targeted.  I’ve said it a thousand times before, but a thorough technical SEO audit is worth its weight in gold.  Not only can it help you understand the problems impacting your site Panda-wise, but you will undoubtedly find other issues that can be fixed relatively quickly.

That way, you can better identify what happened with your site, you’ll have a roadmap for Panda recovery (if applicable), and you’ll clean up several other technical problems that could also be causing SEO issues.  During my career, I’ve seen many webmasters spinning their wheels on the wrong SEO checklist.  They spent months trying to fix the wrong items, only to see no change at all in their Google organic trending.  Don’t let this happen to you.

5. Check Algorithm Tracking Tools
Monitor the various algorithm weather report tools like MozCast and Algoroo, which can help you identify SERP volatility over time.  The tools by themselves won’t fix your problems, but they can help you identify when the new Panda rolls out (or other major algorithm updates).

[Screenshot: MozCast algorithm weather report]

It’s Only Going to Get Worse and More Confusing
I wish I could tell you that the situation is going to get better.  But it isn’t.  Panda has already gone quasi real-time, but other algorithm updates will follow.  I do a lot of Penguin work, and once Google trusts that algorithm more, it too will launch monthly and without confirmation.  And then we’ll have two serious algorithm updates running monthly with no confirmation.

And who knows, maybe Panda will actually be part of the real-time algorithm at that point.  Think about that for a minute… two major algo updates running throughout the month, neither of them confirmed, and webmasters losing traffic overnight.  Yes, chaos will ensue.  That’s even more reason for business owners to fix their current situation sooner rather than later.

By the way, if you take a step back and analyze what Google is doing with Panda, Penguin, Pirate, Above the Fold, etc., it’s incredibly powerful.  Google is crafting external algorithms targeting various aspects of webspam and then injecting them into the real-time algorithm.  That’s an incredibly scalable approach and should scare the heck out of webmasters that are knowingly breaking the rules.

Summary – Tracking Cloaked Pandas Can Be Done
Just because Google hasn’t confirmed recent Panda updates doesn’t mean they aren’t occurring.  I have seen what look to be several Panda updates roll out since July.  Unfortunately, you need to be analyzing the right data (and enough data) in order to see the new, cloaked Panda.  The tips I provided above can help you better track Panda updates, even when Google won’t confirm each release.  And knowing that a major algorithm update like Panda has rolled out is critically important to understanding what’s impacting your website.  That’s the only way to form a solid recovery plan.

So, from one Panda tracker to another, may the algorithmic wind always be at your back, keep your eyes peeled, stay caffeinated, and monitor the wounded.  Good luck.

GG


Filed Under: algorithm-updates, google, seo

A Double Penguin Recovery (via 2.1 Update) – But Does It Reveal A Penguin Glitch?

November 12, 2013 By Glenn Gabe

Summary: I analyzed the first double Penguin recovery I have come across during my research (after the Penguin 2.1 update). But what I found could reveal a glitch in the Penguin algorithm. And that glitch could be providing a false sense of security to some business owners.


If you have followed my blog and Search Engine Watch column, then you know I do a lot of Penguin work.  I started heavily analyzing Penguin 1.0 on April 24, 2012, and have continued to analyze subsequent Penguin updates to learn more about our icy friend.  I’ve had the opportunity to help many companies deal with Penguin hits, and have helped a number recover (and you can read more about those recoveries via several case studies I have written).  It’s been fascinating for sure.  But it just got even more interesting, based on analyzing a site that recovered during Penguin 2.1.   Read on.

Penguin 2.1 rolled out on October 4, 2013, and based on my analysis, it was bigger and badder than Penguin 2.0.   Matt Cutts confirmed that was the case during Pubcon (which was great to hear, since it backed up what I was seeing).  But as I documented in one of my recent Search Engine Watch columns, Penguin 2.1 wasn’t all bad.  There were recoveries, although they often get overshadowed by the carnage.  And one particular recovery during 2.1 caught my attention and deserved further analysis. That’s what I’ll cover in this post.

Ladies and Gentlemen, Introducing The Double Penguin Recovery
I believe it’s important to present the good and the bad when discussing Penguin updates, since there are still some people in the industry who don’t believe you can recover.  But you definitely can recover, so it’s important to document cases where companies bounce back after completing hard Penguin recovery work.

An example of a Penguin recovery:
[Screenshot: Example of a Penguin recovery]

Now, there was one thing I hadn’t seen during my past research, and that’s an example of a company recovering twice from Penguin.  I’m not referring to a company that recovers once, gets hit again, and recovers a second time.  Instead, I’m referring to a company that initially recovers from Penguin, only to gain even more during a subsequent Penguin update.

Now that would be an interesting case to discuss… and that’s exactly what I saw during Penguin 2.1.  Interested?  I was too.  :)

Double Penguin Recoveries Can Happen
After Penguin 2.1, I analyzed a website that experienced its second Penguin recovery.  The site was first hit by Penguin 1.0 on April 24, 2012, and recovered in the fall of 2012.  And now, with 2.1 on 10/4/13, the site experienced another surge in impressions and clicks from Google Organic.

The second Penguin recovery on October 4, 2013:
[Screenshot: Second Penguin recovery during the 2.1 update]

I’ve done a boatload of Penguin work since 2012, and I had never seen a double Penguin recovery.  So as you can guess, I nearly fell out of my seat when I saw the distinct bump on October 4, 2013.

Penguin Recoveries Lead To Penguin Questions
Based on the second recovery, the big questions for me (and I’m sure for you as well) revolve around the reason(s) for the double recovery.  Why did this specific site see another surge from Penguin when it already had in the past (after hard recovery work)?  Were there any specific factors that could have led to the second recovery?  For example, did the site build more natural links, add high-quality content, disavow more links, etc.?  Or was this just an anomaly?  And most importantly, did Penguin help this website a second time, when it never should have?  In other words, was this a false negative (with the added benefit of a recovery)?  All good questions, and I hope to answer several of them below.

The Lack of Penguin Collateral Damage
I’ve always said that I’ve never seen collateral damage with Penguin.  Every site I’ve analyzed that was hit by Penguin (now 312) should have been hit.  I have yet to see any false positives.  But this double recovery raises another angle with Penguin.  Could a site that shouldn’t see a recovery actually recover?  And again, this site already recovered during a previous Penguin update.  Could this second recovery be a glitch in Penguin, or were there other factors at play?

History with Penguin
Let’s begin with a quick Penguin history for the website at hand.  It’s an ecommerce website that was devastated by Penguin 1.0 on April 24, 2012.   The site lost close to 80% of its Google Organic traffic overnight.

Initial Penguin hit on April 24, 2012:
[Screenshot: Initial Penguin hit on April 24, 2012]

The site had built thousands of exact-match and rich anchor text links over the years from spammy directories.  The link profile was riddled with spam.  After the Penguin hit on 4/24/12, their staff worked hard on removing as many links as they could, contacted many directory owners (with some success), and then disavowed what they could not manually remove.  Yes, the disavow tool was extremely helpful for this situation.

The site recovered relatively quickly from Penguin (within two months of finishing the recovery work), returning to about 40% of its original Google organic traffic.  That made sense, since the site had lost a majority of the links that were once helping it rank for competitive keywords.  With the unnatural links removed, the site would not (and did not) recover to full power.  That’s because it never should have ranked highly for many of those keywords in the first place.  And this is where the site remained until Penguin 2.1.

Initial Penguin recovery in 2012:
[Screenshot: Initial Penguin recovery in 2012]

And Along Came Penguin 2.1
After Penguin 2.1 hit, the site experienced an immediate surge in impressions and traffic from Google Organic (and this was crystal clear to see in Google Webmaster Tools).  I’m not sure anyone was expecting a second Penguin recovery, but there it was…  as clear as day.

Impressions were up over 50% and clicks were up close to 60% (comparing the timeframe after Penguin 2.1 to the timeframe prior).  Checking Google Webmaster Tools revealed extremely competitive keywords that had once been targeted by Penguin now gaining in average position, impressions, and clicks.  Certain keywords jumped 10-15 spots in average position.  Some that were buried in Google were now on page one or page two.  Yes, Penguin 2.1 was providing a second shot in the arm for the site in question.

Impressions and clicks increased greatly after the Penguin 2.1 recovery:
[Screenshot: Increase in impressions and clicks after the Penguin 2.1 recovery]

It was amazing to analyze, but I couldn’t stop several key questions from overpowering my brain.  What changed recently (or over time) that sent the right signals to Google?  Why would the site recover a second time from Penguin?  And could other websites learn from this in order to gain the infamous double Penguin recovery?  I dug into the site to learn more.

What Changed, Why a Second Recovery?
What you’re about to hear may shock you.  It sure shocked me.  Let’s start with what might be logical.  Since Penguin is hyper-focused on links, I reviewed the site’s latest links from across Google Webmaster Tools, Majestic SEO, and Open Site Explorer.

If the site experienced a second Penguin recovery, then I would assume that new links were built (and that they were a heck of a lot better than what got the site initially hit by Penguin).  Google Webmaster Tools revealed a doubling of inbound links as compared to the timeframe when the site first got hammered by Penguin (April 2012).  Majestic SEO and Open Site Explorer did not show as much movement, but did show an increase.

I exported all of the new(er) links and crawled them to double-check anchor text, nofollow status, 404s, etc.  I paid special attention to the links from Google Webmaster Tools, since it showed the greatest number of new links since the first Penguin recovery.  It’s also worth noting that Majestic showed a distinct increase in backlinks in early 2013 (both in the raw number of links being created and in the number of referring domains).

Backlinks history reveals more unnatural links built in early 2013:
[Screenshot: New unnatural links built in early 2013]
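As a side note, that kind of link re-crawl can be scripted.  Here’s a rough sketch in Python, assuming a plain text file with one linking-page URL per line (exported from your link reports) and “yourdomain.com” as a stand-in for the site being audited; the file name and helper functions are just placeholders:

# check_backlinks.py - re-crawl exported backlinks to check status, anchors, and nofollow.
import requests                     # pip install requests beautifulsoup4
from bs4 import BeautifulSoup

TARGET = "yourdomain.com"           # the site whose backlinks you're auditing

def audit_link(page_url):
    try:
        resp = requests.get(page_url, timeout=10)
    except requests.RequestException as e:
        return {"url": page_url, "error": str(e)}
    if resp.status_code != 200:     # 404s, dead directories, etc.
        return {"url": page_url, "status": resp.status_code}
    soup = BeautifulSoup(resp.text, "html.parser")
    findings = []
    for a in soup.find_all("a", href=True):
        if TARGET in a["href"]:
            rel = a.get("rel") or []
            findings.append({
                "anchor": a.get_text(strip=True),   # exact-match anchors are the red flag
                "nofollow": "nofollow" in rel,
            })
    return {"url": page_url, "status": 200, "links_to_target": findings}

if __name__ == "__main__":
    with open("backlinks.txt") as f:
        for line in f:
            url = line.strip()
            if url:
                print(audit_link(url))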

Surely the natural, stronger linkbuilding was the reason the site experienced a double Penguin recovery, right?  Not so fast, and I’ll explain more about this next.  It seems Penguin might be glitchy.

More Unnatural Links = Double Penguin Recovery?  Crazy, But True
Believe me, I was really hoping to find stronger, natural links when checking the site’s latest link reporting.  But that wasn’t the case.  I found more spammy links from similar sources that got the site initially hit by Penguin in 2012.  Spammy directories were the core problem then, and they are the core problem now.  Actually, I could barely find any natural links in the new batch I checked.  And that was disturbing.

With all of my Penguin work (having now analyzed 312 websites hit by Penguin), I have yet to come across a false positive (a site that was hit that shouldn’t have been hit).  But how about a site recovering that shouldn’t recover?  That’s exactly what this case looks like.  The site built more spammy links after initially recovering from Penguin, only to experience a surge in traffic during Penguin 2.1.  That’s two Penguin recoveries, and again, it’s the first time I have seen this.

The Danger of Heavily Relying on the Disavow Tool
To clarify, I don’t know if the site’s owner or marketing staff meant to build the newer spammy links.  Unnatural links tend to have an uncanny way of replicating across other low-quality sites.  And that’s especially the case with directories and/or article marketing.  So it’s possible that the older, spammy links found their way to other directories.

When you disavow links, they still remain (and can replicate):
[Screenshot: The danger of relying on the disavow tool]

This is why I always recommend removing as many links as possible versus relying on the disavow tool for all of them.  If you remove them, they are gone.  If you disavow them, they remain, and can find their way to other spammy sites.
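For reference, a disavow file is simply a plain text file uploaded via Google’s disavow links tool: one URL or domain directive per line, with lines starting with # treated as comments.  The entries below are made up for illustration:

# Spammy directories we could not get removed manually
domain:spammy-directory-example.com
domain:another-link-directory-example.net
# A single page we could not get taken down
http://article-farm-example.com/page-with-unnatural-link.html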

What Does This Tell Us About Penguin?
To be honest, I’m shocked that Penguin was so wrong.  The initial Penguin recovery in 2012 was spot on, as the company worked hard to recover.  They manually removed a significant percentage of unnatural links, and disavowed the rest.  Then they recovered.  But now the site has experienced a second recovery, based on building more unnatural links (from sources very similar to the original unnatural links that got it hit in 2012).

So, is this a case of Penguin not having enough data on the new directories yet?  Also, did the company really test the unnatural link waters again by building more spammy links?  As mentioned above, I’ve seen spammy links replicate themselves across low-quality sites before, and that’s especially the case with directories and/or article marketing.  That very well could have happened, although it does look like the links were built during a specific timeframe (early 2013).  It’s hard to say exactly what happened.

Also, will the company eventually get hit by Penguin a second time?  It’s hard to say, but my guess is the surge in traffic from Penguin 2.1 will be short-lived.  I cannot believe that the newer, unnatural links will go undetected by our icy friend.  I’m confident the site will get hit again (unless it moves quickly now to remove and/or disavow the latest unnatural links).  Unfortunately, the site is teed up to get hit by Penguin.

Summary – Penguin 2.1 Was Wrong (for Now)
This was a fascinating case to analyze.  I had never seen a double Penguin recovery, and I have analyzed hundreds of sites hit by Penguin since April of 2012.  The website’s second recovery looks to be a mistake, as Penguin must have judged the new links as “natural” and “strong”.  But in reality, the links were the same old spammy ones that got the site hit from the start.  They were just on different websites.

But as I said earlier, the site is now teed up to get hit by Penguin again. And if that happens, they will lose the power and traffic they have built up since recovering from the first Penguin attack.  If that’s the case, the site will have done a 360 from Penguin attack to Penguin recovery to second Penguin recovery and back to Penguin attack.  And that’s never a good place to be.

GG

Filed Under: algorithm-updates, google, seo

How Bing Pre-Renders Webpages in IE11 and How Marketers Can Use The Pre-Render Tag for CRO Today

November 2, 2013 By Glenn Gabe


Bing recently announced it is using IE11’s pre-render tag to enhance the user experience on Bing.com.   Pre-rendering enables Bing to automatically download the webpage for the first search result before you visit that page.  Note, this only happens for “popular searches”, and I’ll cover more about that below.  Pre-rendering via Bing means the destination page will load almost instantaneously when you click through the first search result.  Bing explained that over half of users click the first result, and using IE11’s pre-render tag can enhance the user experience by loading the destination page in the background, after the search is conducted.

A Quick Pre-Render Example:
If I search Bing for “Samsung” in IE11, the first result is the U.S. Samsung website.  When clicking through to the website, the page loads immediately, without any delay (including all webpage assets, like images and scripts).  Checking the Bing search results page reveals that Bing was using pre-render for the Samsung homepage.  You can see this via the source code in the screenshots below.

[Screenshot: Bing search results and sitelinks for Samsung]

Checking the source code reveals Bing is pre-rendering the U.S. Samsung homepage:

[Screenshot: Bing source code with the pre-render tag]

Yes, Google Has Been Doing This With “Instant Pages”
In case you were wondering, Google has been accomplishing this with “Instant Pages” in Chrome since 2011, but it’s good to see Bing roll out pre-rendering as well.  My guess is you’ve experienced the power of pre-rendering without even realizing it.  When Bing and Google have high confidence that a user will click the first search result, they will use the pre-render tag to load the first result page in the background.  Then upon clicking through, the page instantaneously displays.  That means no waiting for large photos or graphics to load, scripts, etc.  The page is just there.

Testing Bing’s Pre-Render in IE11
Once Bing rolled out pre-render via IE11, I began to test it across my systems.  When it kicked in, the results were impressive.  The first result page loaded as soon as I clicked through.  I was off and running on the page immediately.

But when did Bing actually pre-render the page and why did some search results not spark Bing to pre-render content?   Good questions, and I dug into the search results to find some answers.

Identifying Pre-rendering with Bing and IE11
During my testing, I began to notice a trend: pre-rendering was only happening when sitelinks were provided for a given search result.  So if I searched for “apple ipad”, which Bing does not provide sitelinks for, then pre-rendering was not enabled.  But if I searched for just “Apple”, and Bing did provide sitelinks, then pre-render was enabled.  Similarly, when I searched for “Acura”, sitelinks were provided for the branded search, and the first result was pre-rendered.

A Bing search for “Acura” yields sitelinks:
[Screenshot: Bing search results and sitelinks for Acura]

Checking the source code reveals Bing is pre-rendering the first search result for “Acura”:
[Screenshot: Bing source code with the pre-render tag for Acura]

A Bing search for “Derek Jeter” does not yield sitelinks:
[Screenshot: Bing search results for Derek Jeter, without sitelinks]

Checking the source code reveals Bing is not pre-rendering the first search result for “Derek Jeter”:
[Screenshot: Bing source code for Derek Jeter, without the pre-render tag]

So, Bing clearly needed high confidence that I would click through the first listing in order to use pre-render.  In addition, there was a high correlation between sitelinks and the use of the pre-render tag.  For example, “how to change oil” did not yield pre-rendering, “Derek Jeter” did not trigger pre-rendering, and “weather” did not trigger pre-rendering.  But “Firefox” did trigger sitelinks and the use of pre-render.

How Can You Tell If Pre-Rendering is Taking Place?
You need an eagle eye like me to know.  Just kidding.  :)  I simply viewed the source code of the search results page to see if the pre-render tag was present.  When it was, you could clearly see the “url0=” parameter and its value (the webpage being pre-rendered).  You can see this in the screenshots above.
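If you don’t want to scan the full source by hand, a quick shortcut is to paste one line into the browser’s developer console (this is just a generic DOM check, not something Bing provides):

document.querySelector('link[rel~="prerender"]')  // returns the prerender link element, or null if none exists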

And for Chrome, you can check Task Manager to see whether a page is being pre-rendered, along with its file size.  It’s easy to do.

Using Chrome’s Task Manager to view pre-rendered pages:
[Screenshot: Chrome Task Manager showing a pre-rendered page]

How Marketers Can Use Pre-Render On Their Own Websites for CRO Today
Yes, you read that correctly.  You can use pre-render on your own website to pre-load pages when you have high confidence that a user will navigate to that page.  I’m wondering how many Conversion Rate Optimization (CRO) professionals have tried that out!  Talk about speeding up the user experience for prospective customers.

Imagine pre-loading the top product page for a category, the first page of your checkout process, the lead generation form, etc.  Pre-rendering content is supported by Chrome, IE11, and Firefox, so you can actually test this out today.

I’ve run some tests on my own and the pre-rendered pages load in a flash.  But note, Chrome and IE11 support prerender, while Firefox supports prefetch.  That’s important to know if you’re a developer or designer.  Also, I believe you can combine prerender and prefetch in one link tag to support all three browsers, but I need to test it out in order to confirm the combination works.  Regardless, I recommend testing out pre-rendering on your own site and pages to see how it works.

You can analyze visitor paths and determine pages that overwhelmingly lead to other pages.  And when you have high confidence that a first page will lead to a second page, then implement the pre-render tag.  Heck, split test this approach!  Then determine if there was any lift in conversion based on using pre-render to speed up the conversion process.

Analyzing Behavior Flow in Google Analytics to identify “connected pages”:
[Screenshot: Behavior Flow report used to identify connected pages]
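If you want to quantify “overwhelmingly” rather than eyeball the Behavior Flow report, next-page transition shares do the trick.  Here’s a sketch in Python, assuming a CSV where each row is one pageview with “session_id” and “page” columns, ordered by time within each session (the file name, column names, and thresholds are just placeholders):

# connected_pages.py - find pages where one next page dominates (pre-render candidates).
import csv
from collections import Counter, defaultdict

def transition_counts(path):
    sequences = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            sequences[row["session_id"]].append(row["page"])
    counts = defaultdict(Counter)
    for pages in sequences.values():
        for current, nxt in zip(pages, pages[1:]):   # consecutive pageviews
            counts[current][nxt] += 1
    return counts

def prerender_candidates(counts, min_views=100, min_share=0.5):
    # A page qualifies when enough sessions continue, and most go to one place.
    for page, nexts in counts.items():
        total = sum(nexts.values())
        nxt, n = nexts.most_common(1)[0]
        if total >= min_views and n / total >= min_share:
            print(f"{page} -> {nxt} ({n / total:.0%} of {total} next pageviews)")

if __name__ == "__main__":
    prerender_candidates(transition_counts("pageviews.csv"))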

An Example of Using Pre-Render
Let’s say you had a killer landing page that leads to several other pages containing supporting content.  One of those pages includes a number of testimonials from customers, and you notice that a high percentage of users click through to that page from the initial landing page.  Based on what I explained earlier, you want to quicken the load time for that second page by using pre-render.  Your hope is that getting users to that page as quickly as possible can help break down a barrier to conversion, and hopefully lead to more sales.

All that you would need to do is to include the following line of code in the head of the first document:

<link rel="prerender" href="http://www.yourdomain.com/some-page-here.htm">

Note, that will work in Chrome and IE11.  If you combine prerender with prefetch, then I believe that will work across Chrome, IE11, and Firefox.
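Since rel accepts a space-separated list of keywords in HTML, the combined tag would look like this (untested, per the caveat above):

<link rel="prerender prefetch" href="http://www.yourdomain.com/some-page-here.htm">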

When users visit the landing page, the second page will load in the background.  When they click the link to visit the page, that page will display instantaneously.  Awesome.


Summary – Pre-Render is Not Just For Search Engines
With the release of IE11, Bing is starting to pre-render pages in the background when it has high confidence you will click the first search result.  And Google has been doing the same with “Instant Pages” since 2011.  Pre-rendering aims to enhance the user experience by displaying pages extremely quickly upon click-through.

But pre-render is not just for search engines.  As I demonstrated above, you can use the technique on your own pages to reduce a barrier to conversion (the speed at which key pages display for users on your website).  You just need to determine which pages users visit most often from other key landing pages, and then implement the pre-render tag.  And you can start today.  Happy pre-rendering.  :)

GG


Filed Under: bing, cro, google
