Posted by Cyrus-Shepard
Note: This post was co-authored by Cyrus Shepard and Rida Abidi.
Everyone wants to win Google featured snippets. Right?
At least, it used to be that way. Winning the featured snippet typically meant extra traffic, in part because Google displayed your URL twice: once in the featured snippet and again in the regular search results. For publishers, this was known as “double-dipping.”
All that changed in January when Google announced they would de-duplicate search results to show the featured snippet URL only once on the first page of results. No more double-dips.
Publishers worried because older studies had shown that winning featured snippets drove less actual traffic than the “natural” top-ranking position. With the new change, winning the featured snippet might now actually translate into less traffic, not more.
This led many SEOs to wonder: should they opt out of featured snippets entirely? Are featured snippets causing publishers to lose more traffic than they potentially gain?
Here’s how we found the answer.
Working with the team at SearchPilot, we ran an A/B split test experiment to remove Moz Blog posts from Google featured snippets and measure the impact on traffic.
Using Google’s data-nosnippet tag, we identified blog pages that had won featured snippets and applied the tag to the main content of each page.
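As a rough illustration of this approach (a simplified sketch, not Moz’s actual template), data-nosnippet is a boolean HTML attribute that Google supports on div, span, and section elements. Wrapping the main article body in a tagged element tells Google not to use that text for any snippet, including featured snippets:

```html
<article>
  <h1>What Is a Featured Snippet?</h1>

  <!-- Content inside this div is excluded from Google snippets.
       The heading above, outside the wrapper, remains eligible. -->
  <div data-nosnippet>
    <p>A featured snippet is a highlighted answer box that Google
       shows above the regular organic results.</p>
    <p>The rest of the main page content goes here.</p>
  </div>
</article>
```

Because the attribute applies per element, you can scope it to just the sections you want withheld rather than the entire page.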
Our working hypothesis was that these pages would lose their featured snippets and return to the “regular” search results below. Most of us also expected to see a negative impact on traffic, but we wanted to measure exactly how much, and to learn whether the featured snippets would return after we removed the tag.
In this example, Moz lost the featured snippet almost immediately. The snippet was instead awarded to Content King, and Moz returned to the top “natural” position.
Here is another example of what happened in search results. After launching the test, the featured snippet was awarded to Backlinko and we returned to the top of the natural results.
One important thing to keep in mind is that, while these keywords triggered a featured snippet, pages can rank for hundreds or thousands of different keywords in different positions. So the impact of losing a single featured snippet can be somewhat softened when your URL ranks for many different keywords — some of which trigger featured snippets and some of which don’t.
After adding the data-nosnippet tag, our variant URLs quickly lost their featured snippets. How did this impact traffic? Instead of gaining traffic by opting out of featured snippets, we found we actually lost a significant amount of traffic quite quickly.
Overall, we measured an estimated 12% drop in traffic across all amended pages after losing featured snippets (95% confidence level).
This chart represents the cumulative impact of the test on organic traffic. The central blue line is the best estimate of how the variant pages, with the change applied, performed compared to how we would have expected them to perform without any changes. The blue shaded area represents our 95% confidence interval: there is a 95% probability that the actual outcome lies somewhere in this region. If this region sits entirely above or below the horizontal axis, the test is statistically significant.

What did we learn?
With the addition of the “data-nosnippet” attribute, the test had a significantly negative impact on organic traffic. In this experiment, owning the featured snippet while not ranking in the top results provided more value to these pages, in terms of clicks, than not owning the featured snippet and ranking in the top results.
By adding the “data-nosnippet” attribute, not only were we able to stop Google from using that section of the HTML page as a snippet, but we were also able to confirm that we would still rank in the SERP, whether in position one or lower.
As an additional measure, we were also tracking keywords using STAT Search Analytics. We monitored ranking changes for pages that had featured snippets, and noticed that it took about seven days or more from the time we launched the test for Google to register the changes we made and for the featured snippets to be taken over by another ranking page — if another page was awarded the featured snippet spot at all. The turnaround was quicker after we ended the test, though, as some of these featured snippets returned as quickly as the next day.
However, one downside of running this experiment was that, although some pages were crawled and indexed with the most recent changes, the featured snippet did not return: it had either been awarded to competing pages or was gone entirely.
To summarize the key findings of this test:

- Google’s nosnippet tags allow publishers to opt out of featured snippets.
- In this test, we measured an estimated 12% drop in traffic after losing featured snippets.
- After ending the test, we failed to win back a portion of the featured snippets we previously ranked for.
For the vast majority of publishers, winning the featured snippet likely remains the smart strategy. There are undoubtedly exceptions, but as a general best practice, if a keyword triggers a featured snippet, it’s typically in your best interest to rank for it.
What are your experiences with winning featured snippets? Let us know in the comments below.
Join Moz SEO Scientist, Dr. Pete Meyers, Wednesdays in April at 1:30 p.m. PT on Twitter and ask your most pressing questions about how to navigate SEO changes and challenges in a COVID-19 world. Tweet your questions all week long to @Moz using the hashtag #AskMoz.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top 10 hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!