Many of you know that I have been working for a long time on a study to attempt to directly measure the impact of Google+ Shares on Google’s search rankings. This study has attempted to measure “causation”, not “correlation” – i.e. to see if we could prove some impact on rankings from Google+ shares. This article presents the results of that work.
This edition of the study was launched at SMX Advanced, where I presented results from an earlier study we had done that convinced me that links in shares from Google Plus and Facebook behave like traditional web-based links. This was later disputed by Matt Cutts in his keynote interview by Danny Sullivan, leading to a live discussion with me on stage.
The brief discussion we had led to my agreeing that I would rerun the study that we had done with some guidance from Matt on potential problems. While this input was limited in nature, it did lead to some ideas on how to improve the testing.
The goal of this effort was to measure the impact that Google Plus Shares have on discovery, indexing, and ranking of content, for results that are not personalized. The results of this new study are presented in this post.
And here is the shocking result in a nutshell. In our study, and in my opinion, Google Plus Shares did not drive any material rankings changes (of non-personalized results) that we could detect. To be fair, our study had some limitations – read on so you can judge for yourself if this test changes the way you think about Google Plus and SEO! In addition, there are others who have examined our same data and have drawn different conclusions. While you are here, take a moment and follow me on Google Plus.
If you want to see a panel of industry experts discuss the results in this study, I will be hosting a Google Plus Hangout on Air event (a live broadcast discussion) on Thursday, September 19th at 4 PM ET (Boston time) with Mark Traphagen of Virante, Inc, Joshua Berg of RealSMO, Pete Myers from Moz, and Marcus Tober of Searchmetrics. In this session we will have representatives from the companies that have published related studies, all in one great hoedown, to really air out what the results of our study mean, as well as the great studies that the other panelists have done on this topic.
Basic Study Structure
We picked 3 different sites to use in this test, all of which have been on the web for at least 2 years. The 3 sites were BuyVia, Stone Temple Consulting, and Travel New England.
We then wrote 2 different relevant articles for each site. One of these was used as a “Test Page”, and the other was used as a “Baseline Page”. Both pages were implemented without any links to them from any source whatsoever. Both of them received an initial set of Google+ Shares, 6 for the Test Pages, and 40 or more for the Baseline Pages.
From there, the paths diverged. We provided the Baseline Page no further attention other than to monitor indexing and ranking behavior. For the Test Page, we sent additional shares in two waves:
- At least 25 shares on August 4th, 2013
- 4 more shares from very authoritative profiles between August 28th and September 1st.
Throughout the entire process we monitored ranking behavior for the pages on a number of different long tail search terms. The August 4th and late-August bursts of shares are particularly important, because if Google+ Shares are in fact a direct ranking factor, there should be noticeable changes in rankings for the Test Pages after those shares occur. This is the basic premise the study was designed to test.
What Makes Our Study Different
Plain and simple, this is not a correlation study. For the record, I believe that correlation studies are extremely valuable, but as we know, correlation is not causation, and our study attempts to directly measure if Google Plus shares cause changes in ranking.
To accomplish this we restructured the study in several ways from the one that I presented at SMX Advanced. To start, we set the following goals:
- Eliminate accesses to the page by humans or 3rd party tools prior to discovery of the pages by Google.
- Minimize the risk of the test pages receiving links throughout the entire study.
- Check at every major step to see if there were any links implemented to the site.
- Recruit a handpicked panel of participants to work with us on the study, so that we could have better control over their behavior.
Here is a more detailed look at the methodology:
- As noted above, we handpicked 3 sites for this phase of the study.
- We published 2 different articles (the Test Page and the Baseline Page) on each of the 3 different sites and implemented zero links to them. In addition, the pages were uploaded to the site by direct FTP upload to ensure that the site’s own publishing environment did not introduce any variables.
- The articles we had written for the study contained content which was highly relevant to the site on which it was placed.
- No Google programs were referenced in these pages, including Google Analytics, Google Plus buttons, or anything else. The only links out from these pages are to other pages on the site on which they resided.
- We revealed the existence of the Test Pages and the Baseline Pages to no one. Until people were asked to share them, I (Eric Enge) was the only human being that knew the URLs for the pages.
- We hand-picked people to implement Google Plus shares to the pages in the study.
- All shares were done via the use of a Study Control Page. This page was used to keep participants from visiting the pages themselves, and I explain why we did that and how this works in more detail below.
- We sent Google Plus Shares to the Test Pages in 3 waves, as follows:
- An initial wave on 7/19/2013 of 6 shares
- A second wave on August 4th of 25 or more shares
- A final wave of 4 authoritative Google Plus profiles shared the content between 8/28/13 and 9/1/13
- We sent Google Plus Shares to the Baseline Pages in 1 wave, with all the shares taking place on 7/19/13.
- We worked hard to minimize or eliminate any visits to the pages, including asking the study participants not to visit them unless they agreed to use a Safari browser with no SEO plugins installed.
- At each stage, each share was accompanied by a strong disclaimer very similar to the following:
- All pages were monitored to verify that no links were implemented to the pages. Tools used to monitor links were Webmaster Tools, Majestic SEO, Open Site Explorer, and Ahrefs. As a final check, we also checked incoming referrers in Google Analytics.
- Results were tracked daily over the course of the study.
In short, the study methodology was designed to maximize the chances that the Google Plus Shares were the only possible factor that could result in content discovery, indexing, and/or rankings changes for the Test Pages.
How the Study Control Page Works
As mentioned above, people executed their Google Plus shares using a Study Control Page. The reason for this is that browser plugins could introduce variables into the test. For example, a browser may have a plugin installed that queries the Google Plus API resulting in Google learning about the page that way. This was particularly important during the part of the test where we were examining discovery behavior. The study control page used looks like this:
Participants were instructed to click the +1 button, and then to enter in the provided description. The presence of the description turned the +1 into a Google Plus Share.
Google Plus Shares Pass PageRank
The reason why many people believe that these shares can drive ranking is that the code for the link on the Google Plus pages does not include a NoFollow attribute. Here is an example of the code for one of the shares:
As you can see in the code above, the link is implemented in two pieces. The first is the image link, which does have the NoFollow attribute, and the second is the text link, which does not have the NoFollow attribute. This means that it passes PageRank!
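The two-part link structure described above can be made concrete with a small sketch. The markup string below is an illustrative reconstruction of a Google Plus share link (not the actual code from a Google Plus page), used here only to show how you could distinguish the NoFollowed image link from the followed text link:

```python
import re

# Hypothetical reconstruction of the two links in a Google+ share
# (illustrative only -- not the exact markup from the study).
share_html = (
    '<a href="https://example.com/test-page" rel="nofollow">'
    '<img src="thumb.jpg"></a>'
    '<a href="https://example.com/test-page">Test Page Title</a>'
)

# Find each anchor tag and report whether it carries rel="nofollow".
anchors = re.findall(r'<a\b[^>]*>', share_html)
followed = [a for a in anchors if 'rel="nofollow"' not in a]
print(len(anchors), len(followed))  # 2 anchors, 1 of which can pass PageRank
```

In this sketch, the image link is suppressed by the NoFollow attribute, while the text link is the one that can pass PageRank.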
Possible Sources of Error
There are three main possible sources of error:
1. Missing Links: It is possible that links were implemented to the pages that did not show up in our monitoring tools. This is not an insignificant potential problem, as by my estimate the cumulative links found by Open Site Explorer, Majestic SEO, Webmaster Tools, and Ahrefs are probably at best 50% of the total links to a site, and may be as low as 30%.
I base these statements on my experiences with helping sites recover from link penalties. At Stone Temple Consulting we have helped more than 50 sites recover from penalties this year, and it has happened over and over again that we would help these sites by cleaning out bad links only to have Webmaster Tools report lots of new links the next time it was queried. The reported new links were not new and I have no doubt that Google knew about them before, but simply did not choose to include them in the Webmaster Tools report. However, once we cleaned out some of the bad ones, we got exposed to some more of the links residing in their database.
2. Ranking Churn: The study was vulnerable to general ranking movement and algo adjustments that our Baseline Pages did not enable us to perceive. This is also a pretty significant risk.
3. Other Environmental Factors: Let’s face it – what Google uses to rank search results involves hundreds of factors. There are lots of variables that could impact the results. In spite of our attempts to minimize people accessing the content, a small number of people chose to do so anyway. We nonetheless believe that the results have validity for reasons you will see explained below.
What follows are the raw results for the 3 sites participating in this study. Please note that in all the charts shown here a ranking of 100 really means: “not found in the top 100 results”.
BuyVia Results

The Test Page and Baseline Page tested on this site were articles written by two different users about notable online shopping experiences they had. For the Test Page, we monitored the results for 6 different search phrases.
The most notable result occurred for one particular search phrase, which was a long tail (6 word) phrase enclosed in “”. Note that the use of the “” makes this term even more long tail in nature than it would otherwise be. Here is a chart of the rankings over time for that particular phrase:
This is curious because the rankings got worse for 20 days after the burst of shares and then improved suddenly on August 24th. Since that time the ranking has held relatively steady even after the 4 authoritative shares took place. So what happened with the other terms? Let’s take a look:
These terms also see improvement on August 24th, but not quite as remarkable a change as we saw for the term previously highlighted.
How did the Baseline Pages do? I have broken that into two charts, simply so the terms that ranked high will be easier to examine, as otherwise the terms ranking in and around the 40th position would make it difficult to really perceive any movement in the terms ranking in the top 5 or so positions:
The results are pretty intriguing in that they do not show movement on August 24th, but they do show movement on August 31st.
Stone Temple Consulting Results
The Test Page and Baseline Page tested on this site were articles written by two different users about notable search engine experiences they had. For the Test Page, we monitored the results for 11 different search phrases.
As with BuyVia, there was one search phrase that showed pretty dramatic movement. It was also a long tail search phrase enclosed in “”. Here is the detail of the rankings of that phrase over time:
On this site, the rankings declined for 24 days after the August 4th burst of shares and then jumped up on August 28th. Since then, the ranking has held relatively steady even after the 4 authoritative shares took place. So what happened with the other 10 terms? Let’s take a look:
These terms offer a very mixed bag of results, with some of them showing significant improvement on the 28th, and the others being relatively flat.
How did the Baseline Pages on this site do? Here are the charts:
The movement on the Baseline Pages occurs on the 29th of August, and it moves in the opposite direction – the rankings drop. In addition, one of the terms on the Baseline Page makes a significant negative move on the 4th of September.
Travel New England Results
The Test Page and Baseline Page tested on this site were articles written by two different users about notable experiences they had traveling in New England. For the Test Page, we monitored the results for 10 different search phrases.
None of the monitored terms showed significant movement as you will see in a detailed look at the data:
For completeness, this is what we saw on the Baseline Pages:
No major movement was seen at any time during the monitoring of the results for the test on this site. The Baseline Pages actually show more movement around August 31st than the Test Page terms do at any time.
Our test really had 3 major goals – to see if Google Plus would drive discovery, indexing, and ranking, so I will evaluate that in three independent pieces as follows.
Discovery

In my opinion, it is highly likely that Google Plus drove discovery of the content. Here is a sequence of accesses to one of the Test Pages, extracted from the log file on one of the sites:
The line items that refer to “+https://developers.google.com/+/web/snippet/” are Google Plus Sharing events taking place on the content in question (reference: Google Developers site). And, of course, +http://www.google.com/bot.html is GoogleBot. Notice how it takes less than 6 minutes for GoogleBot to come to the page after the first share of the page, and there is a visit by GoogleBot to the site for each share. There are no other accesses of any kind to the content in this time period.
If you wanted to geek out on this in a serious way, you could dig into why it is that GoogleBot comes back to the page 3 times – it’s almost as if the placement in the crawling queue upon a +1 is automatic, without any effort to check if the page had already been visited recently. It appears that there is no de-duping of the crawl queue at this level. But, that is the topic for another piece of work that someone else should do!
What are the possibilities of corruption of this part of the test? In my opinion, they are nearly zero. The log file shows no other accesses to the content between the date of the initial Shares and the visit by GoogleBot. There is a microscopically tiny possibility that someone implemented a link during that 6 minutes, and Google crawled it right at the instant the link was implemented, and that our sources for detecting links have still not seen any evidence of it, but that possibility is low enough to be dismissed.
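The share-to-crawl timing check described above can be sketched programmatically. The log lines below are illustrative stand-ins modeled on the user-agent strings quoted in the text, not the study’s actual log entries:

```python
from datetime import datetime

# Illustrative log entries (timestamp, user agent) modeled on the study's
# description -- not the actual log file contents.
log = [
    ("19/Jul/2013:10:00:12",
     "Mozilla/5.0 (+https://developers.google.com/+/web/snippet/)"),
    ("19/Jul/2013:10:05:48",
     "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"),
]

fmt = "%d/%b/%Y:%H:%M:%S"
# First Google+ share crawler hit, and first GoogleBot hit after it.
first_share = next(t for t, ua in log if "+/web/snippet/" in ua)
first_bot = next(t for t, ua in log if "google.com/bot.html" in ua)
delta = datetime.strptime(first_bot, fmt) - datetime.strptime(first_share, fmt)
print(delta.total_seconds() / 60)  # minutes from first share to GoogleBot visit
```

With these sample timestamps the gap works out to 5.6 minutes, in line with the “less than 6 minutes” observation from the real log file.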
In addition, the Google Developers page for implementing +1 buttons states the following:
By using a Google+ button, Publishers give Google permission to utilize an automated software program, often called a “web crawler,” to retrieve and analyze websites associated with a Google+ button.
While this does not say they will index it, or rank it, it does say that they reserve the right to crawl it, and it was important enough to them that they explicitly stated that they might do so.
Indexing

Initial shares of all six articles (3 Test Pages and 3 Baseline Pages) were completed on July 19th. All six articles initially appeared in the index on July 29th – 10 days later.
We first discovered that the pages were indexed via daily manual checking using a search query similar to this:
site:stonetemple.com “[insert long tail search phrase here]”
The “” were part of the search phrase, and each of the phrases tested were long tail queries that did not match up with other pages on the site being tested.
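The daily manual check can be expressed as a tiny helper that builds the exact-match query; the domain and phrase below are placeholders, not one of the actual study queries:

```python
def index_check_query(domain: str, phrase: str) -> str:
    """Build the exact-match site: query used for the daily manual
    indexing check (the quotes are part of the query itself)."""
    return f'site:{domain} "{phrase}"'

# Placeholder phrase for illustration only.
print(index_check_query("stonetemple.com", "example long tail phrase"))
# -> site:stonetemple.com "example long tail phrase"
```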
Once we saw that the pages were indexed we re-checked to make sure that the pages had not received any links, and no links were found to any of the 3 pages. This does not mean that the content received no links, only that the collective data available to us from Google Webmaster Tools, Majestic SEO, Open Site Explorer, and Ahrefs showed no links.
As mentioned above, my guess is that these tools cumulatively may reveal only about 30% to 50% of the links to a given website. So there could be links to the pages from other sites that we did not see. To help minimize this as a potential source of error, we also checked incoming referrers to see if the test pages were getting any traffic from other web sites, and none was found.
Other possible sources of corruption exist. At one point Matt Cutts did suggest to me that visits to the pages by browsers with SEO toolbars installed could be a potential problem. What those toolbars may do is unpredictable. During the period after the initial GoogleBot visits on the 19th of July, there were a handful of people who appeared to have visited the page with a browser. Some of these did occur prior to the confirmed indexing of the content.
At least one person dropped one of the pages into a Flipboard magazine, so we did see visits from a Flipboard crawler from time to time to some of the content. Based on the checking I have done so far, I do not see evidence that these Flipboard pages are indexed, so I don’t think that Google would have used those links to drive indexing, but I welcome corrective feedback from Google on this point.
I do not think that, once the page was discovered, incremental browser accesses would be a significant factor in causing a page to be indexed, regardless of the presence of an SEO toolbar. It is possible that such SEO toolbars could cause additional exposure of the page to Google via the Google+ API, but such additional accesses would only reinforce the notion that Google+ can help a page get indexed.
One other point of discussion is the long delay between the shares and initial indexing. Clearly, discovery of the page did not result in immediate indexing. Why did it take 10 days? We all assume that Google captures Google+ data in real time, but this indexing delay looks more like crawling behavior. However, the fact that all 6 pieces of content were indexed on the exact same date leads me in a different direction, one that suggests some level of “Sandbox” type behavior.
Another observation: over a 10 day timeframe, posts on higher activity Google Plus accounts get pushed down the stream fairly quickly. Once a post has 10 other posts in front of it, does Google count it the same way? We don’t really know.
But, in summary, with the information available to me, I don’t see any other signals that would have caused the posts to be indexed.
Ranking

Once we saw that a page was indexed, we were immediately able to find search queries for which the page ranked. However, this does not mean that the shares were driving ranking. As per the original Sergey Brin – Larry Page thesis, each page on the web has a small amount of innate PageRank. This PageRank by itself might cause a page to rank for certain types of long tail queries, even in the absence of any other signals.
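The innate-PageRank point can be illustrated with a toy power iteration: in the simplified PageRank model, a page with zero inbound links still retains the damping-term floor of (1 - d)/N. The tiny graph below is purely hypothetical:

```python
# Minimal PageRank power iteration over a toy 3-page graph (illustrative).
# Page "c" has no inbound links, yet still retains the damping-term
# floor (1 - d)/N of PageRank -- the "innate" PageRank mentioned above.
links = {"a": ["b"], "b": ["a"], "c": ["a"]}  # c links out; nothing links to c
d, n = 0.85, len(links)
pr = {page: 1.0 / n for page in links}

for _ in range(50):
    # Every page starts each round with the (1 - d)/N baseline, then
    # receives a d-weighted share of PageRank from each page linking to it.
    new = {page: (1 - d) / n for page in links}
    for page, outs in links.items():
        for target in outs:
            new[target] += d * pr[page] / len(outs)
    pr = new

print(round(pr["c"], 3))  # (1 - 0.85) / 3 = 0.05, despite zero inbound links
```

That small residual score is enough, in principle, for a page to surface on very long tail queries with no competition, which is why indexing alone cannot tell us whether the shares affected ranking.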
In addition, a page with no links may also gain some benefit from the overall authority of the domain on which it resides. How this works exactly is not known outside of Google. However, it is clear enough that we need more data to be able to conclude that we would see ranking benefit from G+ shares. This is the reason we sent two additional waves of shares in the direction of the pages being tested.
It is also important to note that the risk of undetected corrupting links goes up over time! If you believe, as I do, that the available tools only give you a portion of the total link graph, the chances of malicious behavior by people who become aware of the test, or people innocently making stupid mistakes goes up. In addition, as stated before, general rankings churn, and larger scale algo changes can enter into the mix.
That said, to me, the most remarkable thing about the data in this part of the test is how unremarkable it is. For two of the sites, we see some things initially moving in the wrong direction, and then moving up in the rankings but only after a long delay, and the Baseline Pages moving on different days. For one of the sites we saw no material movement at all. Based on this data, this study did not show any material evidence of Google Plus Shares driving rankings movement for the Test Pages. Read more on my thoughts on this in the summary below.
Don’t Correlate Me Bro?
Most of you have seen the correlation studies. Both Moz and Searchmetrics have some excellent work in this area. The net is that social signals, such as Likes, Shares, and +1s have a very high correlation with higher search rankings. Surely that proves that these things drive search rankings, right? They do not, nor do they actually claim to. It is important to remember that many things can be behind a correlation. Here are a few examples of interesting correlations:
| This Item Strongly Correlates | With This Item | The Real Cause |
| --- | --- | --- |
| Eating Ice Cream | Drowning Deaths | Both happen when it’s hot out |
| People Who Dislike Horseradish | Say They Do Not Have an Above Average Libido | No idea! (Source: correlated.org) |
| Use of Internet Explorer | US Murder Rate | Pure coincidence (Source: Buzzfeed) |
| Shares, Likes, +1s | Higher Rankings | People socially share content because it is good, the same reason why they might link to it |
That conclusion is an important one. Producing great content will “cause” some strong correlations, because people like to share great content, be it by link, or via social site. Of course, the correlation is strong. To help speed you along to accepting this conclusion – Google can’t even see who Likes a particular piece of content, so clearly that is not causation in action.
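The confounder argument can be demonstrated with a toy simulation: a hidden “content quality” factor drives both shares and rankings, producing a strong shares-vs-rankings correlation with no causal link between the two. All numbers here are synthetic:

```python
import random

random.seed(42)

# Hidden confounder: content quality drives BOTH social shares and
# ranking strength. Neither outcome causes the other.
quality = [random.random() for _ in range(500)]
shares = [q * 100 + random.gauss(0, 10) for q in quality]
rank_strength = [q * 50 + random.gauss(0, 5) for q in quality]  # higher = better

def pearson(xs, ys):
    # Plain Pearson correlation coefficient, no external libraries.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(round(pearson(shares, rank_strength), 2))  # strongly positive
```

Shares and rankings come out strongly correlated even though, by construction, neither one influences the other. That is exactly the trap a correlation study cannot escape and a causation test like this one is designed to probe.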
So are correlation studies bad? Not at all. In fact, they offer tremendous value in helping us understand web behavior. These particular studies remind us of the critical link between great content, online reputation and trust, and getting results in both search engines and on social media sites. The fact is that these things are inextricably linked, not by one causing the other, but because there is a common set of behaviors that will cause both. You can summarize some of the desired behaviors as follows:
- Create great content
- Build relationships with influencers
- Build relationships with communities of likeminded people
- Strive to help others out
You can add your own to the above list if you like, but you get the idea.
What About the PageRank Flow in Google Plus?
Why would Google allow PageRank to flow in parts of Google Plus if it did not intend to use it? This study does not prove that they don’t use it, and in fact, I believe that they do use it, just not the same way that they have used traditional web links. A couple of areas where this PageRank could have an impact are:
- You may have seen Google+ content in your search results. Google Plus posts made by people with higher PageRank profiles may get indexed more rapidly than Google Plus posts made by other profiles.
- A higher PageRank of a Google Plus profile may increase the likelihood that those posts will show up in the personalized search results for people who follow that profile.
Mark Traphagen did an excellent write up of Google Plus driving personalization that you can read for more information on this topic.
But why would Google not use the PageRank in Google Plus Shares to drive ranking? I believe there are three answers to this question:
- It would be too easy to game – Google does not want people running around sharing content with SEO as their goal. I believe that Google Plus provides a tremendous amount of valuable information to Google about the preferences of people that can provide significant enhancements to their personalization algorithms. Put simply, bad SEO link building behavior would mess up the value of this data source.
- The data is probably too sparse – yes, we all think it is this huge mountain of data that they can use, so they must be using it. Certainly adoption of Google Plus is growing rapidly, and there are many people who do a LOT of posting on G+. But there are far more people who do not. Even in the SEO industry, a significant number of people are not at all active on Google Plus.
- Google Plus Shares don’t require the same level of effort as implementing a link on a web site, and they don’t involve as much commitment – on an active profile, a Share has scrolled out of sight within a few days’ time.
As human beings we are all inherently flawed. In the search industry, we have a strong tendency to assume that our belief that Google could leverage a signal as a ranking factor means that they do leverage that signal. The problem is that speculating that something is a great ranking signal is different than figuring out how to effectively use it. Whether or not a source of data makes a good ranking signal is ultimately determined by a combination of highly sophisticated mathematics, and lots and lots of real world testing.
The reality is often quite different than the speculation. It behooves us, myself included, to remember that. The correlation studies actually give us fantastic information. Information that does not suggest we pursue manipulative behavior, but in fact suggests that we pursue world-class content marketing as a core business strategy.
Here is how I would sum it up:
- Google+ shares do drive discovery.
- Google+ shares probably drive indexing as well, with some possibility of error in this stage of testing (links we don’t know about).
- We saw no evidence of Google+ shares driving ranking.
It is also interesting that Google Plus shares do not appear to exhibit “Query Deserves Freshness” (QDF) behavior. If a social share were treated as a potential indicator of news, you would expect a very fast indexing response, with an initially higher ranking that declines over time. Yet, our Test Pages took 10 days to appear in the index. This is clearly not breaking-news level behavior. If Google+ is used at all in this fashion, it would probably require a much bigger burst of social sharing activity for QDF treatment to happen, and it may require the support of links elsewhere on the web.
Our test was designed to eliminate distracting signals, and hence we did something that in its own way was a bit unnatural. We attempted to minimize, and in fact eliminate, re-shares. Our text description of the articles was not normal – in fact, it was a warning to others not to look at the content. We needed to do that to get as pure a measurement as possible. In addition, every share of the content used the exact same description, which is also somewhat unnatural.
These constraints could possibly impact the validity of the findings. In addition, other behavior patterns, such as organic re-shares, comments, and other activity could help improve the results. But, ordinary web links do not depend on comments and re-shares to carry weight.
My opinion is that if there is any impact from links in Google Plus shares, that these links do not get treated the same way a regular link does even though we can see that links in Google Plus pass PageRank. For purposes of this discussion, and for the study, we are not talking about “personalized” results. That is another matter, and Google Plus Shares by people you know do impact the personalization of your results.
Google can filter link signals, including those in Google+, at many levels. Why would they allow shared links to pass PageRank if they did not want to use that PageRank in some manner? I believe that they want to use it to help them identify more authoritative profiles. Many people believe that more authoritative Google Plus profiles will have their posts indexed by Google more quickly.
Many of you will also have seen Google Plus posts in your web search results, and these generally come from people whom you are following. It may be that you are more likely to see such results more often for people who have more authoritative profiles.
As for treating a Google+ Share the same way they treat a traditional web link, remember my theory that a Google+ share does not involve either the same effort or the same commitment as a traditional web link, which is harder to implement and is more permanent.
I acknowledge that there are many ways to point at the holes in this study, and I have offered my interpretation of it.
There are smart people I know who have seen this data who disagree with my interpretation. Are you one of them? Let me know via your comments and feedback.
If you want to see a panel of industry experts debate this issue, put aside some time on Thursday September 19th at 4 PM ET for a Hangout on Air Event (a live broadcast). I am honored to have Mark Traphagen of Virante, Inc, Joshua Berg of RealSMO, Pete Myers from Moz, and Marcus Tober of Searchmetrics joining me. In this session we are going to have representatives from the companies that have published major related studies, all in one place. Come watch the great debate! And while you have a chance, please follow me on Google+!
The people who performed shares of our content played a key role. The following chart shows the PageRank distribution of the people who participated in the first two waves of shares:
The hand-picked participants who executed the shares in the first 2 waves of this study were:
- From Stone Temple Consulting: John Biundo, Kathy Brown, Carter Elkin-Paris, Eric Enge, Kristina Ferrari-Barrett, Art Gould, Jim Parent, Tom Perella, Rob Pirozzi, Andrea Shoemaker, Charley Spektor, Clark Taylor
- From Extra Space Storage: Scott Jensen, Ran Richey, Garret Stembridge
- From SEO Copywriting: Laura Crest, Heather Lloyd-Martin
- Debra Mastaler
- Larry Kim
- Randy Krum
- Dana Lookadoo
- Dan Shure
- Angie Schottmuller
- Melissa Fach
- Rob Garner
- Greg Jarboe
- Suzanne MacDonald
- Ethan Hays
- Dave Rohrer
- Casie Gillette
- James Zolman
- Rudy de la Garza
- Chase Johnson
- Doug Jones
- Dave Middleton
- Erik Bleckner
- Kenneth Wu
The contributions of these people can’t be minimized as they all put somewhat unusual posts on their Google Plus profiles, and their help was critical to this test!
There are others who deserve recognition for their assistance in earlier phases of this study. In my initial studies, I had many engaging discussions with Marcus Tober of Searchmetrics, and these helped me conceive this latest edition. In addition, two other sites participated in earlier versions of the study: Extra Space Storage and Amsterdam Printing.