New Correlation Data Suggests Enhanced Importance of Site Wide SEO
SEOs are huge believers in signals relating to Google’s overall perception of a website.
It makes a lot of sense: if Google can understand that Wikipedia’s articles are typically of a higher standard than eHow’s, then it can make better decisions about the quality and relevance of web pages on those domains.
By using this data, search engines can also make quick decisions about new content published by these sites. Fresh content won’t have gained the links and other time-related ranking signals of an established article, but may still be relevant to the user. This may be particularly true for news or “query deserves freshness” results.
In addition to gathering data that might indicate the quality of content published on a site, it is thought that Google gathers data on the geographical location, type of user, industry, etc. that the site targets. Much of this data is difficult, or in many cases impossible, to gather without being Google, for example a site’s average SERP CTR or bounce rate.
Overall it would be fair to say that Google utilises different models to gather and analyse domain level data pointing to the authority of a website as a whole.
The potential value of domain level factors to the webmaster is immense. A single site-wide improvement may impact the ranking of several thousand pages on the site. Domain level SEO offers easy-to-implement strategies that can deliver a much higher ROI than page-by-page factors.
What data is collected by Google and how much influence it has in the overall ranking of a web page has been theorised and debated for many a year.
Overall, what we will see in this article is that domain authority signals are relatively highly correlated with ranking, and that many of the industry’s theories surrounding these factors have been largely correct, which is refreshing in light of some stunning on-page factor correlation data.
Over the past 2 months I have gathered data on 31 domain authority signals, for the top 100 results in Google, for 12,573 keywords.
I have analysed this data using Spearman’s Rank Correlation Coefficient, looking for relationships between individual factors and ranking in Google.
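To make the method concrete, here is a minimal sketch of Spearman’s coefficient using the no-ties shortcut formula. The positions and metric values are invented for illustration, and the study itself presumably uses proper statistical tooling with tie handling:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation via the no-ties shortcut:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(x)

    def ranks(values):
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Invented example: SERP positions 1-5 and a hypothetical domain metric.
positions = [1, 2, 3, 4, 5]
metric = [8, 7, 5, 6, 2]  # higher values sit at better (lower) positions

# Position 1 is the best rank, so a beneficial factor produces a
# negative raw coefficient; correlation studies typically flip the sign.
print(round(spearman_rho(positions, metric), 3))  # -0.9
```

If SciPy is available, `scipy.stats.spearmanr` computes the same quantity with proper tie handling.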
I have also studied several other areas of SEO. I have published some of these results (including domain name related factors and on page factors) although some results haven’t been made public yet and will be published over the coming weeks.
This is all part of a greater project to bring more science to SEO and make it a truly data driven industry.
There are inherent issues with correlations, and they don’t prove anything per se, but as I have covered these issues before I won’t rehash old information. What I will suggest is that if this is your first time on the site, please read this and this.
I would like to thank Link Research Tools for generously providing me with free access to their highly useful API from which all the below correlations are derived.
Please note: while domain level link metrics could be included in this post I have decided to deal with all link related factors in a separate post which will be published in the near future.
If you wish to see the keyword by keyword correlations that resulted in the mean correlations reported above, feel free to download this spreadsheet with all the relevant data.
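For readers unsure how the keyword-by-keyword figures become a single reported number, this sketch averages per-keyword coefficients. The keywords and values are made up, and I am assuming the common convention of flipping the sign of the raw position correlation so that a positive figure means “associated with ranking well”:

```python
# Made-up raw Spearman coefficients (position vs. factor) per keyword.
per_keyword_rho = {
    "credit cards": -0.31,
    "hotels paris": -0.18,
    "buy shoes": -0.24,
}

# Flip the sign (position 1 is best) and average across keywords.
mean_rho = sum(-rho for rho in per_keyword_rho.values()) / len(per_keyword_rho)
print(round(mean_rho, 4))  # 0.2433
```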
Here are some handy definitions in case you aren’t sure what some of the above factors are:
- Domain age is the time since the domain was first registered.
- PageSpeed rating is Google’s score out of 100 across several indicators of how quickly a page loads. The higher the score, the faster the performance.
- Days to domain expiry is the time until the domain expires or needs to be re-registered.
- Alexa and Compete rank are both independent measures of how much traffic a site gets. The lower the score, the more traffic the site is supposedly getting.
- Basic, intermediate and advanced reading levels are Google’s measures of the reading standard of a given page.
Google are always trying to figure out how trustworthy a site and its content are. Many theories have emerged as to what factors likely impact the trustworthiness of a whole site.
Domain age is a classic, and while I am personally sceptical about its use as a direct ranking factor, it does seem to have a strong relationship with ranking well in Google, with a correlation of nearly 0.2, which is highly significant.
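As a rough sanity check on “highly significant” (this is my own back-of-the-envelope calculation, not a figure from the study): for a single keyword with n = 100 results, a Spearman coefficient can be converted to an approximate t-statistic, and a mean taken over 12,573 keywords is far more precise still:

```python
import math

n, rho = 100, 0.2  # one keyword's result set and a correlation near 0.2

# Large-sample approximation: t = rho * sqrt((n - 2) / (1 - rho^2))
t = rho * math.sqrt((n - 2) / (1 - rho ** 2))
print(round(t, 2))  # 2.02, just past the two-sided 5% critical value (~1.98)
```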
How much of this can be written off to the extra time established sites have had to build links and content, and to pure common sense (a site running for a significant length of time will only have survived by providing for users’ needs), is hard to determine. Domain age is a factor that’s impossible to manipulate, worth considering only in the procurement of a new web property.
But by saying that it’s impossible to manipulate, I am strengthening the case for Google’s use of the factor. So the truth is, it’s difficult to say whether it’s a factor or not. It does correlate well, so I would suggest that if you come across a situation where domain age is being considered, give it some, but not substantial, weight in whatever decision you are making.
Homepage PageRank, and PageRank in general, is one of the most hotly debated topics on the SEO circuit. We all know of the PageRank Toolbar’s problems and its unrepresentative view of the real PageRank that Google calculates and uses within its algorithm.
But at the same time, the social data Google may pull from APIs could be more complete than the data I have access to, and the internal Google link graph is even larger than the gigantic SEOMoz link graph, yet we treat these representations of what Google sees as perfectly good.
My point is not that social data and link counts should be disregarded, but that perhaps some, if not all, of our suspicion of the value of PageRank as a metric is misplaced.
The importance of PageRank is backed up by its mighty performance in the correlation study: it is the highest correlated domain level authority signal at 0.244.
This, along with data on domain level link metrics that I will be publishing in the coming weeks, has solidified my view that Google certainly weights and utilises domain link popularity in the ranking of content on a site.
Thus it is reasonable to recommend the already popular theory of building links to the homepage and domain as a whole.
Whether homepage link building warrants special treatment is dubious, and I would in general advise a strategy of building links to the domain as a whole, linking to the homepage only when it feels right and not because of any particular strategy.
Days to domain expiry is an intriguing idea: how far into the future the webmaster registers a domain may indicate the webmaster’s intent to create a long-term user resource.
The marginal correlation of 0.089 probably suggests it carries minimal to no weight within the algorithm. That said, it is an easy and inexpensive factor to manipulate, and even a marginal boost in search engine performance would be worth the puny risk.
There have been theories in the past suggesting its importance to newly registered sites, which again fits basic common sense.
I can recommend registering your domain for 3+ years as a simple, one-time SEO strategy that may or may not impact ranking but certainly has no significant downside.
Alexa and Compete rank: I doubt whether the amount of traffic a site gets is a ranking factor, but its significant correlation may be indicative of a deeper positive bias from Google towards larger sites.
Whether this is due to ranking factors favouring larger sites, those sites performing better on non-discriminative factors, or something else, is worth pondering.
What I will say is that, in general, sites are large because they are useful to users, and it’s a search engine’s job to try to find sites that are helpful and useful for users.
The same logic should hold for the number of pages in Google’s index of a site; while this is highly unlikely to be a direct ranking factor, it is perhaps an indicator of other factors actually implemented in the algorithm.
If the data is taken at face value, it would appear somewhat surprising that larger sites are performing worse, although the unreliability of Google’s provision of this data appears to have impacted the results.
I would like to test this factor and other similar indicators further before drawing a definite conclusion.
The near-random correlations for the geographic location of host servers are not surprising, and in fact not very interesting at all.
I tested it purely to check whether there was any significant correlation, but I didn’t expect one, as the searches from which these correlations are drawn were conducted on Google.com.
Geographic targeting is largely held to be in use in non-US countries. In the future I hope to conduct studies on non-US versions of Google and to recheck this factor, but for the meantime the data is inconclusive and the current theories within the industry on server location should be followed.
The data here is somewhat flawed: Link Research Tools didn’t return data on a significant number of domains for this factor, and homepage reading levels may not match page level reading levels. Still, the idea of testing such a factor is very interesting.
It is something that I believe Google to be using as a factor in the personalisation of search results. For example, if they have figured out you are an eight year old, then maybe you don’t want Shakespeare or research papers returned; you want content written in the language that you, as an eight year old, use. Not to mention the fact that not many eight year olds are searching for “Macbeth” or “quantum physics”.
A broad correlation study is not conducive to making a recommendation on what language you as a webmaster should use, but it is an interesting topic and something you should consider when writing. Who is your audience, and are you writing in their language?
This was a rather cheeky test, and was never likely to reveal a ranking factor; more likely it represents the success achieved by sites registered through the above registrars.
I wasn’t surprised to see GoDaddy with the worst correlation, as its add-on products and clientèle don’t quite indicate quality or high editorial standards, not that many registrars do.
Once you understand and are disciplined in your implementation of SEO and general website ownership standards and strategies, the registrar you choose shouldn’t impact your ranking. But if you are new to the game or likely to be led astray, then a registrar and host that promote these standards may prove a more fruitful path.
The PageSpeed rating result is important: it suggests that if a site follows good principles with regard to the loading of content, it will be rewarded with higher rankings. Tests on a page-by-page basis would be even more conclusive, but this reasonably high correlation for homepage PageSpeed vindicates some of the excitement generated by Google announcing it used site loading speed in rankings.
The incredibly large correlations for both total and nofollowed external links on the homepage of a site are puzzling to say the least, although the internal-link data seems more explainable.
While I have some ideas on what may be causing such large correlations, primarily surrounding the type of site that would link to another website from its homepage, I have no real explanation. If you have an idea, guess or have experienced this in the field then please leave a comment below the post.
Wow! I saved the best till last.
Some super interesting social media correlations, with the general theme being that social media is really important.
The fact that Facebook and Google + links to the homepage of a site are the lowest correlated of the bunch is rather strange. The Facebook data could be explained by a possible block on Google accessing Facebook data. But Google Plus?
Perhaps this indicates that homepage social media shares are not used as a ranking factor, but that the other social networks have such strong user bases recommending quality content that their share counts actually represent a measure of the quality of the site as a whole, hence explaining the high correlations.
Also, the fact that Google + has a relatively small user base may mean that its disruptive influence on other factors, such as links arising from the additional traffic sent to a site by high levels of sharing on Google +, is minimised.
Another explanation is that Google is using Digg, Reddit and StumbleUpon data more than we know, and we should focus more effort on these social networks and Twitter.
But again, I’m not certain what these correlations mean. If you have any ideas on these correlations, or you have seen Reddit, Digg or StumbleUpon marketing result in increased rankings for your site, then please leave a comment below.
Further study of these factors on a page level basis would tell us more about these speculations.
The correlations for domain level authority signals are comparatively higher than those seen for on-page factors.
Domain level factors are ideal starting points for an SEO and often provide a one-time, easy change that could, based on the above results, have a substantial impact on ranking.
Even if you disregard the individual factors above as ranking signals, it would still be more than fair to conclude that domain level SEO is very powerful and you should be constantly trying to improve the domain through site-wide enhancements.
Some of the results, in particular the social and homepage link correlations, are somewhat puzzling, and I am looking forward to hearing what people think are the likely causes of such strong correlations.
I will be publishing the link related domain authority factors in the coming weeks, so stay tuned.