The Future of TheOpenAlgorithm

After an incredible few days at SearchLove I realised I kept getting the same question from SEOs who have been following TOA: "are you still working on that correlation thing?"

Shit, YES. I did take a break to work on some other stuff, but I've been working my balls off learning SQL, taking courses on writing better code, writing scripts, reading research papers and designing experiments, plus I've been working on other cool projects.

Whoops, my bad, I guess I broke the cardinal rule of blogging, relationships and just about everything else: "keep your followers in the loop."

I've known where TOA is going next for a few months now, but over the last couple of weeks I've ironed out the details and am ready to start gathering a massive dataset for the next iteration of the project.

In this post I'm hoping to answer that question. Essentially, I'm going to out myself to the industry, so if I don't reach the targets set below you guys can hold me accountable.

What’s next

As everyone knows, the ultimate goal of this project is to research relationships between factors and ranking in Google that are more causal (rather than merely correlational) and to create a model search engine algorithm.

I’ve done a lot of reading and emailing in the last couple of weeks and it seems like weighted regression using the pointwise learning to rank approach is my best shot at creating a successful model.

I'm sure this all sounds like gobbledygook to most people reading this post, because in truth some of it is still gobbledygook to me.

I've never taken a stats course, have only run a couple of basic multiple linear regressions, don't really know anything about machine learning, and have only just finished differential calculus in my last year of high school. But when you apply yourself and read these things a couple of times over, it starts to sink in.

Right now I know enough to understand what data needs to be gathered and how it needs to be analysed.

But once I have the dataset, I'm going to try to find someone much better at this kind of analysis than me, and we'll run the regression together.

Because regressions have almost no computational cost, we can also try several methods other than weighted regression that might work.
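
To make that a little more concrete, here's a rough sketch of what pointwise learning to rank via weighted regression might look like. The factor values, positions and weighting scheme are all made up for illustration; this is not the actual analysis.

```python
# Illustrative sketch of pointwise learning to rank via weighted least-squares
# regression. The factor values, positions and weighting scheme are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row is one (keyword, result) pair; columns are candidate ranking factors,
# e.g. [# of linking C-blocks, exact keyword in title (0/1), # of tweets].
X = np.array([
    [120, 1, 34],
    [15,  0, 2],
    [300, 1, 110],
    [4,   0, 0],
])

# Pointwise LTR treats each result independently and regresses its position
# in Google (1 = top result).
y = np.array([1, 40, 3, 87])

# One possible weighting scheme: care more about getting the top positions
# right than about the ordering of results further down the page.
weights = 1.0 / y

model = LinearRegression()
model.fit(X, y, sample_weight=weights)

print("coefficients:", model.coef_)
print("predicted positions:", model.predict(X))
```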

At this stage of the process I'll become less like Steven Levitt and more like Stephen Dubner; if you don't get this you haven't read the Freakonomics books (why not? They're great).

P.S. if anyone knows Levitt feel free to drop him an email and let him know I’ve got a big dataset for him :)

There's a 97% chance it's going to fail

Ok, I just made that number up, but the chances of me getting a model that correlates above .65 with the Google algorithm (that's my goal) are extremely low.

First, I can’t test things like user engagement, CTR, etc.

Second, their super smart, PhD-holding engineers have definitely come up with more advanced topic models than LDA.

Third, a couple of much smarter people in the SEO industry have tried similar methods before and gotten models not worth publishing.

Fourth, I’ve talked to some really, really smart guys in the last couple of days, and while all of them were supportive, none of them actually thought I would pull it off.

Fifth, weighted regression and pretty much any other type of regression is going to have its pitfalls.

Why bother

With all the odds stacked against me you are definitely wondering why I would bother spending my free time for the next few months running a project likely to fail.

  • Being the nerd that I am, I actually enjoy this stuff.

  • I set myself the goal and promised the SEO industry I would come up with a model search engine algorithm, so that's what I'm going to do.

  • If the model fails, it still succeeds in a way: we can pretty much say that SEOs know almost nothing about the Google algorithm, so they should just be doing some RCS. And even if the model misses its correlation target, I should still be able to answer some key questions, like how much social shares actually matter, etc.

  • I will still be running correlations, which after Penguin and Panda might prove quite interesting, plus I'll have correlation data on the new factors I'm going to test (social!!).

  • That 3% chance of success (or whatever the number actually is) is like gold dust; if the model is successful there are unlimited ridiculously cool things to do with it.

What’s going to happen

I'm going to try not to get too technical or bogged down in the details here:

  1. I’m going to finish rewriting my code from the correlation study (when I first wrote the code I had only been programming for 3 months, so you can imagine how cringe-worthy it is when I look back at it).
  2. I’m (with your help) going to figure out what new factors I should test in this iteration that I didn’t test with the correlations, think social, more advanced topic modelling, anchor text, etc.
  3. I’m going to write and test the code to gather the data for these factors.
  4. I’m going to figure out what keywords to gather data for.
  5. I’m going to split these keywords up by industry and by likely user intent (navigational, informational, commercial and transactional), unfortunately I will have to classify intent by hand (that’ll be a fun weekend).
  6. I'm going to go back to our incredible, amazing data providers and ask for their support for the project one last time.
  7. I’ll run the scripts and gather the data. Not sure whether I’m going to include Bing here, I’d be happy to do it, if SEOs would find it useful (comment below) and I can get the data required.
  8. I’ll run some fun tests and publish the results. I think it would be interesting to know in what industries universal search is most prevalent, which industries use social the most, how well does the Bing algorithm correlate to Google’s, what domains show up most in search results, what individual URLs show up most in the results, etc.
  9. I’ll run and publish the correlations in the same way I did last time.
  10. I'll create and publish some useful algorithms that might come in handy for SEOs or for future research, e.g. can I create a model that accurately identifies query intent using the data at my disposal and my own evaluations of that intent? (See the sketch after this list.)
  11. I’m going to find some much smarter people than myself to help me create the model.
  12. I’ll publish the model and the normalised coefficients (which will be most useful in determining the importance of each factor).
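
As a taster for step 10, here's a minimal sketch of the kind of query intent classifier I have in mind. The keywords, labels and model choice are purely hypothetical; the real thing will depend on the data I end up with.

```python
# Hypothetical sketch of a simple query intent classifier (step 10). The
# keywords, labels and model choice are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

keywords = ["facebook login", "how to tie a tie", "buy running shoes",
            "best laptops", "twitter", "cheap flights to new york"]
intents = ["navigational", "informational", "transactional",
           "commercial", "navigational", "transactional"]

# Bag-of-words features plus a linear classifier; trained on hand-labelled
# keywords, then used to label the rest of the keyword set.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(keywords, intents)

print(classifier.predict(["buy concert tickets", "wikipedia"]))
```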

So that's it really. I will do my best to blog about any major steps forward in the project, and I'll definitely be tweeting more often (probably the best place to follow exactly where I am with the project).

Finally Killing Off Keyword Density

While any SEO worth their salt knows that keyword density is not a ranking factor, there are some out there who still believe it is a signal in the Google algorithm or is somehow related to ranking well in Google.

Often this myth is perpetuated by the untrained eye, or sold by snake-oil salesmen looking to oversimplify the Google algorithm in the hopes of screwing some SEO noobs out of a few bucks.

This short article’s goal is not to shock you with this amazing new revelation but to provide a single scientifically backed piece of proof that keyword density is to be ignored. This article is a handy link for all the SEO consultants who have clients with notions about keyword density and its importance.

Anecdotal evidence

Let’s think about keyword density and Google’s goals logically.

Google’s goal as a search engine is to provide relevant, useful results to users. That’s why users love Google and keep coming back for more and that’s the only reason Google surged to dominance in the search engine field, not marketing budgets, not clever tricks but providing the best results for search queries.

As a user, which would you prefer: a page that consistently and methodically repeats the same keyword, or a page that uses similar words with the same meaning to make the writing flow and sound more natural? The second page, right?

So why would Google reward a page that is less useful to users than one that is more useful when that runs against their core goal as a search engine?

They wouldn’t.

Scientific evidence

I recently completed a study of over 12,000 keywords, comparing data on various search engine factors to the ranking of the top 100 results for each keyword in Google. In total I looked at 1.2 million web pages. I used Spearman's Rank Correlation Coefficient to compare the data on the various factors I tested with each web page's ranking in Google.

For example, I figured out the keyword density for each of the 1.2 million web pages and compared that with each of those pages' rankings.

Spearman (the statistical measure I used) gives you a number between -1 and 1 representing the nature and strength of the relationship between keyword density and ranking well in Google.

A negative number means there is a negative relationship, i.e. when keyword density decreases, ranking in Google increases.

As that number gets closer to either -1 or 1 the strength of the relationship increases. So a number near zero means there is no relationship between keyword density and ranking well in Google.

As it turns out, my study showed that the correlation between keyword density and ranking well in Google is -0.028126693, which means there is pretty much no relationship between keyword density and ranking well in Google, and if there is a small relationship it is a negative one.
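
For anyone who wants to see the mechanics, here's a minimal sketch of that calculation using scipy. The density values are invented and it's computed for a single keyword; in the study this is done per keyword and the per-keyword correlations are then averaged.

```python
# Minimal sketch of the Spearman calculation described above, for a single
# keyword; the keyword density figures are invented for illustration.
from scipy.stats import spearmanr

# Positions 1-10 in Google for one keyword, and each page's keyword density.
positions = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
keyword_density = [0.021, 0.004, 0.035, 0.012, 0.000,
                   0.018, 0.027, 0.002, 0.040, 0.009]

rho, p_value = spearmanr(keyword_density, positions)
print(rho)  # a value near 0 means no relationship for this keyword

# In the study this is repeated for every keyword and the per-keyword
# correlations are averaged to give the reported mean correlation. Note that
# correlating against position (1 = best) flips the sign relative to
# "ranking well", so the sign convention has to be applied consistently.
```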

But what would a number be without a chart? Let’s compare keyword density’s correlation with some other factors I tested:

Chart: Keyword Density Compared

The final blow

There is no better form of proof than having logic and science confirmed by mother Google. Here's what Matt Cutts, Google's #1 webmaster spokesman, has to say on the matter: "Keyword Density: Not really a factor. Yes the keyword should be present but density is not important. Include the keyword but make writing sound natural."

If logic, science and Google all say keyword density doesn’t matter, then it doesn’t matter, so don’t believe anybody who tells you it does and stop hounding your SEO guy/gal about it.

Why Google’s Algorithm Doesn’t Care What You Write

Most SEOs understand that on page factors have declined, and will continue to decline, in importance within the Google algorithm and SEO.

We all know that Google takes no notice of the meta keywords tag, and little notice of other tags, markup and HTML structures, which has been backed up by my data.

But “in content” factors i.e. ranking signals related to the actual text/content on a page, would appear to be a separate matter.

Spamming the meta description doesn't hurt a user's experience, but spamming the actual content the user sees is detrimental to the user and therefore not worth doing, right? Surely damaging the user's experience isn't worth those few extra visitors from Google, seeing as they will probably convert less due to the comparatively less helpful content?

It sounds good and makes sense, and as a result we in the SEO community have come to the conclusion that while Google might ignore those other on page factors, they probably have some really smart methods to figure out what a page is about and the quality of the content on that page.

I decided to look at 5 really basic factors that you would think may feature in some element of the Google algorithm or would be closely related to a ranking factor.

While I understand there are better and more advanced methods (LDA, TF*IDF) for comparing the content on a page to a given keyword or judging the quality of a given piece of content, I tested these really basic factors to judge Google's likely weighting of "in content" factors.

I may test more advanced topic modelling algorithms/factors in the future but for now I have stuck to some old information retrieval reliables.
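
For the curious, here's a rough sketch of how these basic in-content factors can be computed from a page's extracted text (e.g. via an extractor like Diffbot's article API) and its raw HTML. The bad-word list here is a tiny placeholder, not the actual list used in the study.

```python
# Rough sketch of the basic "in content" factors tested here, computed from a
# page's extracted text and its raw HTML. The bad-word list is a placeholder.
import re

BAD_WORDS = {"damn", "hell"}  # placeholder only, not the real list

def in_content_factors(text, keyword, html):
    words = re.findall(r"[a-z']+", text.lower())
    kw = keyword.lower()
    first_match = text.lower().find(kw)
    return {
        "keyword_in_content": first_match != -1,
        "distance_to_first_match": first_match,  # -1 if the keyword is absent
        "keyword_density": text.lower().count(kw) / max(len(words), 1),
        "contains_bad_word": any(word in BAD_WORDS for word in words),
        "has_image": "<img" in html.lower(),
        "has_video": "<video" in html.lower() or "youtube.com/embed" in html.lower(),
    }

print(in_content_factors("A damn good guide to tying a tie.",
                         "tie", "<html><img src='knot.png'></html>"))
```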

You are probably already aware of the general method for my correlation studies but if this is your first time here please read this and this.

Special thank you to Mike Tung of Diffbot for providing me with free access to their article API, which is undoubtedly the best text extraction service/algorithm out there. And as if to solidify that point, congrats to the team on their recent $2 million funding round.

Data

 

Chart: In Content Factors Correlation Data

This data is based on a dataset of over 1.2 million web pages from over 12,000 unique keywords and the correlations are derived from Spearman’s Rank Correlation Coefficient.

Analysis

Images and Videos

Having images and videos on a page is generally accepted to be good for the user. What I was interested in, was seeing whether this translated into increased Google ranking or not, and apparently it doesn’t.

Assuming Google likes pages with images and videos (a large but reasonable assumption), this is quite a good test of how low-level Google are willing to go to promote pages that are in line with what they want to see, i.e. are Google willing to reward these pages directly in the algorithm, or do they prefer to let the great PageRank algorithm identify what users like and want?

Page Contains Bad Words

This is another fascinating test: I checked whether a page contains any bad word from this list.

I thought that using these naughty words would surely relegate you from the search results or at least lower your ranking, but the data doesn’t support that.

Unfortunately there is a cautionary note. Google likely run a somewhat more advanced algorithm over content: instead of checking for just the presence of these bad words, they probably look at their likely intent and as a result either ban a page or give it no penalty.

A news article quoting a foul mouthed sports star shouldn’t be banned from the search results because of its harmless and informative intent.

Because of this, the only pages to show up in my dataset were those that weren't excluded from the search results and therefore likely received no penalty.

As a result it would be unfair to draw conclusions regarding Google’s implementation of bans/penalties towards pages using these bad words.

Is the Keyword Even in the Content?

Most of us would think that this test is a sure thing. Forget keyword density; surely having the keyword in the content of a page is absolutely vital to that page ranking well for that keyword. But again the data says otherwise.

How can this be? The most basic and obvious step for checking whether a page is about a keyword is to check whether the page contains that keyword; how else would Google narrow down their massive index into something more manageable?

Well there is anchor text, meta description, title tags and many other areas that Google may look at to check whether a page is about a keyword or not.

But what this astoundingly low correlation suggests is not only that Google likely doesn't implement such a factor (when ranking pages), but also that Google probably isn't using other super-advanced topic modelling algorithms, as most of these algorithms are based on the assumption that the keyword is in the content, and all of them are based on the assumption that there is textual content.

Distance to 1st Keyword Match

I was a little more sceptical about this factor correlating well, and rightly so. This old-school factor might have been in use in the days of AltaVista, but most of us would agree it's not so likely to be around any more.

Summary

While other topic modelling algorithms might correlate higher than the above factors, most of them are based on the simple assumptions that a page contains the keyword you are trying to model for and that there is textual content, which are dangerous assumptions for Google to make.

Nobody can make a blanket statement like, “Google don’t analyse what you write and don’t care what’s in the content of a page” but the data does point us in that direction.

If you are to accept the theory that Google don’t take such a low-level and specific view of pages or at least don’t weight such a view very highly then it is easy to come up with reasonable justification for that theory.

For example if Google takes such a low-level view then how does it understand infographics or video blogs? How will such an algorithm scale with the web as it evolves further into a multi-media and not just a textual internet?

I don't believe the data is in any way conclusive, and I do believe that other "clever" topic modelling algorithms may correlate well with ranking, but whether or not that means Google implements such factors within their algorithm is another debate.

What I will say is that I believe Google most likely take a much higher-level view of pages than we think, using links, social media, PageRank and other more scalable factors to determine the relevance and quality of a web page rather than looking at on page or in content factors.

As a result I would recommend that all webmasters create content, title tags and web pages with the user and not the search engine in mind and optimize for other far more scalable factors like link building, social media, page loading speed, etc.

Links – Huge Correlation Between Link Building and Google Ranking

Links have been an integral part of SEO since Google joined the scene.

But recently link building's popularity has taken a bit of a hit, with many believing that Google has reduced the weighting of PageRank in the algorithm. The emergence of social signals and other factors indicating user satisfaction has, according to many within the industry, eclipsed (or will in the future eclipse) links as the primary ranking factor.

But this speculation hasn't been mirrored in my data. Over the course of this post we will examine over 40 link related factors, all of which correlate very well, a number of them being among the most highly correlated factors in my study.

The main finding from this data, is how well links correlate to ranking in Google. I have tested over 150 potential ranking factors in 6 categories and without a doubt, links stand head and shoulders above any other section of factors.

Link building is a bit of an ugly duckling within the industry: everybody knows its importance, but very few are effective in its practice.

Unlike changing title tags, building quality links requires skill, creativity and determination. It's not easy work and it's not the low-hanging fruit, but based on the data below, it appears to be the most rewarding.

While I won’t discuss link building strategies in this post, I would like to mention that I feel many strategies are extremely inefficient and unproductive and a lot of the theory behind this area of SEO is fundamentally flawed. I will be publishing some more of these ideas, with anecdotal evidence in the future.

The project

The below data is based on a dataset of the top 100 results in Google, for 12,573 keywords.

I have analysed this data using Spearman’s Rank Correlation Co-efficient, looking for relationships between individual factors and ranking in Google.

I have already published some of the results from the study including domain name related factors, on page factors and domain authority signals.

This is all part of a greater project to bring more science to SEO and make it a truly data driven industry.

There are inherent issues with correlations and they don't prove anything per se, but as I have covered these issues before I won't rehash old information. What I will suggest is that if this is your first time on the site, please read this and this.

I would like to thank SEOMoz for providing incredible access to both their amazing Mozscape API, from which the below results are derived and their expertise and advice. In particular I’d like to thank Rand Fishkin, Dr. Matt Peters and the API support team for all their help.

Data

This Excel Spreadsheet provides the keyword by keyword correlation figures from which the above mean correlations are derived.

Breakdown

Google’s algorithm doesn’t just look at how many links there are to a page, it looks at quality signals, website authority indicators and tries to protect against manipulation.

Basically, just building links isn’t good enough, there are certain kinds of links that are better than others.

Below I have covered the types and areas of link building that are thought to be utilised within the algorithm.

General Links

The correlations for general links, as compared to specific counts such as # of IPs/Cblocks/domains/subdomains, are significantly lower.

This supports the idea that Google looks at several factors and classifiers when considering the quality of the source of a link.

While this certainly isn't an interesting finding in itself, it is important in that such a conclusion supports a known fact, and therefore increases the likelihood that the data gathered and the resulting correlations are correct and do represent what's actually happening within the Google algorithm.

I investigate which particular classifiers and types of links would be best in a link profile, below.

Cblocks and IPs

 

Both the number of unique Cblocks and IPs linking to a site are thought to indicate the diversity of a link profile.

Google want to see a variety of sites “voting” for a website’s content. The weighting of each additional link from the same site is reduced relative to a link from a new source.

Knowing this, many webmasters began to build "lens sites" whose sole goal was to link to the mother site.

It is believed, that to counter this Google implemented an algorithm that could figure out if a link was coming from the same source (i.e. the same webmaster) as the site that was being linked to.

There are a number of factors that Google likely use in such an algorithm, but it would make sense that Google treat links coming from the same IP or Cblock as more likely to be coming from the same webmaster, and thus marginally less trustworthy.

While the data doesn’t prove or disprove this theory, it does show a higher level of correlation for the # of Cblocks/IPs linking than for a general count of the # of links to a page/site/subdomain. Although the difference is small it could support the above theory.

With this data and using some common sense, I would recommend following the current industry practice of building a diversified link profile.
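
For anyone unfamiliar with the metric, here's a quick sketch of how unique linking IPs and Cblocks can be counted, a Cblock being the first three octets of an IPv4 address. The example IPs are made up.

```python
# Quick sketch of counting unique linking IPs and Cblocks (the first three
# octets of an IPv4 address). The example IPs are invented.
def c_block(ip):
    return ".".join(ip.split(".")[:3])

linking_ips = ["203.0.113.5", "203.0.113.9", "198.51.100.1",
               "198.51.100.2", "192.0.2.77"]

unique_ips = set(linking_ips)
unique_c_blocks = {c_block(ip) for ip in linking_ips}

print(len(unique_ips), "unique linking IPs")        # 5
print(len(unique_c_blocks), "unique linking Cblocks")  # 3
```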

Domains and Subdomains

 

Again the above data further enhances the argument for a diversified link profile.

It also shows a potentially interesting, albeit small, difference between the # of unique domains vs. subdomains linking, with the # of unique domains coming out on top.

While the difference is too small to draw a concrete conclusion, such data would certainly point us towards building links from a diversified set of domains, treating subdomains on the same root domain as related to each other, and therefore treating each additional link from a separate subdomain on the same root domain as slightly less valuable than the link before it.

Links to the page

The above data conforms to the seemingly obvious conclusion that if you want to get a page to rank well, then building links directly to that page is the best way to get that to happen.

While most SEOs will find that stupidly basic, I have seen some SEOs suggesting that domain level links would be more powerful or a better use of time. The data just doesn't support that strategy if you are trying to increase the ranking of a specific page.

Links to the page’s domain vs. subdomain

 

Interestingly, the strong performance of domains vs. subdomains as the source of a link is not matched in the location/target of a link. If we are to believe that such marginal differences are important, then the data may suggest (as a number of industry watchers have stated) that Google treats subdomains as separate from the root domain when looking at the host's (which could be the domain or subdomain) authority.

This seems strange, and I may be reading too much into the data, but if the above statement were the case, then Google's treatment of subdomains as separate sources of content would not be matched by their treatment of subdomains on the same root domain as essentially the same source of links.

If such a conclusion were to be made, it would most likely be explained away by the likelihood that Google doesn't just look at whether it's a subdomain or not, and likely uses much more advanced algorithms to figure out whether a subdomain should be considered part of the same domain.

Thus Google would understand that blogname.wordpress.com is not related to wordpress.com but blog.exampledomain.com is related to exampledomain.com.
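
To illustrate the kind of logic involved, here's a toy heuristic for that distinction. The shared-host list is a tiny placeholder and Google's actual method is unknown.

```python
# Toy heuristic for the distinction described above: treat a subdomain as part
# of its root domain unless the root is a known shared-hosting provider.
# The provider list is a small placeholder; Google's real method is unknown.
SHARED_HOSTS = {"wordpress.com", "blogspot.com", "tumblr.com"}

def link_source(hostname):
    parts = hostname.lower().split(".")
    root = ".".join(parts[-2:])
    if root in SHARED_HOSTS and len(parts) > 2:
        return hostname  # blogname.wordpress.com stands on its own
    return root          # blog.exampledomain.com collapses to exampledomain.com

print(link_source("blogname.wordpress.com"))  # blogname.wordpress.com
print(link_source("blog.exampledomain.com"))  # exampledomain.com
```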

Nofollow vs. Followed

 

Here is a classic case of inter-related factors impacting each other's correlations. We know that nofollowed links carry no direct SEO benefit, although they may result in some other factors being impacted, e.g. someone clicks on a nofollow link and then shares the page on Twitter.

A page with a lot of nofollow links pointing to it, is far more likely to have a lot of followed links pointing to it.

This is because different types of links tend to hold fairly standard ratios within a link profile, and any deliberate alteration by a webmaster is only likely to result in a small shift in those ratios.

There are many inter-factor relationships going on in the above data. Nofollow links may indeed carry no search engine benefit, but could still show the strong correlations, as above.

Marginal differences in the correlations shown by different categories of links, e.g. followed vs. nofollowed, may be more important than they appear at face value.

This is why I have read a lot into such small differences.

SEOMoz Metrics

SEOMoz have created a number of algorithms that are meant to mimic Google's link related algorithms. I don't know the exact make-up of these algorithms, but I thought it would be interesting to test their performance, to check whether using these metrics as a measure of the success of your link building is a good idea.

If you are interested, here’s the general make-up of these algorithms: MozTrust, MozRank, Domain Authority, Page Authority.
 

Wow! Moz really seem to have done a great job developing their algorithms. In keeping with the above data on the value of page level metrics, Page Authority comes out at an astounding .36 correlation, which is massive, making it the highest correlated factor out of the 150+ I have tested.

Comments

The link related data is, in my opinion, on par with the on page factors as the most interesting and important to the SEO industry. Both lead to the same conclusion: on page factors are far less important than off page factors.

Links aren’t just about SEO

Building links isn't just an exercise in SEO, it's also an exercise in marketing. Links can drive a lot of direct traffic from people clicking on them and can also build your brand name.

It's important to factor the direct traffic value of links into your link building decisions. This is particularly evident where a second, third or fourth link from the same site may seem like a step down in SEO importance but may still provide high value direct traffic.

Links aren’t dead!

If I read another article proclaiming PageRank or link building is dead, I'll scream. It's very simple: the scientific data simply does not support the speculative claims of the reduced value of PageRank or link building.

In fact in many cases their level of correlation has increased, not decreased since Moz conducted their 2011 study.

Link related factors are far and above the highest correlated set of factors.

While we in the SEO industry recognise the importance of links, I don't think we convert this mental idea into action. I don't believe that SEOs spend the right proportion of their time on link building. And SEO blogs, conferences and experts certainly don't talk enough about how to do great link building.

There definitely isn’t enough data available on what the best link building strategies are, with the majority of link related blog posts stemming from speculation, not data driven proof, something I hope to address scientifically through this project.

I welcome presentations like this from Mike King, that back up strategies with solid data.

Bottom line – spend a whole lot more time link building.

Domain Authority

New Correlation Data Suggests Enhanced Importance of Site Wide SEO

 

SEOs are huge believers in signals relating to Google's overall perception of a website.

It makes a lot of sense: if Google can understand that Wikipedia's articles are typically of a higher standard than eHow's, then they can make better decisions on the quality and relevance of web pages on these domains.

By using this data, search engines can also make quick decisions regarding new content published by these sites. This fresh content won't have gained the links and other time-related ranking signals of an established article, but may still be relevant to the user. This may be particularly true with news or "query deserves freshness" results.

In addition to gathering data that might indicate the quality of content published on the site, it is thought that Google gathers data on what geographical location, type of user, industry, etc the site targets. Much of this data is difficult or in many cases impossible to gather without being Google, for example a site’s average SERP CTR or bounce rate.

Overall it would be fair to say that Google utilises different models to gather and analyse domain level data pointing to the authority of a website as a whole.

The potential value of domain level factors to the webmaster is immense. If you make a single site-wide improvement, it may impact the ranking of several thousand pages on the site. Domain level SEO offers easy to implement strategies that can hold a much higher ROI than page by page factors.

What data is collected by Google and how much influence it has in the overall ranking of a web page has been theorised and debated for many a year.

Overall what we will see in this article is that domain authority signals are relatively highly correlated, and that for the most part, many of the industry’s theories surrounding these factors have largely been correct, which is refreshing in light of some stunning on page factors’ correlation data.

The study

Over the past 2 months I have gathered data on 31 domain authority signals, for the top 100 results in Google, for 12,573 keywords.

I have analysed this data using Spearman’s Rank Correlation Co-efficient, looking for relationships between individual factors and ranking in Google.

I have also studied several other areas of SEO. I have published some of these results (including domain name related factors and on page factors) although some results haven’t been made public yet and will be published over the coming weeks.

This is all part of a greater project to bring more science to SEO and make it a truly data driven industry.

There are inherent issues with correlations and they don't prove anything per se, but as I have covered these issues before I won't rehash old information. What I will suggest is that if this is your first time on the site, please read this and this.

I would like to thank Link Research Tools for generously providing me with free access to their highly useful API from which all the below correlations are derived.

Please note: while domain level link metrics could be included in this post I have decided to deal with all link related factors in a separate post which will be published in the near future.

Data

Chart: Domain Authority Signals

If you wish to see the keyword by keyword correlations that resulted in the mean correlations reported above, feel free to download this spreadsheet with all the relevant data.

Definitions

Here are some handy definitions in case you aren't sure what some of the above factors are:

  • Domain age is the time since the domain was first registered.
  • PageSpeed rating is Google's score out of 100 for how well a page performs on several indicators of how quickly it loads. The higher the score, the faster the performance.
  • Days to domain expiry is the time until the domain expires or needs to be re-registered.
  • Alexa and Compete rank are both independent measures of how much traffic a site gets. The lower the score, the more traffic the site is supposedly getting.
  • Basic, intermediate and advanced reading levels are Google's measures of what reading standard a given page is at.

 

Trust indicators

Chart: Domain Trust Indicators

Google are always trying to figure out how trustworthy a site and its content is. Many theories have emerged as to what factors likely impact the trustworthiness of a whole site.

Domain age is a classic, and while I personally am sceptical about its use as a direct ranking factor, it does seem to have a strong relationship to ranking well in Google, with a near 0.2 correlation, which is highly significant.

How much of this can be written off due to the extra time established sites have had to build links and content, and of course the common-sense point that a site running for a significant length of time will only have survived by providing for users' needs, is hard to determine. Domain age is a factor that's impossible to manipulate, only worth considering in the procurement of a new web property.

But by saying that it's impossible to manipulate, I am strengthening the case for Google's use of the factor. So the truth is, it's difficult to say whether it's a factor or not. It does correlate well, so I would suggest that if you come across a situation where domain age is being considered, give it some, but not substantial, weight in whatever decision you are making.

Homepage PageRank, and PageRank in general is one of the most hotly debated topics on the SEO circuit. We all know of the PageRank Toolbar’s problems and unrepresentative view of the real PageRank Google calculates and uses within their algorithm.

But at the same time, the social data Google may pull from APIs may be more complete than the data I have access to, and the internal Google link graph is even larger than the gigantic SEOMoz link graph, yet we treat these representations of what Google sees as perfectly good.

My point is not that social data and link counts should be disregarded but that perhaps some, if not all of our suspicion at the value of PageRank as a metric is misplaced.

The importance of PageRank is backed up in its mighty performance in the correlation study, the highest correlated domain level authority signal at .244.

This and data on domain level link metrics which I will be publishing in the coming weeks has solidified my view that Google certainly weights and utilises domain link popularity in the ranking of content on a site.

Thus it is reasonable to recommend the already popular theory of building links to the homepage and domain as a whole.

Whether homepage link building warrants special treatment, is dubious and I would in general advise a strategy of building links to a domain as a whole, linking to the homepage only when it feels right and not because of any particular strategy.

Days to domain expiry is an intriguing idea: that how far into the future a webmaster registers a domain is an indicator of the webmaster's intent to create a long-term user resource.

The marginal correlation of .089 probably suggests it has minimal to no weight within the algorithm. That said, it is an easy and inexpensive factor to manipulate, and even a marginal boost in search engine performance would be worth the puny risk.

There have been theories in the past which suggest its importance to newly registered sites, which again complies with basic common sense.

I can recommend registering your domain for 3+ years as a simple, one time, SEO strategy that may or may not impact ranking but certainly has no significant downside.

Site size

Chart: Site Size Correlation Data

Alexa and Compete rank: I doubt whether the amount of traffic a site gets is a ranking factor, but its significant correlation may be indicative of a deeper positive correlation from Google towards larger sites.

Whether this is due to ranking factors in favour of larger sites, these sites performing better in non-discriminative factors or something else is worth pondering.

What I will say is that in general sites are large because they are useful to users, and it's a search engine's job to try to find sites that are helpful and useful for users.

The same logic should track for the number of pages in Google's index of a site; while this is highly unlikely to be a direct ranking factor, it is perhaps an indicator of other factors actually implemented in the algorithm.

If the data is taken at face value, then it would appear somewhat surprising that larger sites are performing worse, although the reliability of Google's provision of this data appears to have impacted results.

I would like to test this factor and other similar indicators further before drawing a definite conclusion.

Geographic targeting

Chart: IP Location of Web Server

The near random correlations for the geographic location of host servers is not surprising and in fact not very interesting at all.

I tested it purely to check whether there was any significant correlation, but I didn't expect one, as the searches from which these correlations are drawn were conducted on Google.com.

The theory of geographic targeting is largely purported to be in use in non-US countries. In the future I hope to conduct studies on non-US versions of Google and recheck this factor, but for the meantime the data is inconclusive and the current theories within the industry on server location should be followed.

Reading Levels

Chart: Homepage (Google) Reading Levels

The data here is somewhat flawed, in that Link Research Tools didn't return data for this factor on a significant number of domains, and homepage reading levels may not be the same as page level reading levels, but the idea and the testing of such a factor is very interesting.

It is something that I believe Google to be using as a factor in the personalisation of search results. For example, if they have figured out you are an eight year old, then maybe you don't want Shakespeare or research papers returned; you want content written in the language that you as an eight year old use. Not to mention the fact that not many eight year olds are searching for "Macbeth" or "quantum physics".

A broad correlation study is not conducive to making a recommendation on what language you as a webmaster should use, but it is an interesting topic and something that you should consider when you are writing. Who are your audience and are you writing in their language?

Registrar

Chart: Domain Registrars

This was a rather cheeky test, and was never likely to reveal a ranking factor, more likely to represent the success achieved by sites registered through the above registrars.

I wasn’t surprised to see GoDaddy with the worst correlation as its add-on products and the clientèle don’t quite indicate quality or high editorial standards, not that many registrars do.

Once you understand and are disciplined with your implementation of SEO and general website ownership standards and strategies, then the registrar you choose shouldn't impact your ranking. But if you are new to the game or likely to be led astray, then a registrar and host that promotes these standards may prove a more fruitful path.

Miscellaneous

Chart: Other Domain Authority Signals

The PageSpeed rating is important: it suggests that if a site follows good principles with regard to the loading of content, it will be rewarded with higher rankings. Tests on a page by page basis would be even more conclusive, but this reasonably high correlation for homepage level PageSpeed vindicates some of the excitement generated by Google announcing it uses site loading speed in rankings.

The incredibly large correlation for both total and nofollowed external links on the homepage of a site is puzzling to say the least, although the internal data seems more explainable.

While I have some ideas on what may be causing such large correlations, primarily surrounding the type of site that would link to another website from its homepage, I have no real explanation. If you have an idea, guess or have experienced this in the field then please leave a comment below the post.

Social metrics

Chart: Homepage Level Social Media Metrics

Wow! I saved the best till last.

Some super interesting social media correlations, with the general theme being that social media is really important.

The fact that Facebook and Google + links to the homepage of a site are the lowest correlated of the bunch is rather strange. The Facebook data could be explained by a possible block on Google accessing FB data. But Google Plus?

Perhaps this indicates that homepage social media shares are not used as a ranking factor, but that the other social networks have such strong user bases recommending quality content that these share counts actually represent a measure of the quality of the site as a whole, hence explaining the high correlation.

Also, the fact that Google+ has a relatively small user base may mean that its disruptive influence on other factors, such as links arising from the additional traffic sent to a site by high levels of sharing on Google+, is minimised.

Another explanation is that Google is using Digg, Reddit and StumbleUpon data more than we know about and we should focus more effort on these social networks and Twitter.

But again I'm not certain what these correlations mean; if you have any ideas on these correlations, or you have seen Reddit, Digg or StumbleUpon marketing result in increased rankings for your site, then please leave a comment below.

Further study of these factors on a page level basis would tell us more about these speculations.

Summary

The correlations for domain level authority signals are comparatively higher than those seen by on page factors.

Domain level factors are ideal starting points for an SEO and often provide a one time, easy change that could, based on the above results, have a substantial impact on ranking.

Even if you disregard the individual factors above as ranking signals, it would still be more than fair to conclude that domain level SEO is very powerful and you should be constantly trying to improve the domain, through site-wide enhancements.

Some of the results, in particular the social and homepage links are somewhat puzzling and I am looking forward to hearing what people think are the likely causes of such strong correlations.

I will be publishing the link related domain authority factors in the coming weeks, so stay tuned.