Links, PageRank and the Semantic Web – A Prediction
Most bloggers do their own SEO, often in combination with plug-ins such as All in One SEO Pack or AVH First Defense Against Spam.
Still, it behooves each of us to have at least a working knowledge of what the search engines are looking for, and to try to find a balance between that and what our readers are looking for.
Even for those who work full-time in SEO, that can be a challenge. Search algorithms are constantly changing, the “rules” seem to change nearly as often, and the techniques that are most effective vary from one day to the next.
One of the things that seems to confound many is PageRank.
We’ve discussed here before the vast difference between actual PageRank and toolbar PageRank, so I won’t go into that again now. But part of it does lead into the topic at hand.
Toolbar PageRank was last updated in April of this year.
This marks the longest gap between updates that Google has ever shown us. Some are expecting to see the traditional New Year update, but personally, I think we may have already seen the last TBPR update.
The reason I believe that to be the case is that I think Google has been working for some time on developing an alternative to pagerank… something that can’t be so easily “gamed”.
PageRank is determined solely by the quantity and quality of inbound links, and as such, it has directly spawned such practices as link buying, link wheels and link networks. It has also indirectly given birth to PageRank sculpting.
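For anyone curious about the mechanics, the published version of the algorithm boils down to one idea: a page’s rank is fed by the ranks of the pages linking to it, split across their outbound links. Here’s a minimal sketch of that computation (a toy graph and the textbook power-iteration method; obviously not Google’s production system):

```python
# Toy power-iteration sketch of the published PageRank idea.
# d = 0.85 is the damping factor from the original Brin/Page paper.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with a uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                     # dangling pages: simplified away here
                continue
            share = d * rank[page] / len(outlinks)
            for target in outlinks:              # each outbound link passes an equal share
                new_rank[target] += share
        rank = new_rank
    return rank

# Three pages linking to one another:
print(pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]}))
```

Notice that nothing in there looks at a page’s content. The score comes entirely from the link graph, which is exactly why manufacturing links pays off.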
In short, it has promoted link spam, which is something that all bloggers have to deal with.
My theory is that if Google came up with an alternative method of ranking pages that didn’t depend solely upon inbound links, they’d go a long way toward removing the incentive for link spamming.
There are some bright folks at the Googleplex… I’m sure they thought of it long before I did.
PageRank, however, was a very well-thought-out, well-designed algorithm. It has certainly played an important positive role in the growth of the Internet, and supplanting it with something else would imply a great deal of change to the way SEOs and site owners judge their performance.
So what could it be replaced with?
Again, my theory… nothing. At least, nothing fixed.
I think of PageRank’s replacement as more of a relevancy rank.
Where PageRank is one of many factors that enter into a page’s search engine ranking, relevancy rank (RR?) would actually be the end result. It would vary, probably greatly, from one search query to another.
Relevancy rank would be computed “on the fly” by a somewhat expanded algorithm that would judge a page’s relevancy to a specific search query and assign it a SERP ranking based upon that evaluation.
There’d be no indicator bar on the toolbar, because there would be no fixed rank assigned.
Part of that transition would be to greatly devalue the impact of incoming links.
They could theoretically be devalued totally, but I really doubt that will happen.
I think there will always be a call for links, and I suspect they will still play some minor part in the relevancy evaluation. But greatly reducing their impact will effectively remove the incentive for much of the link spam that exists today.
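To make that concrete, here’s a purely hypothetical sketch of what a per-query relevancy score might look like if links were demoted to a minor signal. Every signal and weight below is invented for illustration; nobody outside Google knows what their algorithm actually weighs:

```python
# Speculative sketch of a per-query "relevancy rank": computed at
# serve time, with links demoted to one minor signal among several.
# All signals and weights here are hypothetical.

def relevancy_rank(content_relevance, freshness, link_authority):
    """Each input is a 0-to-1 score for one page against one query."""
    # Content relevance dominates; links keep only a small say.
    return 0.7 * content_relevance + 0.2 * freshness + 0.1 * link_authority

# Two hypothetical pages competing for the same query:
print(relevancy_rank(0.9, 0.6, 0.1))  # relevant page, few links:  ≈ 0.76
print(relevancy_rank(0.4, 0.5, 0.9))  # link-heavy, less relevant: ≈ 0.47
```

The point of the toy numbers: the highly relevant page with almost no inbound links outranks the link-heavy but marginally relevant one, so the payoff for manufacturing links largely disappears.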
When you think about what Google already accomplishes in mere milliseconds with their algorithms, I really don’t think it’s a big stretch to think they’re capable of accomplishing this.
And they (and the other search engines) are getting closer and closer to achieving the Semantic Web capability that Tim Berners-Lee envisioned when he made this statement in 1999:
“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.”
Step for a moment into a sci-fi scenario.
You sit down at your desk to make some calls, but first you tell your computer to compile a list of references for homestead law in Harris County, Texas. As your intelligent agent, your computer will set out to search for all possible connections to homestead law in Harris County, TX, and while you’re chatting with your business associates, it’ll prepare a document for you listing every reference it encounters on the Internet that matches your needs.
A few years ago, you’d have assigned that task to your secretary or a research assistant, who would have used their computer to sift through tens of thousands of documents, trying to select those that were appropriate, weeding out the duplicates and compiling a list for you.
Depending upon what other tasks they had to accomplish, that might take a couple of days. Your computer could probably accomplish the same thing in well under a minute.
You could further direct your computer to keep tabs on the results and alert you of any changes. A human assistant would have to go through the entire process again, plus identify and list the changes for you.
This is an extremely simple example.
If you needed to have a great deal more detail, your computer might need a few more milliseconds to deliver the results, while your human assistant might need a couple more days.
I’m sure you can easily see the difference in efficiency and time consumption.
The truth about predictions
Like most things regarding the search engines, we know very little about what they are doing or how they go about doing it. Even reports of their past actions may be limited to the point of being questionable.
Consequently, trying to predict what the search engines will do next is, at best, a guessing game.
I simply try to look at the situation as I see it and identify what I feel is the most logical path for the search engines to follow. You may have other ideas, or perhaps see a hole in my logic.
I’d love to hear your point of view, either way.
Doc Sheldon is a retired business management consultant, turned perpetual student of SEO. He has been studying SEO for a little under five years, and writing professionally for over thirty. You can see more of his writing on SEO on his website, Doc Sheldon’s SXO Clinic, and his blog, Ramblings of a Madman.