Friday, September 21, 2012

Rank Modifying Spammers

My good friend Bill at SEOByTheSea has unearthed a Google patent that will likely raise some eyebrows and confirm the suspicions of others.

The patent is called Ranking Documents. When webmasters alter a page, or links to a page, the system may not respond immediately to those changes. Rather, the system may change rankings in unexpected ways.

A system determines a first rank associated with a document and determines a second rank associated with the document, where the second rank is different from the first rank. The system also changes, during a transition period that occurs during a transition from the first rank to the second rank, a transition rank associated with the document based on a rank transition function that varies the transition rank over time without any change in ranking factors associated with the document.

Further:

During the transition from the old rank to the target rank, the transition rank might cause:
  • a time-based delay response,
  • a negative response,
  • a random response, and/or
  • an unexpected response.

So, Google may shift the rankings of your site, in what appears to be a random manner, before Google settles on a target rank.
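
To make the mechanism concrete, here is a minimal sketch of what such a rank transition function might look like. The patent publishes no formula, so the function name, parameters, and response curves below are all illustrative assumptions, not Google's actual code:

    import random

    # A sketch of a rank transition function per the patent's description.
    # Lower rank number = better position. All parameter choices here are
    # illustrative assumptions; the patent discloses no actual formula.

    def transition_rank(old_rank, target_rank, t, duration, mode="random"):
        """Rank displayed at time t (0..duration) while moving from
        old_rank to target_rank, for one of the patent's response types."""
        progress = min(t / duration, 1.0)
        if progress >= 1.0:
            return target_rank  # transition over: settle on the target rank
        if mode == "delay":
            # Time-based delay: hold the old rank, then snap to the target late.
            return old_rank if progress < 0.8 else target_rank
        if mode == "negative":
            # Negative response: move the "wrong" way first, then settle.
            if progress < 0.5:
                return old_rank + (old_rank - target_rank) * 0.5
            return target_rank
        if mode == "random":
            # Random response: jitter along the path to obscure cause and effect.
            base = old_rank + (target_rank - old_rank) * progress
            return base + random.uniform(-5, 5)
        return target_rank  # unexpected/default: apply the target immediately

    # A link campaign "should" move a page from rank 40 to 10 over 30 days,
    # but the displayed rank wanders before settling on the target.
    for day in range(0, 31, 5):
        print(day, round(transition_rank(40, 10, day, 30)))

Under the "random" mode, early movement tells you nothing about the eventual target rank, which is exactly the cause-and-effect blindness described above.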

Let's say that you're building links to a site, and the site moves up in the rankings. You would assume the link building has had a positive effect. Not so if this patented system is active, as your site may already have been flagged.

Google then toys with you for a while before sending your site plummeting to the target rank. This makes it harder to determine cause and effect.

Just because a patent exists doesn't mean Google is using it, of course. This may just be another weapon in the war-of-FUD, but it sounds plausible and it's something to keep in mind, especially if you're seeing this type of movement.

The Search Engine As Black Box

In ancient times (the 1990s), SEO thrived because search engines were stupid black boxes. If you added some keywords here and a few links there, the black box would respond in a somewhat predictable, prescribed fashion. Your rankings would rise if you guessed what the black box liked to "see", and plummet if you did too much of it!

Ah, the good old days.

These days, the black box isn't quite so stupid. It's certainly a lot more cryptic. What hasn't changed, however, is the battle line drawn between webmasters and search engines as they compete for search visitor attention.

If there are any webmasters still under the illusion that Google is the SEO's friend, that must be a very small club, indeed. Google used to maintain a - somewhat unconvincing - line that if you just followed their ambiguous guidelines (read: behaved yourself), then they would reward you. It was you and Google on the good side, and the evil spammers on the other.

Of late, Google appear to have gotten bored of maintaining any pretense, and the battle lines have been informally redrawn. If you're a webmaster doing anything at all that might be considered an effort to improve rank, then you're a "spammer". Google would no doubt argue this has always been the case, even if you had to read between the lines to grasp it. And they'd be right.

Unconvinced?

Look at the language in the patent:

The systems and methods may also observe spammers' reactions to rank changes caused by the rank transition function to identify documents that are actively being manipulated. This assists in the identification of rank-modifying spammers.

"Manipulated"? "Rank-modifying spammers"? So, a spammer is someone who attempts to modify their rank?

I've yet to meet a webmaster who didn't wish to modify their rank.

Google As A Competitor

Google's business model relies on people clicking ads. In their initial IPO filing, Google identified rank manipulation as a business risk:

We are susceptible to index spammers who could harm the integrity of our web search results. There is an ongoing and increasing effort by "index spammers" to develop ways to manipulate our web search results.

It's a business risk partly because the result sets need to be relevant for people to return to Google. The largely unspoken point is that Google wants webmasters to pay to run advertising, not get it for "free", or hand their search advertising budget to an SEO shop.

Why would Google make life easy for competitors?

The counter-argument has been that webmasters provide free content, which the search engines need in order to attract visitors in the first place. However, now that relevant content is plentiful, that argument has weakened. Essentially, if you don't want to be in Google, then block Google. They won't lose any sleep over it.

What has happened, however, is that the incentive to produce quality content, with search engines in mind, has been significantly reduced. If content can be scraped, ripped off, demoted, and merely used as a means to distract the search engine user enough to maybe click a few search engine ads, then where is the money going to come from to produce quality content? Google may be able to find relevant content, but "relevant" (on-topic) and "quality" (worth consuming) are seldom the same thing.

One content model that works in such an environment is content that is cheap to produce. Cheap content can be quality content, but like all things in life, quality tends to come with a higher price tag. Another model that works is loss-leader content, but then the really good stuff is still hidden from view, and it's still hard to do this well unless you've established considerable credibility - which is still expensive to do.

This is the same argument the newspaper publishers have been making. The advertising doesn't pay enough to cover the cost of production and make a profit - so, naturally, the winner in this game cuts production cost until the numbers do add up. What tends to be sacrificed in this process is quality.

NSFW Corp, a new startup by ex-TechCrunch writer and Guardian columnist Paul Carr, has taken the next step. They have put everything behind a paywall. There is no free content. No loss-leaders. All you see is a login screen.

Is this the future for web publishing? If so, the most valuable content will not be in Google. And if more and more valuable content lies beyond Google's reach, then will fewer people bother going to Google in the first place?

The Happy Middle

Google argue that they focus on the user. They run experiments to determine search quality, with quality as judged by users:

Here's how it works. Our engineers come up with some insight or technique and implement a change to the search ranking algorithm. They hope this will improve search results, but at this point it's just a hypothesis. So how do we know if it's a good change? First we have a panel of real users spread around the world try out the change, comparing it side by side against our unchanged algorithm. This is a blind test - they don't know which is which. They rate the results, and from that we get a rough sense of whether the change is better than the original. If it isn't, we go back to the drawing board. But if it looks good, we might next take it into our usability lab - a physical room where we can invite people in to try it out in person and give us more detailed feedback. Or we might run it live for a small percentage of actual Google users, and see whether the change is improving things for them. If all those experiments have positive results, we eventually roll out the change for everyone.
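
As a toy illustration of the blind side-by-side step described in that quote, here's a sketch of how rater preferences might be tallied. The function and rater model are invented for illustration; Google's actual experiment pipeline isn't public:

    import random

    # Toy sketch of a blind side-by-side test: each rater sees two result
    # sets in random order with no labels and picks the better side.
    # Everything here is an illustrative assumption, not Google's pipeline.

    def blind_side_by_side(control, experiment, raters):
        wins = {"control": 0, "experiment": 0}
        for rate in raters:
            sides = [("control", control), ("experiment", experiment)]
            random.shuffle(sides)  # blind: the rater can't tell which is which
            (left_name, left_results), (right_name, right_results) = sides
            choice = rate(left_results, right_results)  # "left" or "right"
            wins[left_name if choice == "left" else right_name] += 1
        return wins

    # Example: five raters who simply prefer the side with more results.
    raters = [lambda a, b: "left" if len(a) >= len(b) else "right"] * 5
    print(blind_side_by_side(["r1", "r2"], ["r1", "r2", "r3"], raters))
    # e.g. {'control': 0, 'experiment': 5} -> the change looks better

The shuffle is what makes the test blind: the rater expresses a preference between two unlabeled result sets, and only the experimenter maps the sides back to algorithms.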

Customer focus is, of course, admirable, but you've got to wonder about a metric that doesn't involve the needs of publishers. If publishing on the web is not financially worthwhile, then, over time, the SERPs will surely degrade in quality as a whole, and users will likely go elsewhere.

There is evidence this is already happening. Brett at WebmasterWorld pointed out that there is a growing trend amongst consumers to skip Google altogether and head for Amazon, and other sites, directly. Amazon queries are up 73 percent in the last year.

There may well be a lot of very clever people at Google, but they do not appear to be clever enough to come up with a model that encourages webmasters to compete with each other in terms of information quality.

If Google doesn't want the highest quality information increasingly locked up behind paywalls, then it needs to think of a way to nurture and incentivise the production of quality content, not just relevant content. Tell publishers exactly what content Google wants to see rank well and tell them how to achieve it. There should be enough money left on the table for publishers - i.e. less competition from ads - so that everyone can win.

I'm not going to hold my breath for this publisher nirvana, however. I suspect Google's current model just needs content to be "good enough."

Source: http://www.seobook.com/rank-modifying-spammers
