Archived posts from the 'Nofollow' Category

Does Adam Lasnik like Rel=Nofollow or not?

Spotting the headline “Google’s Lasnik Wishes ‘NoFollow Didn’t Exist’” I was quite astonished. My first thought was “logic can’t explain such a reversal”. It turned out to be something of a blog hoax.

Adam’s “I wish nofollow didn’t exist” put back in context clarifies Google’s position:

“My core point […] was that it’d be really nice if nofollow wasn’t necessary. As it stands, it’s an admittedly imperfect yet important indicator that helps maintain the quality of the Web for users.

It’d be nice if there was less confusion about what nofollow does and when it’s useful. It’d be great if we could return to a more innocent time when practically all links to other sites really WERE true votes, folks clearly vouching for a site on behalf of their users.

But we don’t live in perfect, innocent times, and at Google we’re dedicated to doing what it takes to improve the signal-to-noise ratio in search quality.”

I like the “admittedly imperfect” piece ;)


Is Google going to revamp the rel=nofollow microformat?

I’ve asked Adam Lasnik, Google’s search evangelist:

Adam, what is Google’s take on extending the nofollow functionality by working out a microformat that covers the existing mechanism without being as unclear and confusing, and which takes care of similar needs, like section targeting at the element level and qualified votes, as well?

and he answered

Sebastian, nothing’s set in stone. Stuff is likely to evolve :)

That’s an elating signal, thank you Adam. And it leads to a bunch of questions.

Will Google continue to cook nofollow into its secret sauce, revealing morphed semantics (affiliate links), unpopular areas of application (paid links) and changed functionality (no longer fetching the linked resource) every now and then? Judging from Google’s ongoing move toward candor, I guess not.

Will Google gather a couple of search companies to work out a new standard? I hope not; it would be a mistake not to involve content providers, webmasters, publishers, CMS vendors, even SEOs and opinion makers again.

Will Google ask for input? Will the process of defining a standard for micro crawler directives be an open and public discussion? Are we talking about an extended microformat, limited to the A element’s rel and rev attributes, or does Google think of a broader approach covering, for example, section targeting and other crawler directives in class attributes at the block level too? Will a new or more powerful standard interfere with other norms, or with drafts like the not yet comprehensive microformat (also badly named, because it covers inclusion too)? By the way, the links above lead you to interesting thoughts on the reach, functionality and implementation of an extended norm replacing nofollow, and I, like many of you, have a couple more ideas and concepts in mind.

I take Adam’s tidbit as a call for participation. Dear no-to-nofollow sayers and nofollow supporters out there, join the crowd at the whiteboard! Throw in your thoughts, concepts, wishes and ideas.

In the meantime, make use of this catalogue of do-follow plugins.


Say No to NoFollow Follow-up

[Image: Say NO to NOFOLLOW - copyright jlh-design.com]

I don’t want to make this the nofollow blog, but since more and more good folks no longer love the nofollow beast, here is a follow-up on the recent nofollow discussion. Follow the no-to-nofollow trend here:

Loren Baker posts 13 very good reasons why rel=nofollow sucks. He got dugg and buried, but received tons of responses in the comments, where most people state that rel=nofollow was a failure judging by the current amount of comment spam, because spammers spam for traffic, not link love. Well, that’s true, but rel=nofollow at least nullifies the impact that spamming of unmoderated blogs had on search results, says Google. Good point, but is it fair to penalize honest comment authors by nofollow’ing their relevant links by default? Not really. The search engines should work harder on solving this problem algorithmically, and CMS vendors should go back to the whiteboard to develop a reasonable solution. Matt Mullenweg from WordPress admits that “in hindsight, I don’t think nofollow had much of an effect [in fighting comment spam]”, and I hope this insight triggers a well-thought-out workflow replacing the unethical nofollow-by-default (see follow you, follow me).

At Google’s Webmaster Help Center, regular posters nag Googlers with questions like “Is rel=nofollow becoming the norm?” Google’s search evangelist Adam Lasnik stepped in and stated: “As you might have noticed, many of the world’s most successful sites link liberally to other sites, and this sort of thing is often appreciated by and rewarded by visitors. And if you’re editorially linking to sites you can personally vouch for, I can’t see a reason to no-follow those.” and “On the whole [nofollow thingie], while Matt’s been pretty forthcoming and descriptive, I do think we Googlers on the whole can do a better job in explaining and justifying nofollow”. Thanks Adam; while explaining Google’s take on rel=nofollow to the great unwashed, why not start a major clean-up to extend this microformat and make it useful, usable and less confusing for the masses?

While waiting for actions promised by the nofollow inventor, here is a good summary of nofollow clarifications by Googlers. I have a ton of respect for Matt; I know he listens and picks up reasonable arguments even from negative posts, so stay tuned (I do hope my tiny revamp-nofollow campaign is not seen as negative press, by the way).

A very good starting point to examine the destructive impact rel=nofollow had, has, and will have if not revamped, is Carsten Cumbrowski’s essay explaining why rel=nofollow leverages mistrust among people. I do not provide quotes because I want you all to read and reread this great article.

Robert Scoble rethinking his nofollow support says “I was wrong about “NoFollow” … I’m very concerned, for instance, about Wikipedia’s use of nofollow“. Scroll down, don’t miss out on the comments.

Michael Gray’s strong statement “Google’s policy on No follow and reviews is hypocritical and wrong” is worth a read; he backs his point of view with a complete nofollow history along with many quotes and nofollow tidbits.


The Nofollow-Universe of Black Holes

I pretty much dislike the rel=nofollow fiasco for various reasons, especially its ongoing semantic morphing and often unethical implementation. Recently I wrote about nofollow confusion and beginning nofollow insanity. Meanwhile the nofollow debacle has taken a major step forward: bloggers fight huge black holes (the completely link-condomized Wikipedia) with many tiny black holes (plug-ins castrating links leading to Wikipedia).

Folks, do you realize that you’ve actually joined the nofollow nightmare you’re ranting about? Instead of trying to change things with constructive criticism addressed to nofollow supporters, you take the Old Testament approach, escalating an IMHO still remediable aberration. This senseless attitude supports the hapless nofollow mechanism, by the way. You’re acting like defiant kids crying “nofollow is sooooo unfair” while you strike back with tactical weapons unsuitable for solving the nofollow problem. Devaluing Wikipedia links because Wikipedia is de facto an untrusted source of information, on the other hand, makes sound sense, although semantically rel=nofollow is not the right way to go in this case.

I understand that losing the (imputed!) link juice of a couple of Wikipedia links is not nice. However, I don’t buy that these links were boosting SE rankings in the first place (although a few sites having only Wikipedia inbound links are currently dropping out of the SERPs); their real value is extremely well targeted traffic, and these links are still clickable.

I agree that Wikipedia’s decision to link-condomize all outbound links is a thoughtless, lazy, and pretty insufficient try to fight vandalizing link droppers. It is even “unfair”, because the black hole Wikipedia now sucks the whole Web’s link juice while giving nothing (except nicely targeted traffic) in return. But I must admit that there were not that many options, since there are no search engine crawler directives on link level providing the granularity Wikipedia probably needs.

Let’s imagine the hapless “nofollow” value of the REL attribute didn’t exist. In this scenario Wikipedia could implement four-eyes link tagging as follows:
1. New outgoing links would get tagged rel=”unapproved”. Search engines would not count a vote for the link destination, but would follow the link.
2. Later on, when a couple of trusted users and/or admins have approved the link, “unapproved” would get removed forever (URL and REL values stored in combination with the article’s URL to automatically reinstate the link’s state on edits where a link gets removed, added, removed and added again…). So far that would even work with the misguiding “nofollow” value, but an extended microformat would allow meaningful follow-up tags like “example”, “source”, “inventor”, “norm”, “worstenemy”, “hownotto” or whatever.
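In markup, the two stages of this hypothetical scheme might look like the snippet below. The “unapproved” and “source” values are illustrations only, not existing standards, and the URL is a placeholder:

```html
<!-- Stage 1: a freshly submitted link. No vote is passed,
     but crawlers may still follow the link. -->
<a href="http://example.com/resource" rel="unapproved">a new resource</a>

<!-- Stage 2: after trusted users approve the link, the tag is removed
     or replaced by a meaningful follow-up value. -->
<a href="http://example.com/resource" rel="source">a vetted resource</a>
```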

Instead of ranting and vandalizing links, we should begin to establish an RFC on crawler directives at the HTML element level. That would be a really productive approach.


Dear search engines, please bury the rel=nofollow fiasco

The misuse of the rel=nofollow initiative is getting out of control. Invented to fight comment spam, nowadays it is applied to commercial links, biased editorial links, navigational links, links to worst enemies (funny example: Matt Cutts links to a SEO black hat with rel=nofollow) and whatever else. Gazillions of publishers and site owners add it to their links for the wrong reasons, simply because they don’t understand its intention, its mechanism, and especially not the ongoing morphing of its semantics. Even professional webmasters and search engine experts have a hard time following the nofollow beast semantically. The more its initial usage gets diluted, the more folks suspect search engines cook their secret sauce with indigestible nofollow ingredients.

Not only was rel=nofollow unable to stop blog spam bots, it came with a built-in flaw: confusion.

The good news is that the nofollow debate is currently getting stoked again. Threadwatch hosts a thread titled Nofollow’s Historical Changes and Associated Hypocrisy, folks are ranting about the questionable Wikipedia decision to nofollow all outbound links, Google Video folks manipulated the PageRank algo by plastering most of their links with rel=nofollow by mistake, and even Yahoo’s top gun Jeremy Zawodny has not been happy with the nofollow debacle for a while now.

[Image: Say NO to NOFOLLOW - copyright jlh-design.com]

I say it is possible to replace the unsuccessful nofollow mechanism with an understandable and reasonable functionality allowing search engine crawler directives at the link level. It can be done, although there are shitloads of rel=nofollow links out there. Here is why, and how:

The value “nofollow” in the link’s REL attribute creates misunderstandings, recently even in the inventor’s company, because it is, hmmm, hapless.

In fact, back then it meant “passnoreputation” and nothing more. That is, search engines shall follow those links, they shall index the destination page, and they shall show those links in reverse citation results. They just must not pass any reputation or topical relevancy with the link.
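In practice the difference is a single attribute. Both links below are crawlable and clickable, but only the first passes reputation (the destination URL is a placeholder):

```html
<!-- A plain link: a true vote, passing reputation and anchor text relevancy -->
<a href="http://example.com/">a vouched-for site</a>

<!-- The same link with the condom: still followed, indexed and clickable,
     but no reputation or topical relevancy is passed -->
<a href="http://example.com/" rel="nofollow">an unvouched site</a>
```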

There were microformats better suited to achieve the goal, for example Technorati’s votelinks, but unfortunately the united search geeks chose a value adapted from the robots exclusion standard, which is plain misleading because it has absolutely nothing to do with its (intended) core functionality.

I can think of cases where a real nofollow directive for spiders at the link level makes perfect sense. It could tell the spider not to fetch a particular link destination, even if the page’s robots tag says “follow”, for example printer-friendly pages. I’d use an “ignore this link” directive, for example, in crawlable horizontal popup menus to avoid theme dilution when every page of a section (or site) links to every other page. Actually, there is more need for spider directives at the HTML element level, not only in links, for example to tag templated and/or navigational page areas, as with Google’s section targeting.
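Google’s AdSense section targeting, mentioned above, already hints at what element-level directives could look like: it marks page areas with HTML comments. The snippet below illustrates the idea with the comment syntax as documented for AdSense at the time; the div ids are made up:

```html
<!-- google_ad_section_start(weight=ignore) -->
<div id="navigation">… templated navigational area, deemphasized …</div>
<!-- google_ad_section_end -->

<!-- google_ad_section_start -->
<div id="content">… the page’s unique content, emphasized …</div>
<!-- google_ad_section_end -->
```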

There is nothing wrong with a mechanism to neutralize links in user input. It’s just that the value “nofollow” in the type-of-forward-relationship attribute is not suitable for labeling unchecked or not (yet) trusted links. If it is really necessary to adopt a well-known value from the robots exclusion standard (and don’t misunderstand me, reusing familiar terms in the right context is a good idea in general), the “noindex” value would have been a better choice (although not perfect). “Noindex” describes far better what happens in a SE ranking algo: it doesn’t index (in its technical meaning) a vote for the target. Period.

It is not too late to replace the rel=nofollow fiasco with a better solution which could take care of some similar use cases too. Folks at Technorati, the W3C and wherever have done the initial work already, so there’s just a tiny task left: extending an existing norm to enable a reasonable granularity of crawler directives at the link level, or better, for HTML elements in general. Rel=nofollow would get deprecated, replaced by suitable and standardized values, and for a couple of years the engines could interpret rel=nofollow in its primordial meaning.

Ever since the rel=nofollow thingy came into existence, it has confused gazillions of non-geeky site owners, publishers and editors on the net. Last year I got a new client who had added rel=nofollow to all his internal links because he saw nofollowed links on a popular and well-ranked site in his industry and thought rel=nofollow could perhaps improve his own rankings. That’s just one example of many where I’ve seen intentional as well as mistaken misuse of the way too geeky nofollow value. As Jill Whalen points out to Matt Cutts, that’s just the beginning of net-wide nofollow insanity.

OK, we’ve learned that the “nofollow” value is a notional monster, so can we please have it removed from the search engine algos in favour of a well-thought-out solution, preferably ASAP? Thanks.


Yahoo’s handling of the link condom

Folks are wondering why nofollow-links are shown in Yahoo’s backlink searches, site explorer results etc., and I’m wondering why the heck they’re wondering.

First, that’s not a new thing; the link condom has nothing to do with the ability to locate backlinks, so Yahoo has always listed castrated citations and votes in link: and linkdomain: searches.

Second, there is absolutely nothing wrong with Yahoo’s handling of rel=nofollow links. The value “nofollow” of the REL attribute creates misunderstandings, because it is, hmmm, hapless.

In fact, it means “passnoreputation” and nothing more. That is, search engines shall follow those links, they shall index the destination page, and they shall show those links in reverse citation results.

There were microformats better suited to achieve the goal, for example Technorati’s votelinks, but unfortunately the search geeks chose a value adapted from the robots exclusion standard, which is plain misleading because it has absolutely nothing to do with its functionality.

So, since we now know that the “nofollow” value is a notional monster, can we please have it removed from the search engine algos asap? Thanks.


Is the spam condom efficient and ethical?

Jim Boykin from WeBuildPages raises a few very good questions in his two-part essay on link condoms in blog comments. Jim finally asks “Is the rel=nofollow our friend or our enemy?” and I have no definite answer.

If Blogger allowed me to opt out of the comment condom thingy, I would do it with this blog. When I don’t delete a comment containing a link, the poster has something to say, and an embedded link doesn’t deserve castration, regardless of whether I agree or not. Well, perhaps I’d unlink overdone URL drops in some cases.

If I ran a popular blog, I’d like a whitelist approach best. That is, every link in comments gets sterilized by default and all posts are pre-moderated, with captchas in place. Trusted users could post instantly without the link condom, and I could pull the condom from particular comments. I’m not aware of any blog software handling it this way, unfortunately.
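Since no blog software I know of handles it this way, here is only a minimal sketch of the whitelist logic just described; the function and parameter names are made up for illustration:

```python
def render_comment_link(author, url, trusted_authors, approved_urls):
    """Render a comment link, deciding whether it gets the condom.

    Trusted users post without rel="nofollow"; the blogger can also
    pull the condom from individual links by approving their URLs.
    Every other link is sterilized by default.
    """
    if author in trusted_authors or url in approved_urls:
        return '<a href="{0}">{0}</a>'.format(url)
    return '<a href="{0}" rel="nofollow">{0}</a>'.format(url)
```

A real implementation would of course also hold untrusted comments in a moderation queue and require a captcha before rendering anything at all.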

Is the spam condom efficient? Nope. Comment moderation, captchas, spam filters, perhaps even requiring registered users, are enough to protect a blog from comment spam. Also, many blogs run outdated, never-updated pre-nofollow software, so savvy spammers can still inject crappy links in enough places to keep it profitable.

Is the spam condom ethical? Nope. At least not when the blogger can’t opt out. Not every comment is spam. Comments add content to a blog. Why penalize the content vendors?


Serious Disadvantages of Selling Links

There is a pretty interesting discussion going on about search engine spam at O’Reilly Radar. The topic is somewhat misleading; the subject is passing PageRank™ via paid ads on popular sites. Read the whole thread, lots of sound folks express their valuable and often fascinating opinions.

My personal statement is a plain “Don’t sell links for passing PageRank™. Never. Period.”, but the intention of ad space purchases isn’t always that clear. If an ad isn’t related to my content, I tend to put client-side affiliate links on my sites, because search engine spiders didn’t follow them for a long time. Well, it’s not that easy any more.

However, Matt Cutts ‘revealed’ an interesting fact in the thread linked above. Google indeed applies no-follow-logic to Web sites selling (at least unrelated) ads:

… [Since September 2003] …parts of perl.com, xml.com, etc. have not been trusted in terms of linkage … . Remember that just because a site shows up for a “link:” command on Google does not mean that it passes PageRank, reputation, or anchortext.

This policy wasn’t really a secret before Matt’s post, because a critical mass of high-PR links not passing PR draws a sharp picture. What many site owners selling links in ads have obviously never considered is the collateral damage with regard to on-site optimization. If Google distrusts a site’s linkage, outbound and internal links have no power. That is, the optimization efforts on navigational links, article interlinking etc. are pretty much useless on a site selling links. Internal links not passing relevancy via anchor text is probably worse than the PR loss, because clever SEOs always acquire deep inbound links.

Rescue strategy:

1. Implement the change recommended by Matt Cutts:

Google’s view on this is … selling links muddies the quality of the web and makes it harder for many search engines (not just Google) to return relevant results. The rel=nofollow attribute is the correct answer: any site can sell links, but a search engine will be able to tell that the source site is not vouching for the destination page.
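Applied to an ad, Matt’s recommendation boils down to a single attribute; the advertiser URL below is a placeholder:

```html
<!-- A sold ad slot: the link stays clickable and the ad delivers traffic,
     but the attribute tells engines the site is not vouching for it -->
<a href="http://advertiser.example.com/" rel="nofollow">Sponsored link</a>
```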

2. Write to Google (possibly cc’ing a spam report and a reinclusion request) that you’ve changed the linkage of your ads.

3. Hope and pray, on failure goto 2.

