Archived posts from the 'Nofollow' Category

LZZR Linking™

LZZR Link Love: in “Why it is a good thing to link out loud”, LZZR explains a nicely designed method to accelerate the power of inbound links. Unfortunately this technique involves Yahoo! Pipes, which is evil. Certainly that’s a nice tool to compose feeds, but Yahoo! Pipes automatically inserts the evil nofollow crap. Hence using Pipes’ feed output to amplify links fails, thanks to the auto-nofollow. I’m sure LZZR can replace this component with ease, if that’s not done already.




Blogger abuses rel-nofollow due to ignorance

I had planned a full upgrade of this blog to the newest blogger version this weekend. The one and only reason to do the upgrade was the idea that I perhaps could disable the auto-nofollow functionality in the comments. Well, what I found was a way to dofollow the author’s link by editing the <dl id='comments-block'> block, but I couldn’t figure out how to disable the auto-nofollow in embedded links.

Considering the hassles of converting all the template hacks into the new format, and the likely loss of the ability to edit code my way, I decided to stick with the old template. It just makes no sense for me to dofollow the author’s link when a comment author’s links within the content get nofollow’ed automatically. Andy Beard and others will hate me now, so let me explain why I don’t move this blog to my own domain using less insane software like WordPress.

  • I own, or author on, various WordPress blogs. Google’s time to index for posts and updates from this blogspot thingy is 2-3 hours (Web search, not blog search). My WordPress blogs, even with higher PageRank, suffer from a way longer time to index.
  • I can’t afford the time to convert and redirect 150 posts to another blog.
  • I hope that Google/Blogger can implement reasonable change requests (most probably that’s just wishful thinking).

That said, WordPress is a way better software than Blogger. I’ll have to move this blog if Blogger is not able to fulfill at least my basic needs. I’ll explain below why I think that Blogger lacks any understanding of the rel-nofollow semantics. In fact, they throw nofollow crap on everything they get a hand on. It seems to me that they won’t stop jeopardizing the integrity of the Blogosphere (at least where they control the linkage) until they get bashed really hard by a Googler who understands what rel-nofollow is all about. I nominate Matt Cutts, who invented and evolved it, and who does not tolerate BS.

So here is my wishlist. I want (regardless of the template type!)

  • A checkbox “apply rel=nofollow to comment author links”
  • A checkbox “apply rel=nofollow to links within comment text”
  • To edit comments, for example to nofollow links myself, or to remove offensive language
  • A checkbox “apply rel=nofollow to links to label/search pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to label/search pages”
  • A checkbox “apply rel=nofollow to links to archive pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to archive pages”
  • A checkbox “apply rel=nofollow to backlink listings”

As for the comments functionality, I’d understand if these options were disabled when comment moderation is set to off.

And here are the nofollow-bullshit examples.

  • When comment moderation and captchas are activated, why are comment author links as well as links within the comments nofollow’ed? Does Blogger think its bloggers are minor retards? I mean, when I approve a comment, then I do vouch for it. But wait! I can’t edit the comment, so a low-life link might slip through. Ok, then let me edit the comments.
  • Nofollow insanity II: when I’ve submitted a comment, the link to the post is nofollow’ed. This page belongs to the blog, so why the fudge does Blogger nofollow navigational links? And if it makes sense for a weird reason not understandable by a simple webmaster like me, why is the link to the blog’s main page, as well as the link to the post one line below, not nofollow’ed? Linking to the same URL with and without rel-nofollow on the same page deserves a bullshit award.
  • Nofollow insanity III (dashboard): on my dashboard Blogger features a few blogs as “Blogs Of Note”, all links nofollow’ed. These are blogs recommended by the Blogger crew. That means they have reviewed them and the links are clearly editorial content. They’re proud of it: “we’ve done a pretty good job of publishing a new one each day”. Blogger’s very own Blogs Of Note blog does not nofollow the links, and that’s correct.

    So why the heck are these recommended blogs nofollow’ed on the dashboard? Nofollow insanity III (blogspot):

  • Blogger inserted robots meta tags “nofollow,noindex” on each and every blog hosted outside the controlled blogspot.com domain earlier this year.
  • Blogger inserted robots meta tags “nofollow,noindex” on Google blogs a few days ago.

If Blogger’s recommendation “Check google.com. (Also good for searching.)” is an honest one, why don’t they invest a few minutes to educate themselves on rel-nofollow? I mean, it’s a Google-block/avoid-indexing/ranking-thingy they use to prevent Google.com users from finding valuable content hosted on their own domains. And they annoy me. And they insult their users. They shouldn’t do that. That’s not smart. That’s not Google-ish.




Google to kill the power of links

Well, a few types of links will survive and don’t do evil in Google’s search index ;)    I’ve updated my first take on Google’s updated guidelines, which state that paid links and reciprocal links are evil. Regardless of whether one likes or dislikes this policy, it’s already factored in - case closed by Google. There are so many ways to generate natural links …

The official call for paid-link reports is pretty much disliked across the boards:
Google is Now The Morality Police on the Internet
Google’s Ideal Webmaster: Snitch, Rake It In And Don’t Deliver
Other sites can hurt your ranking
Google’s Updated Webmaster Guidelines Addresses Linking Practices
Google clarifies its stance on links

More information, and discussion of paid/exchanged links in my pamphlets:
Matt Cutts and Adam Lasnik define “paid link”
Where is the precise definition of a paid link?
Full disclosure of paid links
Revise your linkage
Link monkey business is not worth a whoop
Is buying and selling links risky? (02/2006)




Google nofollow’s itself

Awesome. Nofollow-insane at its best. Check the source of Google’s Webmaster Blog. In HEAD you’ll find an insane meta tag:
<meta name="ROBOTS" content="NOINDEX,NOFOLLOW" />
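If you want to check any page for such a crawler-blocking element yourself, a minimal Python sketch (the function names are mine) using only the standard library:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of every <meta name="robots"> element in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # attribute values keep their original case, so normalize here
        if attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())

def robots_directives(html):
    """Return all robots meta directives found in an HTML document."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return finder.directives
```

Feed it the page source and anything like “noindex,nofollow” in the result is your smoking gun.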

Well, that’s one of many examples. Read the support forums. Another case of Google nofollow’ing herself: Google fun

Matt thought that all teams understood the syntax and semantics of rel-nofollow. It seems to me that’s not the case. I really can’t blame Googlers for applying rel-nofollow or even nofollow/noindex meta tags to everything they get a hand on. It is not understandable. It’s not usable. It’s misleading. It’s confusing. It should get buried asap.

Hat tip to John (JLH’s post).

Update 1: A friendly Googler just told me that a Blogger glitch (pertaining only to Google blogs) inserted the crawler-unfriendly meta element; it should be solved soon. I thought this bug was fixed months ago ... if page.isPrivate == true by mistake then insert "<meta content='NOINDEX,NOFOLLOW' name='ROBOTS' />" … (made up)

Update 2: The ‘noindex,nofollow’ robots meta tag is gone now, and the Webmaster Central Blog got a neat new logo:
Google Webmaster Central Blog - Official news on crawling and indexing sites for the Google index (I’d add ALT and TITLE text: alt="Google Webmaster Central Blog - Official news on crawling and indexing sites for the Google index" title="Official news on crawling and indexing sites for the Google index")




Google hunts paid links and reciprocal linkage

Matt Cutts and Adam Lasnik have clarified Google’s take on paid links and overdone reciprocal linkage. Some of their statements are old news, but it surely helps to have a comprehensive round-up in the context of the current debate on paid links.

So what - in short - does Google consider linkspam?
Artificial link schemes, paid links and uncondomized affiliate links, overdone reciprocal linkage and interlinking.

All sorts of link schemes designed to increase a site’s ranking or PageRank. Link scheme means for example mass exchange of links pages, repeated chunks of links per site, fishy footer links, triangular PageRank boosting, 27-way-linkage where in the end only the initiator earns a few inbounds because the participants are confused, and “genial” stuff like that. Google’s pretty good at identifying link farming, and bans or penalizes accordingly. That’s old news, but such techniques are still used, widely.

Advice: don’t participate, Google will catch you eventually.

Paid links, if detected or reported, get devalued. That is, they don’t help the link destination’s search engine rankings, and in some cases the source will lose its ability to pass reputation via links. Google has done this more or less silently since at least 2003, probably longer, but until today there was no precise definition of risky paid links.

That’s going to change. Adam Lasnik, commenting on Eric Enge’s “It seems to me that one of the more challenging aspects of all of this is that people have gotten really good at buying a link that show no indication that they are purchased.”:

Yes and no, actually. One of the things I think Matt has commented about in his blog; it’s what we joking refer to as famous last words, which is “well, I have come up with a way to buy links that is completely undetectable”.

As people have pointed out, Google buys advertising, and a lot of other great sites engage in both the buying and selling of advertising. There is no problem with that whatsoever. The problem is that we’ve seen quite a bit of buying and selling for the very clear purpose of transferring PageRank. Some times we see people out there saying “hey, I’ve got a PR8 site” and, “this will give you some great Google boost, and I am selling it for just three hundred a month”. Well, that’s blunt, and that’s clearly in violation of the “do not engage in linking schemes that are not permitted within the webmaster guidelines”.

Two, taking a step back, our goal is not to catch one hundred percent of paid links [emphasis mine]. It’s to try to address the egregious behavior of buying and selling the links that focus on the passing of PageRank. That type of behavior is a lot more readily identifiable then I think people give us credit for.

So it seems Google’s just after PageRank selling. Adam’s following comments on the use and abuse of rel-nofollow emphasize this interpretation:

I understand there has been some confusion on that, both in terms of how it [rel=nofollow] works or why it should be used. We want links to be treated and used primarily as votes for a site, or to say I think this is an interesting site, and good site. The buying and selling of links without the use of Nofollow, or JavaScript links, or redirects has unfortunately harmed that goal. We realize we cannot turn the web back to when it was completely noncommercial and we don’t want to do that [emphasis mine]. Because, obviously as Google, we firmly believe that commerce has an important role on the Internet. But, we want to bring a bit of authenticity back to the linking structure of the web. […] our interest isn’t in finding and taking care of a hundred percent of links that may or may not pass PageRank. But, as you point out relevance is definitely important and useful, and if you previously bought or sold a link without Nofollow, this is not the end of the world. We are looking for larger and more significant patterns [emphasis mine].

Don’t miss out on Eric Enge’s complete interview with Adam Lasnik; it’s really worth bookmarking for future reference!

Matt Cutts has updated (May 12th, 2007) an older and well linked post on paid links. It also covers thoughts on the value of directory links. Here are a few quotes, but don’t miss out on Matt’s post:

… we’re open to semi-automatic approaches to ignore paid links, which could include the best of algorithmic and manual approaches.

Q: Now when you say “paid links”, what exactly do you mean by that? Do you view all paid links as potential violations of Google’s quality guidelines?
A: Good question. As someone working on quality and relevance at Google, my bottom-line concern is clean and relevant search results on Google. As such, I care about paid links that flow PageRank and attempt to game Google’s rankings. I’m not worried about links that are paid but don’t affect search engines. So when I say “paid links” it’s pretty safe to add in your head “paid links that flow PageRank and attempt to game Google’s rankings.”

Q: This is all well and fine, but I decide what to do on my site. I can do anything I want on it, including selling links.
A: You’re 100% right; you can do absolutely anything you want on your site. But in the same way, I believe Google has the right to do whatever we think is best (in our index, algorithms, or scoring) to return relevant results.

Q: Hey, as long as we’re talking about directories, can you talk about the role of directories, some of whom charge for a reviewer to evaluate them?
A: I’ll try to give a few rules of thumb to think about when looking at a directory. When considering submitting to a directory, I’d ask questions like:
- Does the directory reject URLs? If every URL passes a review, the directory gets closer to just a list of links or a free-for-all link site.
- What is the quality of urls in the directory? Suppose a site rejects 25% of submissions, but the urls that are accepted/listed are still quite low-quality or spammy. That doesn’t speak well to the quality of the directory.
- If there is a fee, what’s the purpose of the fee? For a high-quality directory, the fee is primarily for the time/effort for someone to do a genuine evaluation of a url or site.
Those are a few factors I’d consider. If you put on your user hat and ask “Does this seem like a high-quality directory to me?” you can usually get a pretty good sense as well, or ask a few friends for their take on a particular directory.

To get a better idea on how Google’s search quality team chases paid links, read Brian White’s post Paid Link Schemes Inside Original Content.

Advice: either nofollow paid links, or don’t get caught. If you buy links, pay only for the traffic, because with or without link condom there’s no search engine love involved.

Affiliate links are seen as kinda subset of paid links. Google can identify most (unmasked) affiliate links. Frankly, there’s no advantage in passing link love to sponsors.

Advice: nofollow.
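Applying the condom can be automated. Here’s a crude Python sketch of my own (regex-based, so treat it as a sketch rather than a production HTML rewriter) that condomizes every link which doesn’t already carry the attribute:

```python
import re

# Matches an opening <a> tag and captures its attribute string.
A_TAG = re.compile(r"<a\s+([^>]*?)>", re.IGNORECASE)

def nofollow_all(html):
    """Add rel="nofollow" to every <a> tag that doesn't already have it."""
    def fix(match):
        attrs = match.group(1)
        # leave tags alone that already carry a nofollow rel value
        if re.search(r"rel\s*=\s*['\"][^'\"]*nofollow", attrs, re.IGNORECASE):
            return match.group(0)
        return '<a rel="nofollow" %s>' % attrs
    return A_TAG.sub(fix, html)
```

Run your sponsor/affiliate block through it before publishing, and the link love stays home.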

Reciprocal links without much doubt nullify each other. Overdone reciprocal linkage may even cause penalties: the reciprocal links area of a site gets qualified as a link farm; for possible consequences scroll up a bit. Reciprocal links are natural links, and Google honors them if the link profile of a site or network does not consist of an unnaturally high number of reciprocal or triangular link exchanges. It may be that natural reciprocal links pass (at least a portion of) PageRank, but no (or less than one-way links) relevancy via anchor text and trust or other link reputation.

Matt Cutts discussing “Google Hell”:

Reciprocal links by themselves aren’t automatically bad, but we’ve communicated before that there is such a thing as excessive reciprocal linking. […] As Google changes algorithms over time, excessive reciprocal links will probably carry less weight. That could also account for a site having more pages in supplemental results if excessive reciprocal links (or other link-building techniques) begin to be counted less. As I said in January: “The approach I’d recommend in that case is to use solid white-hat SEO to get high-quality links (e.g. editorially given by other sites on the basis of merit).”

Advice: It’s safe to consider reciprocal links somewhat helpful, but don’t actively chase for reciprocal links.

Interlinking all sites in a network can be counterproductive, but selfish cross-linking is not penalized in general. There’s no “interlinking penalty” when these links make sound business sense, even when the interlinked sites aren’t topically related. Interlinking sites handling each and every yellow pages category, on the other hand, may be considered overdone. In some industries like adult entertainment, where it’s hard to gain natural links, many webmasters try to boost their rankings with links from other (unrelated) sites they own or control. Operating hundreds or thousands of interlinked travel sites spread over many domains and subdomains is risky too. In the best case such linking patterns may just be ignored by Google, that is, they have no or very low impact on rankings at all, but it’s easy to convert an honest network into a link farm by mistake.

Advice: Carefully interlink your own sites in smaller networks, but partition these links by theme or branch in huge clusters. Consider consolidating closely related sites.

So what does all that mean for Webmasters?

Some might argue “if it ain’t broke don’t fix it”, in other words “why should I revamp my linkage when I rank fine?”. Well, rules like “any attempt to improve on a system that already works is pointless and may even be detrimental” are themselves pointless and detrimental in a context where everything changes daily. Especially when the tiny link systems designed to fool another system passively interact with that huge system (the search engine polls linkage data for all kinds of analyses). In that case the large system can change the laws of the game at any time to outsmart all the tiny cheats. So just because Google hasn’t discovered all the link schemes or shabby reciprocal link cycles out there, that does not mean the participants are safe forever. Nothing’s set in stone, not even rankings, so better revise your ancient sins.

Bear in mind that Google maintains a database containing all links in the known universe back to 1998 or so, and that a current penalty may be the result of a historical analysis of a site’s link attitude. So when a site is squeaky clean today but doesn’t rank adequately, consider a reinclusion request if you’ve cheated in the past.

Before you think of penalties as the cause of downranked or even vanished pages, analyze your inbound links that might have started counting for less. Pull all your inbound links from Site Explorer or Webmaster Central, then remove questionable sources from the list:

  • Paid links and affiliate links where you 301-redirect all landing pages with affiliate IDs in the query string to a canonical landing page,
  • Links from fishy directories, links lists, FFAs, top rank lists, DMOZ-clones and stuff like that,
  • Links from URLs which may be considered search results,
  • Links from sites you control or which live off your contents,
  • Links from sites engaged in reciprocal link swaps with your sites,
  • Links from sites which link out to too many questionable pages in link directories or where users can insert links without editorial control,
  • Links from shabby sites regardless of their toolbar PageRank,
  • Links from links pages which don’t provide editorial contents,
  • Links from blog comments, forum signatures, guestbooks and other places where you can easily drop URLs,
  • Nofollow’ed links and links routed via uncrawlable redirect scripts.
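The pruning above can be mechanized once you export your backlinks to a structured list. In this Python sketch every field name and category label is made up for illustration; map your own export format onto it:

```python
def countable_links(inbound_links):
    """Filter inbound-link records down to the links likely to count.

    Each record is assumed to be a dict like
    {"url": ..., "nofollow": bool, "source_type": ...};
    the field names and categories are hypothetical.
    """
    QUESTIONABLE = {"paid", "affiliate", "directory", "search-result",
                    "own-site", "reciprocal", "comment", "guestbook"}
    kept = []
    for link in inbound_links:
        if link.get("nofollow"):
            continue  # condomized links pass no reputation
        if link.get("source_type") in QUESTIONABLE:
            continue  # sources matching the checklist above
        kept.append(link)
    return kept
```

Whatever survives the filter is the vote count you should judge your rankings by.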

Judge by content quality, traffic figures if available, and user friendliness, not by toolbar PageRank. Just because a link appears in reverse citation results, that does not mean it carries any weight.

Look at the shrunken list of inbound links and ask yourself where on the SERPs a search engine should rank your stuff based on these remaining votes. Frustrated? Learn the fine art of link building from an expert in the field.




How Google & Yahoo handle the link condom

Loren Baker over at SEJ got a few official statements on use and abuse of the rel-nofollow microformat by the major players: How Google, Yahoo & Ask treat NoFollow’ed links. Great job, thanks!

Ask doesn’t “officially” support nofollow, whatever that means. Loren didn’t ask MSN, probably because he didn’t expect them to have noticed that they’ve officially supported nofollow since 2005 (same procedure with sitemaps, by the way). Yahoo implemented it along the specs, and Google stepped way over the line the norm sets. So here is the difference:

1. Do you follow a nofollow’ed link?
Google: No (longer)
Yahoo: Yes

2. Do you index the linked page following a nofollow’ed link?
Google: Obsolete, see 1.
Yahoo: Yes

3. Does your ranking algos factor in reputation, anchor/alt/title text or whichever link love sourced from a nofollow’ed link?
Google: Obsolete, see 1.
Yahoo: No

4. Do you show nofollow’ed links in reverse citation results?
Google: Yes (in link: searches by accident, in Webmaster Central if the source page didn’t make it into the supplemental index)
Yahoo: Yes (Site Explorer)

Q&A#4 is made up but accurate. I think it’s safe to assume that MSN handles the link condom like Yahoo. (Update: As Loren clarifies in the comments, he asked MSN search but they didn’t answer in a timely fashion.)

And here’s a remarkable statement from Google’s search evangelist Adam Lasnik, who may like nofollow or not:

On a related note, though, and echoing Matt’s earlier sentiments … we hope and expect that more and more sites — including Wikipedia — will adopt a less-absolute approach to no-follow … expiring no-follows, not applying no-follows to trusted contributors, and so on.

Bravo!

Related link: rel=”nofollow” Google, Yahoo and MSN




Where is the precise definition of a paid link?

Good questions:

How many consultants provide links through to the companies they work for?
I do.

How many software firms provide links through to their major corporate clients?
Not my company. Never going to happen.

If you make a donation to someone, and they decide to give you a link back, is that a paid link?
Nope.

If you are a consultant, and are paid to analyse a company, but to make the findings known publicly, are you supposed to stick nofollow on all the links?
Nope.

If you are a VC or Angel investor, should you have to use NoFollow linking through to companies in your investment portfolio?
Nope.

Are developers working on an open-source project allowed a link back to their sites (cough WordPress)?
Yep.
And then use that link equity to dominate search engines on whatever topic they please?
Hmmmm, if it really works that way, why not?

If you are a blog network, or large internet content producer, is it gaming Google to have links to your sister sites, whether there is a direct financial connection or not?
Makes business sense, so why should those links get condomized? Probably a question of quantity. No visitor would follow a gazillion of links to blogs handling all sorts of topics the yellow pages have categories for.

Should a not for profit organisation link through to their paid members with a live link?
Sure, perfectly discloses relationships and their character.

A large number of Wordpress developers have paid links on their personal sites, as do theme and plugin developers.
What’s wrong with that? Maybe questionable (in the sense of useless) on every page, but perfectly valid on the home page, about page and so on if disclosed. As for ads, that sort of paid link is valid on every page - nofollow’ing ads just avoids misunderstandings.

If you write a blog post, thanking your sponsors, should you use nofollow?
Yep.

Some people give away prizes for links, or offer some kind of reciprocation.
If the awards are honest and truly editorial, linking back is just good practice.

If you are an expert in a particular field, and someone asks you to write a review of their site, and the type of review you write means that writing that content might take 10 hours of your time to do due diligence, is it wrong to accept some kind of monetary contribution? Just time and material?
In such a situation, why would you be forced to use nofollow on all links to the site being reviewed?
As long as the received expense allowance is disclosed, there’s nothing wrong with uncondomized links.

Imagine someone created a commercial Wikipedia, and paid $5 for every link made to it.
Don’t link. The link would be worth more than five bucks and the risks involved can cost way more than five bucks.

Where is the precise definition of a paid link?
Now that’s the best question of all!

Disclaimer: Yes/No answers are kinda worthless without a precisely defined context. Thus please read the comments.

Related thoughts: Should Paid Links Influence Organic Rankings? by Mark Jackson at SEW
Paid Link Schemes Inside Original Content by Brian White, also read Matt’s updated post on paid links.

Update: Google’s definition of paid links and other disliked linkage considered “linkspam”




Yahoo Pipes jeopardizes the integrity of the Internet

Update: This post, initially titled “No more nofollow-insane at Google Reader”, then updated as “(No) more nofollow-insane at Google Reader”, accused Google Reader of inserting nofollow crap. I apologize for my lazy and faulty bug report. Read the comments.

I fell in love with Yahoo Pipes because that tool allowed me to funnel the tidbits contained in a shitload of noise into a more or less clear signal. Instead of checking hundreds of blog feeds, search query feeds and whatever else, I was able to feed my preferred reader with actual payload extracted from vast loads of paydirt dug from lots of sources.

Now that I’ve learned that Yahoo Pipes is evil I guess I must code the filters myself. Nofollow insanity is not acceptable. Nofollow madness jeopardizes the integrity of the Internet, which is based on free linkage. I don’t need no stinking link condoms sneakily forced by nice looking tools utilizing nifty round corners. I’ll be way happier with a crappy and uncomfortable PHP hack fed with OPML files and conditions pulled from a manually edited MySQL table.
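Rolling your own replacement isn’t hard. The core of such a filter, sketched here in Python rather than PHP (the OPML handling and the condition table are left out, and the function name is mine), just strips the injected attribute from feed item markup you trust:

```python
import re

# Matches a rel="nofollow" attribute, with either quote style.
NOFOLLOW_REL = re.compile(r"\srel\s*=\s*['\"]nofollow['\"]", re.IGNORECASE)

def strip_nofollow(item_html):
    """Remove rel="nofollow" attributes a feed processor injected.

    Only safe on feeds you trust -- which is the whole point of the exercise.
    """
    return NOFOLLOW_REL.sub("", item_html)
```

Pipe each feed item’s HTML through it before it reaches your reader or republishing script.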

Here is the evidence right from the Yahoo pipe output:
Also, abusing my links with target="_blank" is not nice.


Initial post and its first update:

I’m glad Google has removed the auto-nofollow on links in blog posts. When I add a feed I trust its linkage and I don’t need no stinking condoms on pages nobody except me can see unless I share them. Thanks!

Update - Nick Baum, can you hear me?

It seems the nofollow-madness is not yet completely buried. Here is a post of mine and what Google Reader shows me when I add my blog’s feed:
And here is the same post filtered thru a Yahoo pipe:
So please tell me: why does Google auto-nofollow a link to Vanessa Fox when she gets linked via Yahoo, and uncondomize the link from Google’s very own blogspot dot com? Curious …




Beware of the narrow-minded coders

or Ignorance is no excuse

Long-winded story on SEO-ignorant pommy coders putting their customers at risk. Hop away if e-commerce-software-vs-SEO dramas don’t thrill you.

Recently I answered a “Why did Google deindex my pages” question in Google’s Webmaster Forum. It turned out that the underlying shopping cart software (EROL) maintained somewhat static pages as spider fodder, which redirect human visitors to another URL serving the same content client-side. Silly thing to do, but pretty common for shopping carts. I used the case as an example of a nice shopping cart coming with destructive SEO in a post on flawed shopping carts in general.

Day by day other site owners operating Erol-driven online shops popped up in the Google Groups or emailed me directly, so I realized that there is a darn widespread problem involving a very popular UK-based shopping cart software responsible for Google cloaking penalties. From my contacts I knew that Erol’s software engineers and self-appointed SEO experts believe in weird SEO theories and don’t consider that their software architecture itself could be the cause of the mess. So I wrote a follow-up addressing Erol directly. Google penalizes Erol-driven e-commerce sites explains Google’s take on cloaking and sneaky JavaScript redirects to Erol and its customers.

My initial post got linked and discussed in Erol’s support forum and kept my blog stats counter busy over the weekend. Accused of posting crap, I showed up and posted a short summary over there:

Howdy, I’m the author of the blog post you’re discussing here: Why eCommerce systems suck

As for crap or not crap, judge yourself. This blog post was addressed to ecommerce systems in general. Erol was mentioned as an example of a nice shopping cart coming with destructive SEO. To avoid more misunderstandings and to stress the issues Google has with Erol’s JavaScript redirects, I’ve posted a follow-up: Google deindexing Erol-driven ecommerce sites.

This post contains related quotes from Matt Cutts, head of Google’s web spam team, and Google’s quality guidelines. I guess that piece should bring my point home:

If you’re keen on search engine traffic then do not deliver one page to the crawlers and another page to users. Redirecting to another URL which serves the same contents client sided gives Google an idea of intent, but honest intent is not a permission to cloak. Google says JS redirects are against the guidelines, so don’t cloak. It’s that simple.

If you’ve questions, post a comment on my blog or drop me a line. Thanks for listening

Sebastian

Next, the links to this blog were edited out and Erol posted a longish but pointless charade. Click the link to read it in full; in summary, it tells the worried Erol victims that Google has no clue at all, frames and JS redirects are great for online shops, and waiting for the next software release providing meaningful URLs will fix everything. Ok, that’s polemic, so here are at least a few quotes:

[…] A number of people have been asking for a little reassurance on the fact that EROL’s x.html pages are getting listed by Google. Below is a list of keyword phrases, with the number of competing pages and the x.html page that gets listed [4 examples provided].
[…]
EROL does use frames to display the store in the browser, however all the individual pages generated and uploaded by EROL are static HTML pages (x.html pages) that can be optimised for search engines. These pages are spidered and indexed by the search engines. Each of these x.html pages have a redirect that loads the page into the store frameset automatically when the page is requested.
[…]
EROL is a JavaScript shopping cart, however all the links within the store (links to other EROL pages) that are added using EROL Link Items are written to the static HTML pages as standard <a href=""> links - not JavaScript links. This helps the search engines spider other pages in your store.

The ‘sneaky re-directs’ being discussed most likely relate to an older SEO technique used by some companies to auto-forward from an SEO-optimised page/URL to the actual URL the site-owner wants you to see.

EROL doesn’t do this - EROL’s page load actually works more like an include than the redirect mentioned above. In its raw form, the ‘x123.html’ page carries visible content, readable by the search engines. In its rendered form, the page loads the same content but the JavaScript rewrites the rendered page to include page and product layout attributes and to load the frameset. You are never redirected to another HTML page or URL. [Not true: the JS function displayPage() changes the location of all pages indexed by Google, and property names like ‘hidepage’ speak for themselves. Example: x999.html redirects to erol.html#999×0&&]
[…]
We have, for the past 6 months, been working with search engine optimisation experts to help update the code that EROL writes to the web page, making it even more search engine friendly.

As part of the recommendations suggested by the SEO experts, page names will become more search engine friendly, moving away from page names such as ‘x123.html’ to ‘my-product-page-123.html’. […]
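Based on the observed URLs quoted above (x999.html resolving to erol.html#999×0&&), the rewrite presumably works along these lines. This is my reconstruction from the visible URL pattern only, not Erol’s actual displayPage() code; all names are hypothetical.

```javascript
// Reconstructed sketch of the observed URL mapping, inferred from the
// example x999.html -> erol.html#999×0&& mentioned above. Function and
// parameter names are hypothetical.
function framesetUrlFor(staticPageName) {
  // strip the leading 'x' and the '.html' suffix to get the page id
  var id = staticPageName.replace(/^x(\d+)\.html$/, '$1');
  // '\u00d7' is the multiplication sign seen in the observed fragment
  return 'erol.html#' + id + '\u00d70&&';
}

// A browser loading x999.html would then be sent to erol.html#999×0&&
// (e.g. via window.location.replace(framesetUrlFor('x999.html'))),
// while Googlebot keeps seeing the static x999.html content.
```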

Still in a friendly and helpful mood, I wrote a reply:

With all respect, if I understand your post correctly that’s not going to solve the problem.

As long as a crawlable URL like http://www.example.com/x123.html or http://www.example.com/product-name-123.html resolves to
http://www.example.com/erol.html#123×0&& or whatever, that’s a violation of Google’s quality guidelines. Whether you call that redirect sneaky (Google’s language) or not is beside the point. It’s Google’s search engine, so their rules apply. These rules state clearly that pages which do a JS redirect to another URL (on the same server or not, delivering the same contents or not) do not get indexed, or, if discovered later on, get deindexed.

The fact that many x-pages are still indexed and may even rank for their targeted keywords means nothing. Google cannot discover and delist all pages utilizing a particular disliked technique overnight, and never has. Sometimes that’s a process lasting months or even years.

The problem is that these redirects put your customers at risk. Again, Google didn’t change its Webmaster guidelines, which have forbidden JS redirects since the stone age; it has recently improved its ability to discover violations in the search index. Google frequently improves its algos, so please don’t expect to get away with it. Quite the opposite: expect each and every page with these redirects to vanish over the years.

A good approach to avoiding Google’s cloaking penalties is to utilize one single URL as spider fodder as well as for content presentation to browsers. When a Googler loads such a page with a browser and compares the URL to the spidered one, you get away with nearly everything CSS and JS can accomplish, as long as the URLs are identical. If OTOH the JS code changes the location, you’re toast.
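The distinction boils down to a tiny sketch (hypothetical code, my own names): rewriting the page in place keeps the spidered URL and the displayed URL identical, while computing a new location makes them diverge.

```javascript
// Hypothetical sketch of the safe vs. unsafe patterns described above.

// Safe: enhance the document in place. The URL never changes, so the
// spidered URL and the URL shown to users stay identical.
function enhanceInPlace(page) {
  page.body = '<div class="layout">' + page.body + '</div>';
  return page; // page.url is untouched
}

// Unsafe: compute a different URL for the browser to jump to. The
// spidered URL and the displayed URL diverge, which reads as cloaking.
function redirectedUrl(page, framesetUrl) {
  return framesetUrl; // !== page.url
}
```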

Posting this response failed, because Erol’s forum admin banned me after censoring my previous post. By the way, according to posts outside their sphere, and from what I’ve seen watching the discussion, they censor posts of customers too. Well, that’s fine with me, since it’s Erol’s forum and they make the rules. However, still eager to help, I emailed my reply to Erol, and to Erol customers asking for my take on Erol’s final statement.

You ask why I post this long-winded stuff? Well, it seems to me that Erol prefers a few fast bucks over satisfied customers, thus I fear they will not tell their customers the truth. Actually, they simply don’t get it. Whether their prevarication stems from greed or ignorance I really don’t know, but all the store operators suffering from Google’s penalties deserve the information. A few of them have subscribed to my feed, so I hope my message gets spread. Continuation





Good Bye Nofollow: How to DOfollow comments with blogger

Andy Beard pointed me to a neat procedure to DOFOLLOW links in blog comments with blogger.com: Remove Nofollow Attribute on Blogger.com Blog Comments:

Edit the template’s HTML and remove “rel=’nofollow’” in this line:
<a expr:href='data:comment.authorUrl' rel='nofollow'><data:comment.author/></a>
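After the edit, the same template line, minus the attribute, would read:

```html
<a expr:href='data:comment.authorUrl'><data:comment.author/></a>
```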

Now I’ve a good reason to upgrade the software. Sadly, I’ve hacked the template so badly that I doubt it will work with the new version :(



