Archived posts from the 'Testing' Category

How to spam the hell out of Google’s new source attribution meta elements

The moment you read Google’s announcement and Matt’s question “What about spam?”, you concluded “spamming it is a breeze”, right? You’re not alone.

Before we discuss how to abuse it, it might be a good idea to define it within its context, ok?


First of all, Google announced these meta tags on the official Google News blog for a reason. So if you plan to abuse them with your countless MFA proxies of Yahoo Answers, you’ve most probably jumped on the wrong bandwagon. Google supports the meta elements below in Google News only.


The first new indexer hint is syndication-source. It’s meant to tell Google the permalink of a particular news story, hence the author and all the folks spreading the word are asked to use it to point to the one –and only one– URI considered the source:

<meta name="syndication-source" content="" />

The meta element above is for instances of the story served from

Don’t confuse it with the cross-domain rel-canonical link element. It’s not about canning duplicate content; it marks a particular story, regardless of whether it’s somewhat rewritten or just reprinted with a different headline. It tells Google News to use the original URI when the story can be crawled from different URIs on the author’s server, and when syndicated stories on other servers are so similar to the initial piece that Google News prefers to use the original (the latter is my educated guess).


The second new indexer hint is original-source. It’s meant to tell Google the origin of the news itself, so the author/enterprise digging it out of the mud, as well as all the folks using it later on, are asked to declare who broke the story:

<meta name="original-source" content="" />

Say we’ve got two or more related news stories, like “Google fell from Mars” by and “Google landed in Mountain View” by, it makes sense for to publish a piece like “Google fell from Mars and landed in Mountain View”. Because is a serious newspaper, they credit their sources not only with a mention or even embedded links, they do it machine-readably, too:

<meta name="original-source" content="" />
<meta name="original-source" content="" />

It’s a matter of course that both and provide such an original-source meta element on their pages, in addition to the syndication-source meta element, both pointing to their very own coverage.
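Putting it all together, the reprinting paper’s head section could look something like this (all URIs below are made up purely for illustration):

```html
<!-- hypothetical URIs, for illustration only -->
<meta name="syndication-source" content="http://paper-c.example/mars-crash-roundup" />
<meta name="original-source" content="http://paper-a.example/google-fell-from-mars" />
<meta name="original-source" content="http://paper-b.example/google-landed-in-mountain-view" />
```

The syndication-source element points to the paper’s own permalink, while the two original-source elements credit the outlets that broke each piece of the story.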

If a journalist grabbed his breaking news from a secondary source saying “CNN reported five minutes ago that Google’s mothership started from Venus, and the LA Times spotted it crashing on Jupiter”, he can’t be bothered with looking at the markup and locating those meta elements in the head section; he has a deadline for his piece “Why Web search left Planet Earth”. It’s just fine with Google News when he puts

<meta name="original-source" content="" />
<meta name="original-source" content="" />


As always, the most interesting stuff is hidden on a help page:

At this time, Google News will not make any changes to article ranking based on these tags.

If we detect that a site is using these metatags inaccurately (e.g., only to promote their own content), we’ll reduce the importance we assign to their metatags. And, as always, we reserve the right to remove a site from Google News if, for example, we determine it to be spammy.

As with any other publisher-supplied metadata, we will be taking steps to ensure the integrity and reliability of this information.

It’s a field test

We think it is a promising method for detecting originality among a diverse set of news articles, but we won’t know for sure until we’ve seen a lot of data. By releasing this tag, we’re asking publishers to participate in an experiment that we hope will improve Google News and, ultimately, online journalism. […] Eventually, if we believe they prove useful, these tags will be incorporated among the many other signals that go into ranking and grouping articles in Google News. For now, syndication-source will only be used to distinguish among groups of duplicate identical articles, while original-source is only being studied and will not factor into ranking. [emphasis mine]

Spam potential

Well, we do know that Google Web search has a spam problem, IOW even a few so-1999-webspam tactics still work to some extent. So we tend to classify a vague threat like “If we find sites abusing these tags, we may […] remove [those] from Google News entirely” as FUD, and spam away. Common sense and experience tell us that a smart marketer will make money from everything spammable.

But: we’re not talking about Web search. Google News is a clearly laid out environment. There are only so many sites covered by Google News. Even if Google weren’t able to develop algos analyzing all source attribution attributes out there, they do have the resources to identify abuse using manpower alone. Most probably they will do both.

They clearly told us that they will compare this metadata to other signals. And those aren’t only very weak indicators like “timestamp first crawled” or “first heard of via pubsubhubbub”. It’s not that hard to isolate a particular news story, gather each occurrence as well as the source mentions within, and arrange those on a time line with clickable links for QC folks who most certainly will identify the actual source. Even a few spot tests daily will soon reveal the sites whose source attribution meta tags are questionable, or even spammy.

If you’re still not convinced, fair enough. Go spam away. Once you’ve lost your entry on the whitelist, your free traffic from Google News, as well as from news-one-box results on conventional SERPs, is toast.

Last but not least, a fair warning

Now, if you still want to use source attribution meta elements on your non-newsworthy MFA sites to claim ownership of your scraped content, feel free to do so. Most probably Matt’s team will appreciate just another “I’m spamming Google” signal.

Not that reprinting scraped content is considered shady any more: even a former president does it shamelessly. It’s just the almighty Google in all of its evilness that penalizes you for considering all on-line content public domain.


Nofollow still means don’t follow, and how to instruct Google to crawl nofollow’ed links nevertheless

What was meant as a quick test of rel-nofollow once again (inspired by Michelle’s post stating that nofollow’ed comment author links result in rankings) turned up some interesting observations:

  • Google uses sneaky JavaScript links (that mask nofollow’ed static links) for discovery crawling, and indexes the link destinations even though there’s no hard coded link on any page on the whole Web.
  • Google doesn’t crawl URIs found in nofollow’ed links only.
  • Google most probably doesn’t use anchor text outputted client sided in rankings for the page that carries the JavaScript link.
  • Google most probably doesn’t pass anchor text of JavaScript links to the link destination.
  • Google doesn’t pass anchor text of (hard coded) nofollow’ed links to the link destination.

As for my inspiration, I guess not all links in Michelle’s test were truly nofollow’ed. However, she’s spot on stating that condomized author links aren’t useless because they bring in traffic, and can result in clean links when a reader copies the URI from the comment author link and drops it elsewhere. Don’t pay too much attention to REL attributes when you spread your links.

As for my quick test explained below, please consider it an inspiration too. It’s not a full blown SEO test, because I’ve checked one single scenario for a short period of time. However, since I looked at its results within 24 hours of uploading the test, it’s fairly safe to say they aren’t influenced by external noise, for example scraped links and suchlike.

On 2008-02-22 06:20:00 I’ve put a new nofollow’ed link onto my sidebar: Zilchish Crap
<a href="" id="repstuff-something-a" rel="nofollow"><span id="repstuff-something-b">Zilchish Crap</span></a>
<script type="text/javascript">
handle=document.getElementById(‘repstuff-something-b’);‘Nillified, Nil’;

(The JavaScript code changes the link’s HREF, REL and anchor text.)

The purpose of the JavaScript crap was to mask the anchor text, fool CSS that highlights nofollow’ed links (to avoid clean links to the test URI during the test), and to separate requests from crawlers and humans with different URIs.

Google crawls URIs extracted from somewhat sneaky JavaScript code

20 minutes later Googlebot requested the ?nil=js1 URI from the JavaScript code and totally ignored the hard coded URI in the A element’s HREF: 2008-02-22 06:47:07 200-OK Mozilla/5.0 (compatible; Googlebot/2.1; + /repstuff/something.php?nil=js1

Roughly three hours after this visit Googlebot fetched an URI provided only in JS code on the test page:

From the log: 2008-02-22 09:37:11 200-OK Mozilla/5.0 (compatible; Googlebot/2.1; + /repstuff/something.php?nil=js2

So far Google ignored the hidden JavaScript link to /repstuff/something.php?nil=js3 on the test page. Its code doesn’t change a static link, so that makes sense in the context of repeated statements like “Google ignores JavaScript links / treats them like nofollow’ed links” by Google reps.

Of course the JS code above is easy to analyze, but don’t think that you can fool Google with concatenated strings, external JS files or encoded JavaScript statements!

Google indexes pages that have only JavaScript links pointing to them

The next day I’ve checked the search index, and the results are interesting:

rel-nofollow-test search results

The first search result is the content of the URI with the query string parameter ?nil=js1, which is outputted with a JavaScript statement on my sidebar, masking the hard coded URI /repstuff/something.php without query string. There’s not a single real link to this URI elsewhere.

The second search result is a post URI where Google recognized the hard coded anchor text “zilchish crap”, but not the JS code that overwrites it with “Nillified, Nil”. With the SERP-URI parameter “&filter=0” Google shows more posts that are findable with the search term [zilchish]. (Hey Matt and Brian, here’s room for improvement!)

Google doesn’t pass anchor text of nofollow’ed links to the link destination

A search for [zilchish] doesn’t show the test page, which doesn’t carry this term itself. In other words, so far the anchor text “zilchish crap” of the nofollow’ed sidebar link hasn’t impacted the test page’s rankings yet.

Google doesn’t treat anchor text of JavaScript links as textual content

A search for [nillified] doesn’t show any URIs that have “nil, nillified” as client sided anchor text on the sidebar, just the test page:

rel-nofollow-test search results

Results, conclusions, speculation

This test wasn’t intended to evaluate whether JS outputted anchor text gets passed to the link destination or not. Unfortunately “nil” and “nillified” appear both in the JS anchor text as well as on the page, so that’s for another post. However, it seems the JS anchor text isn’t indexed for the pages carrying the JS code, at least they don’t appear in search results for the JS anchor text, so most likely it will not be assigned to the link destination’s relevancy for “nil” or “nillified” as well.

Maybe Google’s algos dealing with client sided outputs need more than 24 hours to assign JS anchor text to link destinations; time will tell if nobody ruins my experiment with links, and that includes unavoidable scraping and its sometimes undetectable links that Google knows but never shows.

However, Google can assign static anchor text pretty fast (within less than 24 hours after link discovery), so I’m quite confident that condomized links still don’t pass reputation, nor topical relevance. My test page is unfindable for the nofollow’ed [zilchish crap]. If that changes later on, that will be the result of other factors, for example scraped pages that link without condom.

How to safely strip a link condom

And what’s the actual “news”? Well, say you’ve got links that you must condomize because they’re paid or whatever, but you want Google to discover the link destinations nevertheless. To accomplish that, just output a nofollow’ed link server sided, and change it to a clean link with JavaScript. Google has told us for ages that JS links don’t count, so that’s perfectly in line with Google’s guidelines. And if you keep your anchor text as well as URI, title text and such identical, you don’t cloak with deceitful intent. Other search engines might even pass reputation and relevance based on the client sided version of the link. Isn’t that neat?
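Here’s a minimal sketch of that trick as a pure helper function (the element id “ad-link” is made up; the point is that the server-sided markup carries rel="nofollow" while the client-sided version doesn’t):

```javascript
// Strip the "nofollow" token from a REL value, keeping any other tokens.
function cleanRel(rel) {
  return rel.split(/\s+/)
    .filter(function (token) { return token.toLowerCase() !== "nofollow"; })
    .join(" ");
}

// In the page, after the server-sided nofollow'ed link was rendered:
// var a = document.getElementById("ad-link"); // hypothetical id
// a.rel = cleanRel(a.rel); // href, title and anchor text stay identical
```

Crawlers that ignore client-sided code see the condomized link; browsers (and engines that execute JS) see the clean one, with everything else unchanged.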

Link condoms with juicy taste faking good karma

Of course you can use the JS trick without SEO in mind too, e.g. to prettify your condomized ads and paid links, which otherwise look plain ugly to visitors who use CSS to highlight nofollow’ed links.

Here is how you can do this for a complete Web page. This link is nofollow’ed. The JavaScript code below changed its REL value to “dofollow”. When you put this code at the bottom of your pages, it will un-condomize all your nofollow’ed links.
<script type="text/javascript">
if (document.getElementsByTagName) {
var aElements = document.getElementsByTagName("a");
for (var i=0; i<aElements.length; i++) {
var relvalue = aElements[i].rel.toUpperCase();
if (relvalue.match("NOFOLLOW") != "null") {
aElements[i].rel = "dofollow";

(You’ll find still condomized links on this page. That’s because the JavaScript routine above changes only links placed above it.)

When you add JavaScript routines like that to your pages, you’ll increase their page loading time. IOW you slow them down. Also, you should add a note to your linking policy to avoid confused advertisers who chase toolbar PageRank.

Updates: Obviously Google distrusts me, how come? Four days after the link discovery the search quality archangel requested the nofollow’ed URI –without query string– possibly to check whether I serve different stuff to bots and people. As if I’d cloak, laughable. (Or an assclown linked the URI without condom.)
Day five: Google’s crawler requested the URI from the totally hidden JavaScript link at the bottom of the test page. Did I hear Google reps stating quite often they aren’t interested in client-sided links at all?


The hacker tool MSN-LiveSearch is responsible for brute force attacks

A while ago I staged a public SEO contest, asking whether or not the 401 HTTP response code prevents search engine indexing.

Password protected site areas should be safe from indexing, because legit search engine crawlers do not submit user/password combos. Hence their attempts to fetch a password protected URL bounce with a 401 HTTP response code that translates to a polite “Authorization Required”, meaning “Forbidden unless you provide valid authorization”.
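For reference, this is the kind of .htaccess setup that produces exactly that 401 for crawlers (the AuthName and the password file path below are placeholders):

```
AuthType Basic
AuthName "Members area"
AuthUserFile /home/example/.htpasswd
Require valid-user
```

Any request without valid credentials, including every request from a legit crawler, gets the 401 response.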

Experience and common sense tell search engines that when a Webmaster protects content with a user/password query, this content is not available to the public. Search engines that respect Webmasters and site owners do not point their users to protected content.

Also, listing protected URLs makes no sense for the search engine. Searchers submitting a query with keywords that match a protected URL would be pissed when they click the promising search result on the SERP, only to have the linked site respond with an unfriendly “Enter user and password in order to access [title of the protected area]” that resolves to a harsh error message, because the searcher can’t provide such information and usually can’t even sign up from the 401 error page1.

Unfortunately, search results that contain URLs of password protected content are valuable tools for hackers. Many content management systems and payment processors that Webmasters use to protect and monetize their content leave footprints in URLs, for example /members/. Even when those systems can handle individual URLs, many Webmasters leave default URLs in place that are either guessable or well known on the Web.

Developing a script that searches for a string like /members/ in URLs and then “tests” the search results with brute force attacks is a breeze. Also, such scripts are available (for a few bucks or even free) at various places. Without the help of a search engine that provides the lists of protected URLs, the hacker’s job is way more complicated. In other words, search engines that list protected URLs on their SERPs willingly support and encourage hacking, content theft, and DOS-like server attacks.

Ok, let’s look at the test results. All search engines have cast their votes now. Here are the winners:

Google :)

Once my test was out, Matt Cutts from Google researched the question and told me:

My belief from talking to folks at Google is that 401/forbidden URLs that we crawl won’t be indexed even as a reference, so .htaccess password-protected directories shouldn’t get indexed as long as we crawl enough to discover the 401. Of course, if we discover an URL but didn’t crawl it to see the 401/Forbidden status, that URL reference could still show up in Google.

Well, that’s exactly the expected behavior, and I wasn’t surprised that my test results confirm Matt’s statement. Thanks to Google’s BlitzIndexing™ Ms. Googlebot spotted the 401 so fast, that the URL never showed up on Google’s SERPs. Google reports the protected URL in my Webmaster Console account for this blog as not indexable.

Yahoo :)

Yahoo’s crawler Slurp also fetched the protected URL in no time, and Yahoo did the right thing too. I wonder whether or not that’s going to change if M$ buys Yahoo.

Ask :)

Ask’s crawler isn’t the most diligent Web robot out there. However, somehow Ask has managed not to index a reference to my password protected URL.

And here is the ultimate loser:

MSN LiveSearch :(

Oh well. Obviously MSN LiveSearch is a must have in a deceitful cracker’s toolbox:

MSN LiveSearch indexes password protected URLs

As if indexing references to password protected URLs weren’t crappy enough, MSN even indexes sitemap files that are referenced in robots.txt only. Sitemaps are machine readable URL submission files that have absolutely no value for humans. Webmasters make use of sitemap files to mass submit their URLs to search engines. The sitemap protocol, which MSN officially supports, defines a communication channel between Webmasters and search engines - not searchers, and especially not scrapers that can use indexed sitemaps to steal Web contents more easily. Here is a screen shot of an MSN SERP:

MSN LiveSearch indexes unlinked sitemaps files (MSN SERP)
MSN LiveSearch indexes unlinked sitemaps files (MSN Webmaster Tools)

All the other search engines got the sitemap submission of the test URL too, but none of them fell for it. Neither Google, Yahoo, nor Ask have indexed the sitemap file (they never index submitted sitemaps that have no inbound links by the way) or its protected URL.


All major search engines except MSN respect the 401 barrier.

Since MSN LiveSearch is well known for spamming, it’s not a big surprise that they support hackers, scrapers and other content thieves.

Of course MSN search is still an experiment, operating in a not yet ready to launch stage, and the big players made their mistakes in the beginning too. But MSN has a history of ignoring Web standards as well as Webmaster concerns. It took them two years to implement the pretty simple sitemaps protocol, they still can’t handle 301 redirects, their sneaky stealth bots spam the referrer logs of all Web sites out there in order to fake human traffic from MSN SERPs (MSN traffic doesn’t exist in most niches), and so on. Once pointed to such crap, they don’t even fix the simplest bugs in a timely manner. I mean, not complying with the HTTP 1.1 protocol from the last century is evidence of incapacity, and that’s just one example.


Update Feb/06/2008: Last night I’ve received an email from Microsoft confirming the 401 issue. The MSN Live Search engineer said they are currently working on a fix, and he provided me with an email address to report possible further issues. Thank you, Nathan Buggia! I’m still curious how MSN Live Search will handle sitemap files in the future.


1 Smart Webmasters provide sign up as well as login functionality on the page referenced as ErrorDocument 401, but the majority of all failed logins leave the user alone with the short hard coded 401 message that Apache outputs if there’s no 401 error document. Please note that you shouldn’t use a PHP script as 401 error page, because this might disable the user/password prompt (due to a PHP bug). With a static 401 error page that fires up on invalid user/pass entries or a hit on the cancel button, you can perform a meta refresh to redirect the visitor to a signup page. Bear in mind that in .htaccess you must not use absolute URLs (http://… or https://…) in the ErrorDocument 401 directive, and that on the error page you must use absolute URLs for CSS, images, links and whatnot because relative URIs don’t work there!
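In .htaccess terms, the footnote’s advice boils down to a directive like this (the error page path is a made-up example; note the relative URL):

```
# Relative URL only - absolute http(s):// URLs must not be used here
ErrorDocument 401 /errors/401.html
```

The static /errors/401.html can then fire a meta refresh to your signup page, using absolute URLs for all of its CSS, images, and links.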


Comment rating and filtering with SezWho

I’ve added SezWho to the comment area. SezWho enables rating and filtering of comments, and shows you even comments an author has left on other blogs. Neat.

Currently there are no ratings, so the existing comments are all rated 2.5 (quite good). Once you’ve rated a few comments, you can suppress all lower quality comments (rated below 3), or show high quality comments only (rated 4 or better).

Don’t freak out when you use CSS that highlights nofollow’ed links. SezWho manipulates the original (mostly dofollow’ed) author link with JavaScript, hence search engines still recognize that a link shall pass PageRank and anchor text. (I condomize some link drops, for example when I don’t know a site and can’t afford the time to check it out, see my comments policy.)

I’ll ask SezWho to change that when I’m more familiar with their system (I hate change requests based on a first peek). SezWho should look at the attributes of the original link in order to add rel=”nofollow” to the JS created version only when the blogger actually has condomized a particular link. Their software changes the comment author URL to a JS script that redirects visitors to the URL the commenter has submitted. It would be nice to show the original URL in the status bar on mouse over.
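In code, the suggested fix boils down to a one-line check (a sketch; I don’t know SezWho’s actual code, so the names below are made up):

```javascript
// Return true only when the blogger's original comment author link
// was condomized, i.e. its REL value contains a "nofollow" token.
function shouldCondomize(originalRel) {
  return /\bnofollow\b/i.test(originalRel || "");
}

// When building the JS-created comment author link:
// newLink.rel = shouldCondomize(originalLink.rel) ? "nofollow" : "";
```

That way a dofollow’ed link stays dofollow’ed after SezWho’s manipulation, and only genuinely condomized links get the rel=”nofollow” in the JS-created version.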

Also, it seems that when you sign up with SezWho, they remove the trailing slash from your blog’s URL. That’s not acceptable. I mean not every startup should do what clueless Yahoo developers still do although they know that it violates several Web standards. Removing trailing slashes from links is not cool, that’s a crappy manipulation that can harm search engine rankings, will lead to bandwidth theft when bots follow castrated links only to get redirected, … ok, ok, ok … that’s stuff for another post rant. Judging from their Web site, SezWho looks like a decent operation, so I’m sure they can change that too.


SezWho sidebar widget

I’ve not yet added the widgets, above is how they would appear in the sidebar.


I consider SezWho useful. All functionality lives in the blog and can access the blog’s database, so in theory it doesn’t slow down the page load time by pulling loads of data from 3rd party sources. Please let me know whether you like it or not. Thanks!


Do search engines index references to password protected smut?

Recently Matt Cutts said that Google doesn’t index password protected content. I wasn’t sure whether or not that goes for all search engines. I thought that they might index at least references to protected URLs, like they all do with other uncrawlable content that has strong inbound links.

Well, SEO tests are dull and boring, so I thought I could have some fun with this one.

I’ve joked that I should use someone’s favorite smut collection to test it. Unfortunately, nobody was willing to trade porn passwords for link love or so. I’m not a hacker, hence I’ve created my own tiny collection of password protected SEO porn (this link is not exactly considered safe at work) as a test case.

I was quite astonished that, according to this post about SEO porn, next to nobody in the SEOsphere optimizes adult sites (of course that’s not true). From the comments I figured that some folks at least evaluate the optimization techniques applied by adult Webmasters.

Ok, let’s extend that. Out yourself as an SEO porn savvy Internet marketer. Leave your email addy in the comments (don’t forget to tell me why I should believe that you’re over 18), and I’ll email you the super secret password for my SEO porn members area (!SAW). Trust me, it’s worth it, and perfectly legit due to the strictly scientific character of this experiment. If you’re somewhat shy, use a funny pseudonym.

I’d very much appreciate a little help with linkage too. Feel free to link to with an adequate anchor text of your choice, and of course without condom.

Get the finest SEO porn available on this planet!

I’ve got the password, now let me in!


Validate your robots.txt - Googlebot becomes smarter

Last week I reported that Google experiments with new crawler directives for use in robots.txt. Today Google confirmed that Googlebot understands experimental REP syntax like Noindex:.

That means that forgotten –and, until recently, ignored– statements in your robots.txt might change the crawler’s behavior all of a sudden, without notice. I don’t know for sure which experimental crawler directives Google has implemented, but for example a line like
Noindex: /
in your robots.txt will now deindex your complete Web site.

“Noindex:” is not defined in the Robots Exclusion Protocol from 1994, and not mentioned in Google’s official documents.

John Müller from Google Zürich states:

At the moment we will usually accept the “noindex” directive in the robots.txt, but we are not yet at a point where we are willing to set it into stone and announce full support.

[…] I just want to remind everyone again that this is something that may still change over time. Be careful when playing with things like this.

My understanding of “be careful” is:

  • Create a separate section for Googlebot. Do not rely on directives addressing all Web robots. When you have a Googlebot section, Google’s crawler will ignore directives set under “all user agents” and process only the Googlebot section. Repeat all statements under User-agent: * in User-agent: Googlebot to make sure that Googlebot obeys them.
  • RTFM
  • Do not use other crawler directives than
    in the Googlebot section.
  • Don’t mess-up pattern matching.
    * matches a sequence of characters
    $ specifies the end of the URL
    ? separates the path from the query string, you can’t use it as wildcard!
  • Validate your robots.txt with the cool robots.txt analyzer in your Google Webmaster Console.
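Taken together, a “careful” robots.txt along those lines might look like this (the paths are placeholders):

```
User-agent: *
Disallow: /private/

User-agent: Googlebot
# Googlebot ignores the section above, so repeat everything here:
Disallow: /private/
# Pattern matching: * matches a sequence of characters, $ marks the end of the URL
Disallow: /*.pdf$
```

Run a file like this through the robots.txt analyzer in your Google Webmaster Console before you upload it.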

Folks put the funniest stuff into their robots.txt, for example images or crawl delays like “Don’t crawl this site during our office hours”. Crawler directives from robots meta tags aren’t very popular, but they appear in many robots.txt files. Hence it makes sound sense to use what people express, regardless of the syntax errors.

Also, having the opportunity to manage page specific crawler directives like “noindex”, “nofollow”, “noarchive” and perhaps even “nopreview” on site level is a huge time saver, and eliminates many points of failure. Kudos to Google for this initiative, I hope it will make it into the standards.

I’ll test the experimental robots.txt directives and post the results. Perhaps I’ll set up a live test like this one.

Take care.

Update: Here is the live test of suspected (or at least desired) new crawler directives for robots.txt. I’ve added a few unusual statements to my robots.txt and uploaded scripts to monitor search engine crawling. The test pages provide links to search queries so you can check whether Google indexed them or not.

Please don’t link to the crawler traps, I’ll update this post with my findings. Of course I appreciate links, so here is the canonical URL:

Please note that you should not make use of the crawler directives below on production systems! Bear in mind that you can achieve all that with simple X-Robots-Tags in the HTTP headers. That’s a bullet-proof way to apply robots meta tags to files without touching them, and it works with virtual URIs too. X-Robots-Tags are sexy, but many site owners can’t handle them due to various reasons, whereas corresponding robots.txt syntax would be usable for everybody (not suffering from restrictive and/or free hosts).
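For comparison, here’s what the X-Robots-Tag counterpart looks like in an Apache config (assuming mod_headers is available; the PDF pattern is just an example):

```
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>
```

This applies “noindex, noarchive” to every PDF file the server delivers, without touching the files themselves.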


Noindex: /repstuff/noindex.php

Expected behavior:
No crawling/indexing. It seems Google interprets “Noindex:” as “Disallow:”.
Desired behavior:
“Follow:” is the REP’s default, hence Google should fetch everything and follow the outgoing links, but shouldn’t deliver Noindex’ed contents on the SERPs, not even as URL-only listings.
Google’s robots.txt validator: Blocked by line 30: Noindex: /repstuff/noindex.php
See test page
Google’s crawler / indexer:
2007-11-21: crawled (possibly caused by an outdated robots.txt cache).
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex.php.
2007-11-23: indexed and cached a page linked only from noindex.php.
(If an outdated robots.txt cache falsely allowed crawling, the search result(s) should disappear shortly after the next crawl.)
2007-11-26: deindexed, the same goes for the linked page (without recrawling).
2007-12-07: appeared under “URLs restricted by robots.txt” in GWC.
2007-12-17: I consider this case closed. Noindex: blocks crawling, deindexes previously indexed pages, and is suspected to block incoming PageRank.


Nofollow: /repstuff/nofollow.php

Expected behavior:
Crawling, indexing, and following the links as if there’s no “Nofollow:”.
Desired behavior:
Crawling, indexing, and ignoring outgoing links.
Google’s robots.txt validator:
Line 31: Nofollow: /repstuff/nofollow.php (Syntax not understood; Allowed)
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from nofollow.php (21 Nov 2007 23:19:37 GMT, for some reason not logged properly).
2007-11-23: indexed and cached a page linked only from nofollow.php.
2007-11-26: recrawled, deindexed, no longer cached. The same goes for the linked page.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:12 GMT” doesn’t match the last crawl on “2007-11-26 16:47:11 EST” (EST = GMT-5).
2007-12-07: recrawled, still deindexed, cached. Linked page recrawled, cached.
2007-12-17: recrawled, still deindexed (probably caused by near duplicate content on noarchive.php and other pages involved in this test), cached copy dated 2007-12-07. Cache of the linked page still dated 2007-11-21. I consider this case closed. Nofollow: doesn’t work as expected, Google doesn’t support this statement.


Noarchive: /repstuff/noarchive.php

Expected behavior:
Crawling, indexing, following links, but no “Cached” links on the SERPs and no access to cached copies from the toolbar.
Desired behavior:
Crawling, indexing, following links, but no “Cached” links on the SERPs and no access to cached copies from the toolbar.
Google’s robots.txt validator: Allowed
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noarchive.php.
2007-11-23: indexed and cached a page linked only from noarchive.php.
2007-11-26: recrawled, deindexed, no longer cached. The linked page was deindexed without recrawling.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:19 GMT” doesn’t match the last crawl on “2007-11-26 16:47:18 EST” (EST = GMT-5).
2007-11-29: recrawled, cache not yet updated.
2007-12-07: recrawled. Linked page recrawled.
2007-12-08: recrawled.
2007-12-11: recrawled the linked page, which is cached but not indexed.
2007-12-12: recrawled.
2007-12-17: still indexed, cached copy dated 2007-12-08. I consider this case closed. Noarchive: doesn’t work as expected; actually it does nothing, although according to the robots.txt validator it’s supported –or at least known and accepted– syntax.

(It looks like Google understands Nosnippet: too, but I didn’t test that.)


Nopreview: /repstuff/nopreview.pdf

Expected behavior:
None, unfortunately.
Desired behavior:
No “view as HTML” links on the SERPs. Neither “nosnippet” nor “noarchive” suppress these helpful preview links, which can be pretty annoying in some cases. See NOPREVIEW: The missing X-Robots-Tag.
Google’s robots.txt validator:
Line 33: Nopreview: /repstuff/nopreview.pdf Syntax not understood Allowed
Crawler requests of nopreview.pdf are logged here.
Google’s crawler / indexer:
2007-11-21: crawled the nopreview-pdf and the log page nopreview.php.
2007-11-23: indexed and cached the log file nopreview.php.
[2007-11-23: I replaced the PDF document with a version carrying a hidden link to an HTML file, and resubmitted it via Google’s add-url page and a sitemap.]
2007-11-26: The old version of the PDF is cached as a “view-as-HTML” version without links (considering the PDF was a captured print job, that’s a pretty decent result), and appears on SERPs for a quoted search. The page linked from the PDF and the new PDF document were not yet crawled.
2007-12-02: PDF recrawled. Googlebot followed the hidden link in the PDF and crawled the linked page.
2007-12-03: “View as HTML” preview not yet updated, the linked page not yet indexed.
2007-12-04: PDF recrawled. The preview link reflects the content crawled on 12/02/2007. The page linked from the PDF is not yet indexed.
2007-12-07: PDF recrawled. Linked page recrawled.
2007-12-09: PDF recrawled.
2007-12-10: recrawled linked page.
2007-12-14: PDF recrawled. Cached copy of the linked page dated 2007-12-11.
2007-12-17: I consider this case closed. Neither Nopreview: nor Noarchive: (in robots.txt since 2007-12-04) are suitable to suppress the HTML preview of PDF files.

Noindex: Nofollow:

Noindex: /repstuff/noindex-nofollow.php
Nofollow: /repstuff/noindex-nofollow.php

Expected behavior:
No crawling/indexing, invisible on SERPs.
Desired behavior:
No crawling/indexing, and no URL-only listings, ODP titles/descriptions and stuff like that on the SERPs. “Noindex:” in combination with “Nofollow:” is a paraphrased “Disallow:”.
Google’s robots.txt validator: Blocked by line 35: Noindex: /repstuff/noindex-nofollow.php
Line 36: Nofollow: /repstuff/noindex-nofollow.php Syntax not understood
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex-nofollow.php.
2007-11-23: indexed and cached a page linked only from noindex-nofollow.php.
2007-11-26: deindexed without recrawling, the same goes for the linked page.
2007-11-29: the cached copy retrieved on 11/21 reappeared.
2007-12-08: appeared under “URL restricted by robots.txt” in my GWC acct.
2007-12-17: Case closed, see Noindex: above.

Noindex: Follow:

Noindex: /repstuff/noindex-follow.php
Follow: /repstuff/noindex-follow.php

Expected behavior:
No crawling/indexing, hence unfollowed links.
Desired behavior:
Crawling, following and indexing outgoing links, but no SERP listings.
Google’s robots.txt validator: Blocked by line 38: Noindex: /repstuff/noindex-follow.php
Line 39: Follow: /repstuff/noindex-follow.php Syntax not understood
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex-follow.php.
2007-11-23: indexed and cached a page linked only from noindex-follow.php.
2007-11-26: deindexed without recrawling, the same goes for the linked page.
2007-12-08: appeared under “URL restricted by robots.txt” in my GWC acct.
2007-12-17: Case closed, see Noindex: above. Google didn’t crawl the page, respectively deindexed it, despite the Follow: directive.

Index: Nofollow:

Index: /repstuff/index-nofollow.php
Nofollow: /repstuff/index-nofollow.php

Expected behavior:
Crawling/indexing, following links.
Desired behavior:
Crawling/indexing but ignoring outgoing links.
Google’s robots.txt validator:
Line 41: Index: /repstuff/index-nofollow.php Syntax not understood
Line 42: Nofollow: /repstuff/index-nofollow.php Syntax not understood Allowed
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from index-nofollow.php.
2007-11-23: indexed and cached a page linked only from index-nofollow.php.
2007-11-26: recrawled and deindexed. The linked page was deindexed without recrawling.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:26 GMT” doesn’t match the last crawl on “2007-11-26 16:47:25 EST” (EST = GMT-5).
2007-12-02: recrawled, the cached copy has vanished.
2007-12-07: recrawled. Linked page recrawled.
2007-12-08: recrawled.
2007-12-09: recrawled.
2007-12-10: recrawled.
2007-12-17: cached under 2007-12-10, not indexed. Linked page not cached, not indexed. I consider this case closed. Google currently supports neither Index: nor Nofollow:.

(I didn’t test Noodp: and Unavailable_after: [RFC 850 formatted timestamp], although both directives would make sense in robots.txt too.)

Added the experimental statements to robots.txt.
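Pieced together from the validator messages quoted above, the experimental section of my robots.txt presumably looked something like this (a reconstruction; the exact line positions and the User-agent grouping are assumptions):

```
User-agent: *
# ... regular statements ...
Noindex: /repstuff/noindex.php
Nofollow: /repstuff/nofollow.php
Noarchive: /repstuff/noarchive.php
Nopreview: /repstuff/nopreview.pdf
Noindex: /repstuff/noindex-nofollow.php
Nofollow: /repstuff/noindex-nofollow.php
Noindex: /repstuff/noindex-follow.php
Follow: /repstuff/noindex-follow.php
Index: /repstuff/index-nofollow.php
Nofollow: /repstuff/index-nofollow.php
```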

Linked the test pages. Google crawled all of them, including the pages submitted via links on test pages.

Most (all but the PDF document) URLs appear on search result pages. If an outdated robots.txt cache falsely allowed crawling although the GWC validator said “Blocked”, the search results should disappear shortly after the next crawl. I’ve created a sitemap for all URLs above and submitted it. Although I’ve –for the sake of this experiment– cloaked text as well as links and put white links on a white background, luckily there is no “we caught you, black hat spammer” message in my Webmaster Console. Googlebot nicely followed the cloaked links and indexed everything.

Google recrawled a few pages (noarchive.php, index-nofollow.php and nofollow.php), then deindexed all of them. Only the PDF document is indexed, and Google created a “view-as-HTML” preview from this captured print job. It seems that Google crawled something from another host than “*”, unfortunately I didn’t log all requests. Probably the deindexing was done by a sneaky bot discovering the simple cloaking. Since the linked URLs are out and 3rd party links to them can’t ruin the experiment any longer, I’ve stopped cloaking and show the same text/links to bots and users (actually, users see one more link but that should be fine with Google). There’s still no “thou shalt not cloak” message in my GWC account. Well, those pages are fairly new, perhaps not fully settled in the search index, so let’s see what happens next.

The PDF file as well as the three pages recrawled on 11/26/2007 21:45:00 GMT were reindexed, but the timestamp on the cached copies says “retrieved on 27 Nov 2007 01:15:00 GMT”. Maybe the date/time displayed on cached page copies doesn’t reflect Ms. Googlebot’s “fetched” timestamp, but the time the indexer pulled the page out of the centralized crawl results cache 3.5 hours after crawling.
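The gap is easy to verify from the timestamps quoted above (a quick Python check):

```python
from datetime import datetime, timedelta

# timestamps quoted above: the recrawl time vs. the "retrieved on"
# date shown on the cached page copies
crawled = datetime(2007, 11, 26, 21, 45, 0)  # 11/26/2007 21:45:00 GMT
cached = datetime(2007, 11, 27, 1, 15, 0)    # 27 Nov 2007 01:15:00 GMT

gap = cached - crawled
print(gap)  # → 3:30:00, i.e. 3.5 hours between fetch and indexing
```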

It seems the “Noarchive:” directive doesn’t work, because noarchive.php was crawled and indexed twice providing a cached page copy. My “Nopreview:” creation isn’t supported either, but maybe Dan Crow’s team picks it up for a future update of their neat X-Robots-Tags (I hope so).

The noindex’ed pages (noindex.php, noindex-nofollow.php and noindex-follow.php) weren’t recrawled and remain deindexed. Interestingly, they don’t appear under “URLs blocked by robots.txt” in my GWC account. Provided the first crawling and indexing on 11/21/2007 was a “mistake” caused by a way too long cached robots.txt, and the second crawl on 11/26/2007 obeyed the “Noindex:” but ignored the (implicit) “Follow:”, it seems that indeed Google interprets “Noindex:” in robots.txt as “Disallow:”. If that is so and if it’s there to stay, they’re going to totally mess up the REP.

<rant> I mean, promoting a rel-nofollow microformat that –at least at launch time– shared its semantics neither with the REP’s meta tags nor with the –later introduced– X-Robots-Tags was bad enough. Ok, meanwhile they’ve corrected this flaw by altering the rel-nofollow semantics step by step, until “nofollow” in the REL attribute actually means nofollow, and no longer “pass no reputation”, at least at Google. Other engines still handle rel-nofollow according to the initial and officially still binding standard, and a gazillion Webmasters are confused as hell. In other words, only a few search geeks understand what rel-nofollow is all about, but Google jauntily penalizes the great unwashed for not complying with the incomprehensible. By the way, that’s why I code rel="nofollow crap". Standards should be clear and unambiguous. </rant>

If Google really were to introduce a “Noindex:” directive in robots.txt that equals “Disallow:”, that would be totally evil. A few sites out there might have an erroneous “Noindex:” statement in their robots.txt that could mean “Disallow:”, and it’s nice that Google tries to do them a favor. Screwing the REP for the sole purpose of complying with syntax errors, on the other hand, makes no sense. “Noindex” means: crawl it, follow its links, but don’t index it. Semantically “Noindex: Nofollow:” equals “Disallow:”, but a “Noindex:” alone implies a “Follow:”, hence crawling is not only allowed but required.

I really hope that we watch an experiment in its early stage, and that Google will do the right thing eventually. Allowing the REP’s page specific crawler directives in robots.txt is a fucking brilliant move, because technically challenged publishers can’t handle the HTTP header’s X-Robots-Tag, and applying those directives to groups of URIs is a great method to steer crawling and indexing not only with static sites.

Dear Google engineers, please consider the nopreview directive too, and implement (no)index, (no)follow, noarchive, nosnippet, noodp/noydir and unavailable_after with the REP’s meaning. And while you’re at it, I want block level instructions in robots.txt too. For example
Area: /products/ DIV.hMenu,TD#bNav,SPAN.inherited "noindex,nofollow"

could instruct crawlers to ignore duplicated properties in product descriptions and the horizontal menu as well as the navigation elements in a table cell with the DOM-ID “bNav” at the very bottom of all pages in /products/,
Area: / A.advertising REL="nofollow"

could condomize all links with the class name “advertising”, and so on.
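A crawler supporting the proposed Area: statements could parse them roughly like this (a hypothetical sketch — this syntax is my wishlist, not anything any engine supports):

```python
import re

# hypothetical grammar:
#   Area: <path-prefix> <CSS selector list> "<directives>"
#   Area: <path-prefix> <CSS selector list> REL="<value>"
AREA_RE = re.compile(r'^Area:\s+(\S+)\s+(\S+)\s+(?:"([^"]+)"|REL="([^"]+)")$')

def parse_area(line):
    """Parse one Area: statement into its components, or return None."""
    m = AREA_RE.match(line.strip())
    if not m:
        return None
    path, selectors, directives, rel = m.groups()
    return {
        "path": path,
        "selectors": selectors.split(","),
        "directives": directives.split(",") if directives else None,
        "rel": rel,
    }

rule = parse_area('Area: /products/ DIV.hMenu,TD#bNav,SPAN.inherited "noindex,nofollow"')
# → path "/products/", three selectors, directives ["noindex", "nofollow"]
```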

The pages linked from the test pages still don’t come up in search results, noarchive.php was recrawled and remains cached, the cached copy of noindex-nofollow.php retrieved on 11/21/2007 reappeared (probably a DC roller coaster issue).

Three URLs remain indexed: nopreview.pdf, noarchive.php and noindex-nofollow.php. The cached copies show the content crawled on Nov/21/2007. Everything else is deindexed. That’s not here to stay (index roller coaster).
As a side note: the URL from my first noindex-robots.txt test appeared in my GWC account under “URLs restricted by robots.txt (Nov/27/2007)”, three days after the unsuccessful crawl.

A few pages were recrawled, Googlebot followed the hidden link in the PDF file.

In my GWC crawl stats noindex-nofollow.php appeared under “URLs restricted by robots.txt”, but it’s still indexed.

The preview (cache) of nopreview.pdf was updated. Since obviously Nopreview: doesn’t work, I’ve added
Noarchive: /repstuff/nopreview.pdf

to my robots.txt. Let’s see whether Google removes the cache, respectively the HTML preview, or not.

Shortly after the change in robots.txt (Noarchive: /repstuff/nopreview.pdf) Googlebot recrawled the PDF file on 12/04/2007. Today it’s still cached, the HTML preview is still available and linked from SERPs.

Googlebot has recrawled a few pages. Everything except noarchive.php and nopreview.pdf is deindexed.

I consider the test closed, but I’ll keep the test pages up so that you can monitor crawling and indexing yourself. Noindex: is the only directive that somewhat works, but it’s implemented completely wrong and is not acceptable in its current shape.

Interestingly, the sitemaps report in my GWC account says that 9 pages from 9 submitted URLs were indexed. Obviously “indexed” means something like “crawled at least once, perhaps indexed, maybe not, so if you want to know that definitively then get your lazy butt to check the SERPs yourself”. How expensive would it be to tell something like “Total URLs in sitemap: 9 | Indexed URLs in sitemap: 2”?


Getting the most out of Google’s 404 stats

The 404 reports in Google’s Webmaster Central panel are great to debug your site, but they contain URLs generated by invalid –respectively truncated– URL drops or typos of other Webmasters too. Are you sick of wasting the link love from invalid inbound links, just because you lack a suitable procedure to 301-redirect all these 404 errors to canonical URLs?

Your pain ends here. At least when you’re on a *ix server running Apache with PHP 4+ or 5+ and .htaccess enabled. (If you suffer from IIS go search another hobby.)

I’ve developed a tool which grabs all 404 requests, letting you map a canonical URL to each 404 error. The tool captures and records 404s, and you can add invalid URLs from Google’s 404-reports, if these aren’t recorded (yet) from requests by Ms. Googlebot.

It’s kinda a layer between your standard 404 handling and your error page. If a request results in a 404 error, your .htaccess calls the tool instead of the error page. If you’ve assigned a canonical URL to an invalid URL, the tool 301-redirects the request to the canonical URL. Otherwise it sends a 404 header and outputs your standard 404 error page. Google’s 404-probe requests during the Webmaster Tools verification procedure are unredirectable (is this a word?).
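Hooking such a tool into the 404 handling takes a single line in .htaccess (the script path is hypothetical; ErrorDocument is the standard Apache directive):

```
# send every 404 to the mapping script instead of a static error page
ErrorDocument 404 /404handler.php
```

The script then either emits a 301/302 Location header, or falls back to a 404 status plus the regular error page.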

Besides 1:1 mappings of invalid URLs to canonical URLs you can assign keywords to canonical URLs. For example you can define that all invalid requests go to /fruit when the requested URI or the HTTP referrer (usually a SERP) contain the strings “apple”, “orange”, “banana” or “strawberry”. If there’s no persistent mapping, these requests get 302-redirected to the guessed canonical URL, thus you should view the redirect log frequently to find invalid URLs which deserve a persistent 301-redirect.
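The keyword guessing might be implemented roughly like this (a Python sketch, not the tool’s actual PHP; the rules and names are made up):

```python
# hypothetical keyword → canonical URL rules
KEYWORD_MAP = {
    ("apple", "orange", "banana", "strawberry"): "/fruit",
}

def guess_canonical(request_uri, referrer):
    """Return a guessed canonical URL for a 302-redirect, or None."""
    # search both the requested URI and the HTTP referrer (usually a SERP)
    haystack = (request_uri + " " + referrer).lower()
    for keywords, canonical in KEYWORD_MAP.items():
        if any(keyword in haystack for keyword in keywords):
            return canonical
    return None

guess_canonical("/aple-pie", "http://www.google.com/search?q=apple+recipes")  # → "/fruit"
```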

Next there are tons of bogus requests from spambots searching for exploits or whatever, or from hotlinkers, resulting in 404 errors where it makes no sense to maintain URL mappings. Just update an ignore list to make sure those get 301-redirected to a cruel and scary image hosted on your domain or on a free host of your choice.

Everything not matching a persistent redirect rule or an expression ends up in a 404 response, as before, but logged so that you can define a mapping to a canonical URL. Also, you can use this tool when you plan to change (a lot of) URLs, it can 301-redirect the old URL to the new one without adding those to your .htaccess file.
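The decision chain described above boils down to a few lines (a Python sketch of the logic, not the tool’s actual PHP; all names are made up):

```python
def handle_404(uri, referrer, mappings, ignore_patterns, scary_url,
               guess=lambda uri, ref: None):
    """Decide what the 404 handler should answer: (status, location-or-None)."""
    if uri in mappings:                            # persistent 1:1 mapping
        return 301, mappings[uri]
    if any(p in uri for p in ignore_patterns):     # spambot/hotlinker junk
        return 301, scary_url
    guessed = guess(uri, referrer)                 # keyword heuristics, logged for review
    if guessed:
        return 302, guessed
    return 404, None                               # plain 404, logged for later mapping
```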

I’ve tested this tool for a while on a couple of smaller sites, and I think it can get trained to run smoothly without too many edits once the ignore lists etcetera are up to date, that is, matching the site’s requisites. A couple of friends got the script and they will provide useful input. Thanks! If you’d like to join the BETA test, drop me a message.

Disclaimer: All data get stored in flat files. With large sites we’d need to change that to a database. The UI sucks, I mean it’s usable but it comes with the browser’s default fonts and all that. IOW the current version is still in the stage of “proof of concept”. But it works just fine ;)


Referrer spoofing with PrefBar 3.4.1

Testing browser optimization, search-engine-friendly user-agent cloaking, referrer-based navigation or dynamic landing pages with scripts, or by changing the user agent name in the browser’s settings, is no fun.
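For context, the sort of referrer-dependent server logic one tests this way might look like (a hypothetical Python sketch; page names are made up):

```python
def pick_landing_page(referrer):
    """Serve different content depending on where the visitor came from."""
    if "google." in referrer and "q=" in referrer:
        return "search-landing.html"   # visitor arrives from a Google SERP
    if "digg.com" in referrer:
        return "digg-welcome.html"     # social traffic gets a different page
    return "default.html"              # direct visits and everything else
```

Exercising all three branches from the browser means faking the Referer header, which is exactly what the toolbar below makes painless.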

I love PrefBar, a neat FireFox plug-in, which provides me with a pretty useful customizable toolbar. With PrefBar you can switch JavaScript, Flash, colors, images, cookies… on and off with one mouse click, and you can enter a list of user agent names to choose the user agent while browsing.

So I’ve asked Manuel Reimer to create a referrer spoofer widget, and he kindly delivered it with PrefBar 3.4.1. Thank you Manuel!

To activate referrer spoofing in your PrefBar toolbar install or update Prefbar to 3.4.1, then download the Referer Spoof Menulist 1.0, click “Customize” on the toolbar and import the file. Then click on “Edit” to add all the referrer URLs you need for testing purposes, and enjoy. It works great.
