Archived posts from the 'Google' Category

Save bandwidth costs: Dynamic pages can support If-Modified-Since too

Conditional HTTP GET requests make Webmasters and crawlers happy.

When search engine crawlers burn way too much of your bandwidth, this post is for you. Crawlers sent out by the major search engines (Google, Yahoo, and MSN/Live Search) support conditional GETs, which means they don't fetch your pages if those haven't changed since the last crawl.

Of course they must fetch your stuff over and over again for this comparison if your Web server doesn't play nice with Web robots, or with other user agents that can cache your pages and other Web objects like images. The protocol your Web server and the requestors use to handle caching is quite simple, but its implementation can become tricky. Here is how it works:

1st request Feb/10/2008 12:00:00

Googlebot requests /some-page.php from your server. Since Google has just discovered your page, there are no unusual request headers, just a plain GET.

You create the page from a database record which was modified on Feb/09/2008 10:00:00. Your server sends Googlebot the full page (5k) with these HTTP headers
Date: Sun, 10 Feb 2008 12:00:00 GMT
Last-Modified: Sat, 09 Feb 2008 10:00:00 GMT

(let's assume your server is located in Greenwich, UK); the HTTP response code is 200 (OK).

Bandwidth used: 5 kilobytes for the page contents plus less than 500 bytes for the HTTP header.

2nd request Feb/17/2008 12:00:00

Googlebot found interesting links pointing to your page, so it requests /some-page.php again to check for updates. Since Google already knows the resource, Googlebot requests it with an additional HTTP header
If-Modified-Since: Sat, 09 Feb 2008 10:00:00 GMT

where the date and time is taken from the Last-Modified header you’ve sent in your response to the previous request.

You didn't change the page's record in the database, hence there's no need to send the full page again. Your Web server sends Googlebot just these HTTP headers
Date: Sun, 17 Feb 2008 12:00:00 GMT
Last-Modified: Sat, 09 Feb 2008 10:00:00 GMT

The HTTP response code is 304 (Not Modified). (Your Web server can suppress the Last-Modified header, because the requestor has this timestamp already.)

Bandwidth used: Less than 500 bytes for the HTTP header.

3rd request Feb/24/2008 12:00:00

Googlebot can't resist recrawling /some-page.php, again using the
If-Modified-Since: Sat, 09 Feb 2008 10:00:00 GMT

header.

You've updated the database on Feb/23/2008 09:00:00, adding a few paragraphs to the article, thus you send Googlebot the full page (now 7k) with these HTTP headers
Date: Sun, 24 Feb 2008 12:00:00 GMT
Last-Modified: Sat, 23 Feb 2008 09:00:00 GMT

and an HTTP response code 200 (OK).

Bandwidth used: 7 kilobytes for the page contents plus less than 500 bytes for the HTTP header.

Further requests

Provided you don't change the contents again, all further chats between Googlebot and your Web server regarding /some-page.php will burn less than 500 bytes of your bandwidth each. Say Googlebot requests this page weekly; that's roughly 370k of bandwidth saved annually. You do the math. Even with a medium-sized Web site you most likely want to implement proper caching, right?

Not only Webmasters love conditional GET requests that save bandwidth costs and processing time; search engines aren't keen on useless data transfers either. So let's see how you can respond efficiently to conditional GET requests from search engines. Apache handles caching of static files (e.g. .txt or .html files you upload with FTP) differently from dynamic contents (script outputs, with or without a query string in the URI).

Static files

Fortunately, Apache comes with native support for the Last-Modified / If-Modified-Since / Not-Modified functionality. That means that crawlers and your Web server don't produce too much network traffic when a requested static file didn't change since the last crawl.

You can test your Web server's conditional GET support with your robots.txt, or, if even your robots.txt is a script, create a tiny HTML page with a text editor and upload it via FTP. Another neat tool to check HTTP headers is the Live HTTP Headers extension for Firefox (bear in mind that testing crawler behavior with Web browsers is fault-prone by design).

If your second request of an unchanged static file results in a 200 HTTP response code instead of a 304, call your hosting service. If it works and you have only static pages, then bookmark this article and move on.

Dynamic contents

Everything you output with server-side scripts is dynamic content by definition, regardless of whether the URI has a query string or not. Even if you just read and print out a static file (one that never changes) with PHP, Apache doesn't add the Last-Modified header, so crawlers can't perform conditional requests with an If-Modified-Since header.

With dynamic content you can't rely on Apache's caching support; you must handle it yourself.

The first step is figuring out where your CMS or eCommerce software hides the timestamps telling you the date and time of a page's last modification. Usually a script pulls its stuff from different database tables, hence a page contains more than one area, or block, of dynamic contents. Every block might have a different last-modified timestamp, but not every block is important enough to serve as the page's determining last-modified date. The same goes for templates. Most template tweaks shouldn't trigger a full-blown recrawl, but some do, for example a new address or phone number if such information is present on every page.

For example, a blog has posts, pages, comments, categories and other data sources that can change the sidebar's contents quite frequently. On a page that outputs a single post or page, the last-modified date is determined by the post's modification date or by its most recent comment, whichever is newer. The main page's last-modified date is the modified timestamp of the most recent post, and the same goes for its paginated continuations. A category page's last-modified date is determined by the category's most recent post, and so on.

New posts can change outgoing links of older posts when you use plugins that list related posts and the like. There are many more reasons why search engines should crawl older posts at least monthly or so. You might need a routine that bumps a blog page's last-modified timestamp, for example when it is more than 30 days or so in the past. Also, in some cases it could make sense to have a routine that resets all timestamps reported as last-modified date for particular site areas, or even the whole site.

If your software doesn't populate last-modified attributes on changes of all entities, then consider database triggers, stored procedures, or changes to your data access layer. Bear in mind that not every change of a record must trigger a crawler cache reset. For example, a table storing textual contents like articles or product descriptions usually has a number of attributes that don't affect crawling, so it should have a "last updated" attribute that's changeable in the UI and serves as the last-modified date in your crawler cache control (instead of the timestamp that's changed automatically even on minor updates of attributes which are meaningless for HTML output).
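If you go the data access layer route, a sketch of the idea might look like the following PHP function; it's an assumption for illustration only, not code from any particular CMS, and the table/column names (articles, seo_last_modified) are made up. It bumps the crawler-facing timestamp solely when attributes that matter for the rendered HTML change.

<?php
// Sketch: update a dedicated crawler-facing timestamp only when content-relevant
// attributes change. Table and column names (articles, seo_last_modified) are
// hypothetical.
function updateArticle(mysqli $db, $id, $title, $body) {
    $stmt = $db->prepare("SELECT title, body FROM articles WHERE id = ?");
    $stmt->bind_param("i", $id);
    $stmt->execute();
    $stmt->bind_result($oldTitle, $oldBody);
    $stmt->fetch();
    $stmt->close();

    if ($title !== $oldTitle || $body !== $oldBody) {
        // Content change: bump the timestamp that crawlers get as Last-Modified.
        $sql = "UPDATE articles SET title = ?, body = ?, seo_last_modified = NOW() WHERE id = ?";
    }
    else {
        // Housekeeping-only save: leave seo_last_modified untouched.
        $sql = "UPDATE articles SET title = ?, body = ? WHERE id = ?";
    }
    $stmt = $db->prepare($sql);
    $stmt->bind_param("ssi", $title, $body, $id);
    $stmt->execute();
    $stmt->close();
}
?>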

Handling Last-Modified, If-Modified-Since, and Not-Modified HTTP headers with PHP/Apache

Below I provide example PHP code I've thrown together after midnight during a sleepless night, doped with painkillers. It doesn't run on a production system, but it should get you started. Adapt it to your needs and make sure you test your stuff intensively. As always, my stuff comes as-is, without any guarantees. ;)

First grab a couple of helpers and put them in an include file that's available in all scripts. Since we deal with HTTP headers, you must not output anything before the logic that deals with conditional search engine requests, not even a single whitespace character or HTML DOCTYPE declaration.
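The original helper code was only available via the JavaScript toggle on the live page and isn't reproduced here. The following is a minimal sketch of what such helpers could look like, using the function names referenced in the snippets below; treat the implementations as assumptions, not the author's originals. The 13-hour offset in makeLastModifiedTimestamp() matches the deviation mentioned further down.

<?php
// Assumed reimplementations of the helpers used below - a sketch, not the original code.

// Returns the If-Modified-Since request header as a Unix timestamp,
// or FALSE if the request isn't conditional (or the date is unparsable).
function getIfModifiedSince() {
    if (empty($_SERVER["HTTP_IF_MODIFIED_SINCE"])) return FALSE;
    return strtotime($_SERVER["HTTP_IF_MODIFIED_SINCE"]);
}

// Formats a Unix timestamp as an HTTP-date, e.g. "Sat, 09 Feb 2008 10:00:00 GMT".
function unixTimestamp2HttpDate($timestamp) {
    return gmdate("D, d M Y H:i:s", $timestamp) . " GMT";
}

// Converts a MySQL datetime value ("2008-02-09 10:00:00") to a Unix timestamp.
function date2UnixTimestamp($mysqlDatetime) {
    return strtotime($mysqlDatetime);
}

// Converts a Unix timestamp back to a MySQL datetime value.
function unixTimestamp2MySqlDatetime($timestamp) {
    return date("Y-m-d H:i:s", $timestamp);
}

// Reports a Last-Modified value slightly fresher than the actual one to cover
// clock drift between your server and the crawlers (here: +13 hours).
function makeLastModifiedTimestamp($lastModified) {
    return $lastModified + (13 * 60 * 60);
}
?>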

In general, all user agents should support conditional GET requests, not only search engine crawlers. If you allow long-lasting caching (which is fine for search engines that don't need to crawl the latest Twitter message from your blog's sidebar), you could leave your visitors with somewhat outdated pages if you serve them 304 Not Modified responses too.

It might be a good idea to limit 304 responses to conditional GET requests from crawlers when you don't implement much shorter caching cycles for other user agents. The latter include folks who spoof their user agent name, as well as scrapers trying to steal your stuff masked as legit spiders. To verify legit search engine crawlers that (should) support conditional GET requests (from Google, Yahoo, MSN and Ask) you can grab my crawler detection routines here. Include them as well; then you can code stuff like this:

$isSpiderUA = checkCrawlerUA();
$isLegitSpider = checkCrawlerIP(__FILE__);
if ($isSpiderUA && !$isLegitSpider) {
    // make sure your 403-Forbidden ErrorDocument directive in
    // .htaccess points to a page that explains the issue!
    @header("HTTP/1.1 403 Thou shalt not spoof", TRUE, 403);
    exit;
}
if ($isLegitSpider) {
    // insert your code dealing with conditional GET requests
}

Now that you’re sure that the requestor is a legit crawler from a major search engine, look at the HTTP request header it has submitted to your Web server.

// look up the HTTP request headers for a possible conditional GET
$ifModifiedSinceTimestamp = getIfModifiedSince();
// if the request is not conditional, don't send a 304
$canSend304 = FALSE;
if ($ifModifiedSinceTimestamp !== FALSE) {
    $canSend304 = TRUE;

    // tell the requestor that you've recognized the conditional GET
    $echoRequestHeader = "X-Requested-If-modified-since: "
        .unixTimestamp2HttpDate($ifModifiedSinceTimestamp);
    @header($echoRequestHeader, TRUE);
}

You don’t need to echo the If-Modified-Since HTTP-date in the response header, but this custom header makes testing easier.

Next get the page’s actual last-modified date/time. Here is an (incomplete) code sample for a WordPress single post page.

// select the requested post's comment_count, post_modified and
// post_date values, then:
if ($wp_post_modified) {
    $lastModified = date2UnixTimestamp($wp_post_modified);
}
else {
    $lastModified = date2UnixTimestamp($wp_post_date);
}
if (intval($wp_comment_count) > 0) {
    // select the last comment from the WordPress database, then:
    $lastCommentTimestamp = date2UnixTimestamp($wp_comment_date);
    if ($lastCommentTimestamp > $lastModified) {
        $lastModified = $lastCommentTimestamp;
    }
}

The date2UnixTimestamp() function accepts MySQL datetime values as valid input. If you need to (re)write last-modified dates to a MySQL database, convert the Unix timestamps to MySQL datetime values with unixTimestamp2MySqlDatetime().

Your server's clock isn't necessarily synchronized with all search engines out there. To cover possible gaps you can report a last-modified timestamp that's a little bit fresher than the actual last-modified date. In this example the timestamp reported to the crawler is last-modified + 13 hours; you can change the offset in makeLastModifiedTimestamp().
$lastModifiedTimestamp = makeLastModifiedTimestamp($lastModified);

When you compare the timestamps later on, make sure an unconditional request doesn't run into the 304 routine.
if ($ifModifiedSinceTimestamp === FALSE) {
    // make things equal if the request isn't conditional
    $ifModifiedSinceTimestamp = $lastModifiedTimestamp;
}

You may want to allow a full fetch if the requestor’s timestamp is ancient, in this example older than one month.
$tooOld = @strtotime("now") - (31 * 24 * 60 * 60);
if ($ifModifiedSinceTimestamp < $tooOld) {
    $lastModifiedTimestamp = @strtotime("now");
    $ifModifiedSinceTimestamp = @strtotime("now") - (1 * 24 * 60 * 60);
}

Setting the last-modified attribute to yesterday schedules the next full crawl after this fetch in 30 days (or later, depending on the actual crawl frequency).

Finally, respond with 304 Not Modified if the page wasn't noticeably changed since the date/time given in the crawler's If-Modified-Since header. Otherwise send a Last-Modified header with a 200 HTTP response code, allowing the crawler to fetch the page contents.
$lastModifiedHeader = "Last-Modified: " .unixTimestamp2HttpDate($lastModifiedTimestamp);
if ($lastModifiedTimestamp < $ifModifiedSinceTimestamp &&
    $canSend304) {
    @header($lastModifiedHeader, TRUE, 304);
    exit;
}
else {
    @header($lastModifiedHeader, TRUE);
}

When you test your version of this script with a browser, it will send a standard HTTP request, and your server will return a 200 OK. Your browser should pick up the Last-Modified header from the response, so when you reload the page the browser should send an If-Modified-Since header, and you should get the 304 response code when the page's Last-Modified date isn't newer than the If-Modified-Since value. However, judging from my experience, such browser-based tests of crawler behavior, respectively of responses to crawler requests, aren't reliable.

Test it with this MS tool instead. I've played with it for a while and it works great. With the PHP code above I've created a 200/304 test page
http://sebastians-pamphlets.com/tools/last-modified-yesterday.php
that sends a "Last-Modified: yesterday" response header and should return a 304 Not Modified HTTP response code when you request it with an If-Modified-Since header of "today" or later; otherwise it should respond with 200 OK (this version returns 200 OK only, but tells you when it would respond with a 304). You can use this URI with the MS tool linked above to test HTTP requests with different If-Modified-Since headers.

Have fun and paypal me 50% of your savings. ;)




The hacker tool MSN-LiveSearch is responsible for brute force attacks

401 = Private Property, keep out!

A while ago I staged a public SEO contest, asking whether the 401 HTTP response code prevents search engine indexing or not.

Password-protected site areas should be safe from indexing, because legit search engine crawlers do not submit user/password combos. Hence their attempts to fetch a password-protected URL bounce with a 401 HTTP response code, which translates to a polite "Authorization Required", meaning "Forbidden unless you provide valid authorization".

Experience and common sense tell search engines that when a Webmaster protects content with a user/password prompt, this content is not available to the public. Search engines that respect Webmasters and site owners do not point their users to protected content.

Indexing such URLs also makes no sense for the search engine. Searchers submitting a query with keywords that match a protected URL would be pissed when they click the promising search result on the SERP, only to have the linked site respond with an unfriendly "Enter user and password in order to access [title of the protected area]" that resolves to a harsh error message, because the searcher can't provide such information and usually can't even sign up from the 401 error page [1].

Evil use of search results

Unfortunately, search results that contain URLs of password-protected content are valuable tools for hackers. Many content management systems and payment processors that Webmasters use to protect and monetize their contents leave footprints in URLs, for example /members/. Even when those systems can handle individual URLs, many Webmasters leave default URLs in place that are either guessable or well known on the Web.

Developing a script that searches for a string like /members/ in URLs and then "tests" the search results with brute force attacks is a breeze. Such scripts are also available (for a few bucks, or even free) at various places. Without the help of a search engine that provides lists of protected URLs, the hacker's job is way more complicated. In other words, search engines that list protected URLs on their SERPs willingly support and encourage hacking, content theft, and DoS-like server attacks.

Ok, let's look at the test results. All search engines have cast their votes now. Here are the winners:

Google :)

Once my test was out, Matt Cutts from Google researched the question and told me:

My belief from talking to folks at Google is that 401/forbidden URLs that we crawl won’t be indexed even as a reference, so .htacess password-protected directories shouldn’t get indexed as long as we crawl enough to discover the 401. Of course, if we discover an URL but didn’t crawl it to see the 401/Forbidden status, that URL reference could still show up in Google.

Well, that's exactly the expected behavior, and I wasn't surprised that my test results confirm Matt's statement. Thanks to Google's BlitzIndexing™, Ms. Googlebot spotted the 401 so fast that the URL never showed up on Google's SERPs. Google reports the protected URL in my Webmaster Console account for this blog as not indexable.

Yahoo :)

Yahoo’s crawler Slurp also fetched the protected URL in no time, and Yahoo did the right thing too. I wonder whether or not that’s going to change if M$ buys Yahoo.

Ask :)

Ask’s crawler isn’t the most diligent Web robot out there. However, somehow Ask has managed not to index a reference to my password protected URL.

And here is the ultimate loser:

MSN LiveSearch :(

Oh well. Obviously MSN LiveSearch is a must-have in a deceitful cracker's toolbox:

MSN LiveSearch indexes password protected URLs

As if indexing references to password-protected URLs weren't crappy enough, MSN even indexes sitemap files that are referenced in robots.txt only. Sitemaps are machine-readable URL submission files that have absolutely no value for humans. Webmasters use sitemap files to mass-submit their URLs to search engines. The sitemap protocol, which MSN officially supports, defines a communication channel between Webmasters and search engines - not searchers, and especially not scrapers that can use indexed sitemaps to steal Web contents more easily. Here is a screen shot of an MSN SERP:

MSN LiveSearch indexes unlinked sitemaps files (MSN SERP)
MSN LiveSearch indexes unlinked sitemaps files (MSN Webmaster Tools)

All the other search engines got the sitemap submission of the test URL too, but none of them fell for it. Neither Google, Yahoo, nor Ask has indexed the sitemap file (they never index submitted sitemaps that have no inbound links, by the way) or its protected URL.

Summary

All major search engines except MSN respect the 401 barrier.

Since MSN LiveSearch is well known for spamming, it’s not a big surprise that they support hackers, scrapers and other content thieves.

Of course MSN search is still an experiment, operating in a not-yet-ready-to-launch stage, and the big players made their mistakes in the beginning too. But MSN has a history of ignoring Web standards as well as Webmaster concerns. It took them two years to implement the pretty simple sitemaps protocol, they still can't handle 301 redirects, their sneaky stealth bots spam the referrer logs of Web sites everywhere in order to fake human traffic from MSN SERPs (MSN traffic doesn't exist in most niches), and so on. Once pointed to such crap, they don't even fix the simplest bugs in a timely manner. I mean, not complying with the HTTP 1.1 protocol from the last century is evidence of incapacity, and that's just one example.

 

Update Feb/06/2008: Last night I received an email from Microsoft confirming the 401 issue. The MSN Live Search engineer said they are currently working on a fix, and he provided me with an email address to report possible further issues. Thank you, Nathan Buggia! I'm still curious how MSN Live Search will handle sitemap files in the future.

 


[1] Smart Webmasters provide sign-up as well as login functionality on the page referenced as ErrorDocument 401, but the majority of all failed logins leave the user alone with the short hard-coded 401 message that Apache outputs if there's no 401 error document. Please note that you shouldn't use a PHP script as the 401 error page, because this might disable the user/password prompt (due to a PHP bug). With a static 401 error page that fires up on invalid user/pass entries or a hit on the cancel button, you can perform a meta refresh to redirect the visitor to a signup page. Bear in mind that in .htaccess you must not use absolute URLs (http://… or https://…) in the ErrorDocument 401 directive, and that on the error page you must use absolute URLs for CSS, images, links and whatnot, because relative URIs don't work there!




Google removes the #6 penalty/filter/glitch

Google removed the position six penalty.

After the great #6 Penalty SEO Panel, Google's head of the webspam dept. Matt Cutts dug out a misbehaving algo and sent it back to the developers. Two hours ago he stated:

When Barry asked me about "position 6" in late December, I said that I didn't know of anything that would cause that. But about a week or so after that, my attention was brought to something that could exhibit that behavior.

We’re in the process of changing the behavior; I think the change is live at some datacenters already and will be live at most data centers in the next few weeks.

 

So everything is fine now. Matt penalizes the position-six software glitch, and lost top positions will revert to their former rankings in a while. Well, not really. Nobody will compensate for the income losses, or for the time Webmasters spent on forums discussing a suspected penalty that actually was a bug or a weird side effect. However, kudos to Google for listening to concerns, and for tracking down and fixing the algo. And thanks for the update, Matt.




Avoiding the well known #4 SERP-hero-penalty …

Seb the red claw

… I just have to link to North South Media's neat collection of Search Action Figures.

Paul pretty much dislikes folks who don't link to him, so Danny Sullivan and Rand Fishkin are well advised to drop a link every now and then, and David Naylor had better give him an interview slot asap. ;)

Google’s numbered “penalties”, esp. #6

As for numeric penalties in general … repeat("Sigh") … enjoy this brains trust moderated by Marty Weintraub (unauthorized):

Marty: Folks, please welcome Aaron Wall, who recently got his #6 penalty removed!

Audience: clap(26) sphinn(26)

The Gypsy: Sorry Marty but come on… this is complete BS and there is NO freakin #6 filter just like the magical minus 90…900 bla bla bla. These anomalies NEVER have any real consensus on a large enough data set to even be considered a viable theory.

A Red Crab: As long as Bill can't find a plus|minus-n-raise|penalty patent, or at least a white paper or so leaked out from Google, or for all I care a study that provides proof instead of weird assumptions based on claims of webmasters jumping on today's popular WMW bandwagon that are neither plausible nor verifiable, such beasts don't exist. There are unexplained effects that might look like a pattern, but in most cases it makes no sense to gather a few examples coming with similarities, because we'll never reach the critical mass of anomalies to discuss a theory worth more than a thumbs-down click.

Marty: Maybe Aaron is joking. Maybe he thinks he has invented the next light bulb.

Gamermk: Aaron is grasping at straws on this one.

Barry Welford: I would like this topic to be seen by many.

Audience: clap(29) sphinn(29)

The Gypsy: It is just some people that have DECIDED on an end result and are trying to make various hypotheses fit the situation (you know, like tobacco lobby scientists)… this is simply bad form IMO.

Danny Sullivan: Well, I’ve personally seen this weirdness. Pages that I absolutely thought “what on earth is that doing at six” rather than at the top of the page. Not four, not seven — six. It was freaking weird for several different searches. Nothing competitive, either.

I don't know that sixth was actually some magic number. Personally, I've felt like there's some glitch or problem with Google's ranking that has prevented the most authoritative page in some instances from being at the top. But something was going on.

Remember, there's no sandbox, either. We got that for months and months, until eventually it was acknowledged that there were a range of filters that might produce a "sandbox like" effect.

The biggest problem I find with these types of theories is they often start with a specific example, sometimes that can be replicated, then they become a catch-all. Not ranking. Oh, it's the sandbox. Well no — not if you were an established site, it wasn't. The sandbox was typically something that hit brand new sites. But it became a common excuse for anything, producing confusion.

Jim Boykin: I'll jump in and say I truly believe in the 6 filter. I've seen it. I wouldn't have believed it if I hadn't seen it happen to a few sites.

Audience: clap(31) sphinn(31)

A Red Crab: Such terms tend to take on a life of their own, IOW an excuse for nearly every way a Webmaster can fuck up rankings. Of course Google's query engine has thresholds (yellow cards or whatever they call them) that don't allow some sites to rank above a particular position, but that's a symptom that doesn't allow back-references to a particular cause, or causes. It's speculation as long as we don't know more.

IncrediBill: I definitely believe it's some sort of filter or algo tweak but it's certainly not a penalty which is why I scoff at calling it such. One morning you wake up and Matt has turned all the dials to the left and suddenly some criteria bumps you UP or DOWN. Sites have been going up and down in Google SERPs for years, nothing new or shocking about that and this too will have some obvious cause and effect that could probably be identified if people weren't using the shotgun approach at changing their site.

G1smd: By the time anyone works anything out with Google, they will already be in the process of moving the goalposts to another country.

Slightly Shady SEO: The #6 filter is a fallacy.

Old School: It certainly occurred but only affected certain sites.

Danny Sullivan: Perhaps it would have been better called a -5 penalty. Consider. Say Google for some reason sees a domain but decides good, but not sure if I trust it. Assign a -5 to it, and that might knock some things off the first page of results, right?

Look — it could all be coincidence, and it certainly might not necessarily be a penalty. But it was weird to see pages that for the life of me, I couldn’t understand why they wouldn’t be at 1, showing up at 6.

Slightly Shady SEO: That seems like a completely bizarre penalty. Not Google’s style. When they’ve penalized anything in the past, it hasn’t been a “well, I guess you can stay on the frontpage” penalty. It’s been a smackdown to prove a point.

Matt Cutts: Hmm. I’m not aware of anything that would exhibit that sort of behavior.

Audience: Ugh … oohhhh … you weren’t aware of the sandbox, either!

Danny Sullivan: Remember, there's no sandbox, either. We got that for months and months, until eventually it was acknowledged that there were a range of filters that might produce a "sandbox like" effect.

Audience: Bah, humbug! We so want to believe in our lame excuses …

Tedster: I’m not happy with the current level of analysis, however, and definitely looking for more ideas.

Audience: clap(40) sphinn(40)


Of course the panel above is fictional, or rather assembled from snippets which in some cases change the message when you read them in their context. So please follow the links.

I wouldn't go so far as to say there's no such thing as a fair number of Web pages that deserve a #1 spot on Google's SERPs but rank #6 for unknown reasons (perhaps link monkey business, staleness, PageRank flow in disarray, anchor text repetitions, …). There's something worth investigating.

However, I think that labelling a discussion of glitches, or maybe of filters that don't behave, based on a way too tiny dataset as a "#6 penalty" leads to the "lame excuse for literally anything" phenomenon.

Folks who don't follow the various threads closely enough to spot the highly speculative character of the beast will take it as fact and switch to winter sleep mode instead of enhancing their stuff like Aaron did. I can't wait for the first "How to escape the Google -5 penalty" SEO tutorial telling the great unwashed that a "+5" revisit-after meta tag will heal it.




Getting URLs outta Google - the good, the popular, and the definitive way

Keep out, Google!

There's more and more robots.txt talk in the SEOsphere lately. That's a good thing in my opinion, because the good old robots.txt's power is underestimated. Unfortunately it's quite often misused or even abused too, usually because folks don't fully understand the REP (by following "advice" from forums instead of reading the real thing, or at least my stuff).

I'd like to discuss the REP's capabilities that are supposed to make sure Google doesn't index particular contents, from three angles:

The good way
If the major search engines supported new robots.txt directives that Webmasters really need, removing even huge chunks of content from Google's SERPs –without collateral damage– via robots.txt would be a breeze.
The popular way
Shamelessly stealing Matt’s official advice [Source: Remove your content from Google by Matt Cutts]. To obscure the blatant plagiarism, I’ll add a few thoughts.
The definitive way
Of course that’s not the ultimate way, but that’s the way Google’s cookies crumble, currently. In other words: Google is working on a leaner approach, but that’s not yet announced, thus you can’t use it; you still have to jump through many hoops.

The good way

Caution: Don’t implement code from this section, the robots.txt directives discussed here are not (yet/fully) supported by search engines!

Currently all robots.txt statements are crawler directives. That means they can tell behaving search engines how to crawl a site (fetching contents), but they have no impact on indexing (listing contents on SERPs). I've recently published a draft discussing possible REP tags for robots.txt. REP tags are indexer directives known from robots meta tags and X-Robots-Tags, which (as on-page, respectively per-URL, directives) require crawling.

The crux is that REP tags must be assigned to URLs. Say you've got a gazillion printer-friendly pages in various directories that you want to deindex at Google; putting the "noindex,follow,noarchive" tags on all of them comes with a shitload of work.

How cool would this robots.txt code be instead:
Noindex: /*printable
Noarchive: /*printable

Search engines would continue to crawl, but deindex previously indexed URLs respectively not index new URLs from
/articles/printable/*.htm
/manuals/printable/*.pdf
/products/descriptions/*.php?format=printable&product=*
...

provided those URLs aren’t disallow’ed. They would follow the links in those documents, so that PageRank gathered by printer friendly pages wouldn’t be completely wasted. To apply an implicit rel-nofollow to all links pointing to printer friendly pages, so that those can’t accumulate PageRank from internal or external links, you’d add
Norank: /*printable

to the robots.txt code block above.

If you don't like that search engines index stuff you've disallow'ed in your robots.txt based on 3rd party signals like inbound links, and that Google accumulates PageRank even for disallow'ed URLs, you'd put:
Disallow: /unsearchable/
Noindex: /unsearchable/
Norank: /unsearchable/

To fix URL canonicalization issues with PHP session IDs and other tracking variables you’d write for example
Truncate-variable sessionID: /

and that would fix the duplicate content issues as well as the problem with PageRank accumulated by throw-away URLs.

Unfortunately, robots.txt is not yet that powerful, so please link to the REP tags for robots.txt "RFC" to make it popular, and proceed with what you have at the moment.

The popular way

Matt Cutts was kind enough to discuss Google's take on contents excluded from search engine indexing in 10 minutes or less here:

You really should listen, the video isn’t that long.

In the following I’ve highlighted a few methods Matt has talked about:

Don’t link (very weak)
Although Google usually doesn’t index unlinked stuff, this can happen due to crawling based on sitemaps. Also, the URL might appear in linked referrer stats on other sites that are crawlable, and folks can link from the cold.
.htaccess / .htpasswd (Matt’s first recommendation)
Since Google cannot crawl password-protected contents, Matt declares this method of preventing content from indexing safe. I'm not sure what will happen when I spread a few strong links to somebody's favorite smut collection; perhaps I'll test some day whether Google and other search engines list such a reference on their SERPs.
robots.txt (weak)
Matt rightly points out that Google's cool robots.txt validator in the Webmaster Console is a great tool to develop, test and deploy proper robots.txt syntax that effectively blocks search engine crawling. The weak point is that even when search engines obey robots.txt, they can index uncrawled content from 3rd party sources. Matt is proud of Google's smart capabilities to figure out suitable references like the ODP. I agree totally and wholeheartedly. Hence robots.txt in its current shape doesn't prevent content from showing up in Google and other engines as well. Matt didn't mention Google's experiments with Noindex: support in robots.txt, which need improvement but could resolve this dilemma.
Robots meta tags (Google only, weak with MSN/Yahoo)
The REP tag "noindex" in a robots meta element prevents indexing and, once spotted, deindexes previously listed stuff - at least at Google. According to Matt, Yahoo and MSN still list such URLs as references without snippets. Because only Google obeys "noindex" totally, by wiping out even URL-only listings and foreign references, robots meta tags should be considered a kinda weak approach too. Also, search engines must crawl a page to discover this indexer directive. Matt adds that robots meta tags are problematic because they're buried on the pages and sometimes tend to get forgotten when no longer needed (Webmasters might forget to take the tag down, respectively add it later on when search engine policies change, or when work in progress gets released respectively outdated contents are taken down). Matt forgot to mention the neat X-Robots-Tag that can be used to apply REP tags in the HTTP header of non-HTML resources like images or PDF documents. Google's X-Robots-Tag is supported by Yahoo too.
Rel-nofollow (kind of weak)
Although condoms totally remove links from Google's link graphs, Matt says that rel-nofollow should not be used as a crawler or indexer directive. Rel-nofollow is for condomizing links only; other search engines do follow nofollow'ed links, and even Google can discover the link destination from other links they gather on the Web, or grab it from internal links inadvertently lacking a link condom. Finally, rel-nofollow requires crawling too.
URL removal tool in GWC (Matt’s second recommendation)
Taking Matt's enthusiasm while talking about Google's neat URL terminator into account, this one should be considered his first recommendation. Google has provided tools to remove URLs from their search index for at least five years (way longer IIRC). Recently the Webmaster Central team has integrated those, as well as new functionality, into the Webmaster Console, giving it a very nice UI. The URL removal tools come with great granularity, and because the user's site ownership is verified, they're pretty powerful and safe, and even show the progress of each request (the removal process lasts a few days). The UI is very flexible and allows even revoking of previous removal requests. The wonderful little tool's sole weak point is that it can't remove URLs from the search index forever. After 90 days, or possibly six months, the erased stuff can pop up again.

Summary: If your site isn't password protected, and you can't live with indexing of disallow'ed contents, you must remove unwanted URLs from Google's search index periodically. However, there are additional procedures that can support (but not guarantee!) deindexing. With other search engines it's even worse, because those don't respect the REP as Google does, and don't provide such handy URL removal tools.

The definitive way

Actually, I think Matt's advice is very good, as long as you don't need a permanent solution, or if you lack the programming skills to develop such a beast that works with all (major) search engines. I mean, everybody can insert a robots meta tag or robots.txt statement, and everybody can semiyearly repeat URL removal requests with the neat URL terminator, but most folks are scared when it comes to conditional manipulation of HTTP headers to prevent stuff from indexing. However, in the following examples I'll try to explain quite safe methods that actually work (with Apache, not IIS).

First of all, if you really want search engines not to index your stuff, you must allow them to crawl it. And no, that's not an oxymoron. At the moment there's no such thing as an indexer directive on site level. You can't forbid indexing in robots.txt. All indexer directives require crawling of the URLs that you want to keep out of the SERPs. Of course that doesn't mean you should serve search engine crawlers a book from each forbidden URL.

Let's start with robots.txt. You put
User-agent: *
Disallow: /images/
Disallow: /movies/
Disallow: /unsearchable/
 
User-agent: Googlebot
Disallow:
Allow: /
 
User-agent: Slurp
Disallow:
Allow: /

The first section is just a fallback.

(Here comes a rather brutal method that you can use to keep search engines out of particular directories. It's not suitable for dealing with duplicate content, session IDs, or other URL canonicalization issues. More on that later.)

Next edit your .htaccess file.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/unsearchable/
RewriteCond %{REQUEST_URI} !\.php
RewriteRule . /unsearchable/output-content.php [L]
</IfModule>

If you've got .php pages in /unsearchable/ then remove the second rewrite condition, put output-content.php into another directory, and edit my PHP code below so that it includes those PHP scripts (don't forget to pass the query string; a sketch of that variant follows the walk-through below).

Now grab the PHP code to check for search engine crawlers here and include it below. Your script /unsearchable/output-content.php looks like:
<?php
@include("crawler-stuff.php"); // defines variables and functions
$isSpider = checkCrawlerIP($requestUri);
if ($isSpider) {
    @header("HTTP/1.1 403 Thou shalt not index this", TRUE, 403);
    @header("X-Robots-Tag: noindex,noarchive,nosnippet,noodp,noydir");
    exit;
}

$arr = explode("#", $requestUri);
$outputFileName = $arr[0];
$arr = explode("?", $outputFileName);
$outputFileName = $_SERVER["DOCUMENT_ROOT"] .$arr[0];
if (substr($outputFileName, -1, 1) == "/") {
    $outputFileName .= "index.html";
}
if (file_exists($outputFileName)) {
    // send the content type header
    $contentType = "text/plain";
    if (stristr($outputFileName, ".html")) $contentType = "text/html";
    if (stristr($outputFileName, ".css")) $contentType = "text/css";
    if (stristr($outputFileName, ".js")) $contentType = "text/javascript";
    if (stristr($outputFileName, ".png")) $contentType = "image/png";
    if (stristr($outputFileName, ".jpg")) $contentType = "image/jpeg";
    if (stristr($outputFileName, ".gif")) $contentType = "image/gif";
    if (stristr($outputFileName, ".xml")) $contentType = "application/xml";
    if (stristr($outputFileName, ".pdf")) $contentType = "application/pdf";
    @header("Content-type: $contentType");
    @header("X-Robots-Tag: noindex,noarchive,nosnippet,noodp,noydir");
    readfile($outputFileName);
    exit;
}

// That's not the canonical way to call the 404 error page. Don't copy, adapt:
@header("HTTP/1.1 307 Oups, I displaced $outputFileName", TRUE, 307);
@header("Location: http://sebastians-pamphlets.com/404/");
exit;
?>

What does the gibberish above do? In .htaccess we rewrite all requests for resources stored in /unsearchable/ to a PHP script, which checks whether the request is from a search engine crawler or not.

If the requestor is a verified crawler (known IP, or IP and host name, belonging to a major search engine's crawling engine), we return an unfriendly X-Robots-Tag and an HTTP response code 403, telling the search engine that access to our content is forbidden. The search engines should assume that a human visitor receives the same response, hence they aren't keen on indexing these URLs. Even if a search engine lists an URL on the SERPs by accident, it can't tell the searcher anything about the uncrawled contents. That's unlikely to happen actually, because the X-Robots-Tag forbids indexing (Ask and MSN might ignore these directives).

If the requestor is a human visitor, or an unknown Web robot, we serve the requested contents. If the file doesn’t exist, we call the 404 handler.

With dynamic content you must handle the query string and (expected) cookies yourself. PHP’s readfile() is binary safe, so the script above works with images or PDF documents too.
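Picking up the note above about .php pages under /unsearchable/: here's a minimal sketch of that variant, assuming output-content.php sits outside the rewritten directory (this is not the author's original code). Because an internal rewrite keeps the original query string, $_GET is already populated, so a plain include usually runs the target script with its expected parameters.

<?php
// Sketch: execute requested .php scripts instead of streaming static files.
// Insert after the crawler check; path sanity checks are omitted for brevity.
$path = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);
$scriptFile = $_SERVER["DOCUMENT_ROOT"] . $path;
if (substr($path, -4) == ".php" && file_exists($scriptFile)) {
    @header("X-Robots-Tag: noindex,noarchive,nosnippet,noodp,noydir");
    include($scriptFile);
    exit;
}
?>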

If you run an original search engine crawler coming from a verifiable server, feel free to test it with this page (user agent spoofing doesn't qualify as a crawler; come back in a week or so to check whether the engines have indexed the unsearchable stuff linked above).

The method above is not only brutal, it wastes all the juice from links pointing to the unsearchable site areas. To rescue the PageRank, change the script as follows:

$urlThatDesperatelyNeedsPageRank = "http://sebastians-pamphlets.com/about/";
if ($isSpider) {
    @header("HTTP/1.1 301 Moved permanently", TRUE, 301);
    @header("Location: $urlThatDesperatelyNeedsPageRank");
    exit;
}

This redirects crawlers to the URL that has won your internal PageRank lottery. Search engines will/shall transfer the reputation gained from inbound links to this page. Of course page by page redirects would be your first choice, but when you block entire directories you can’t accomplish this kind of granularity.

By the way, when you remove the offensive 403-forbidden stuff in the script above and change it a little more, you can use it to apply various X-Robots-Tags to your HTML pages, images, videos and whatnot. When a search engine finds an X-Robots-Tag in the HTTP header, it should ignore conflicting indexer directives in robots meta tags. That’s a smart way to steer indexing of bazillions of resources without editing them.

Ok, this was the cruel method; now let's discuss cases where telling crawlers how to behave is a royal PITA, thanks to the lack of indexer directives in robots.txt that provide the required granularity (Truncate-variable, Truncate-value, Order-arguments, …).

Say you’ve session IDs in your URLs. That’s one (not exactly elegant) way to track users or affiliate IDs, but strictly forbidden when the requestor is a search engine’s Web robot.

In fact, a site with unprotected tracking variables is a spider trap that would produce infinite loops in crawling, because spiders following internal links with those variables discover new redundant URLs with each and every fetch of a page. Of course the engines have found suitable procedures to dramatically reduce their crawling of such sites, which results in fewer indexed pages. Besides joyless index penetration there's another disadvantage: the indexed URLs are powerless duplicates that usually rank beyond the sonic barrier of 1,000 results per search query.

Smart search engines perform highly sophisticated URL canonicalization to get a grip on such crap, but Webmasters can't rely on Google & Co. to fix their site's maladies.

Ok, we agree that you don't want search engines to index your ugly URLs, duplicates, and whatnot. To properly steer indexing, you can't just block the crawlers' access to URLs/contents that shouldn't appear on SERPs. Search engines discover most of those URLs when following links, and that means they're ready to assign PageRank or other link popularity scoring to your URLs. PageRank/linkpop is a ranking factor you shouldn't waste. Every URL known to search engines is an asset, so handle it with care. Always bother to figure out the canonical URL, then do a page-by-page permanent redirect (301).

For your URL canonicalization you should have an include file that's available at the very top of all your scripts, executed before PHP sends anything to the user agent (don't hack each script; maintaining so many places handling the same stuff is a nightmare and fault-prone). In this include file put the crawler detection code and your individual routines that handle canonicalization and other search engine friendly cloaking routines.

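The code example originally linked here (stripping useless query string variables) isn't reproduced in this archive. A minimal sketch of such an include could look like this; the list of stripped parameters is illustrative, and it's an assumption rather than the author's original routine.

<?php
// Sketch of a canonicalization include: strip tracking variables, order the
// remaining query string arguments alphabetically, and 301 to the canonical URL.
$uselessVars = array("sessionid", "sid", "affid", "ref"); // illustrative list

$path = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);
$params = $_GET;
foreach ($uselessVars as $name) {
    unset($params[$name]);
}
ksort($params); // canonical argument order
$canonicalQuery = http_build_query($params);
$canonicalUri = $path . ($canonicalQuery != "" ? "?" . $canonicalQuery : "");

// Compare carefully (URL encoding!) to avoid redirect loops.
if ($canonicalUri != $_SERVER["REQUEST_URI"]) {
    @header("HTTP/1.1 301 Moved Permanently", TRUE, 301);
    @header("Location: http://" . $_SERVER["HTTP_HOST"] . $canonicalUri);
    exit;
}
?>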

How you implement the actual canonicalization routines depends on your individual site. I mean, if you didn't have the coding skills necessary to accomplish that, you wouldn't have read this entire section, would you?

    Here are a few examples of pretty common canonicalization issues:

  • Session IDs and other stuff used for user tracking
  • Affiliate IDs and IDs used to track the referring traffic source
  • Empty values of query string variables
  • Query string arguments put in different order / not checking the canonical sequence of query string arguments (ordering them alphabetically is always a good idea)
  • Redundant query string arguments
  • URLs longer than 255 bytes
  • Server name confusion, e.g. subdomains like “www”, “ww”, “random-string” all serving identical contents from example.com
  • Case issues (IIS/clueless code monkeys handling GET-variables/values case-insensitive)
  • Spaces, punctuation, or other special characters in URLs
  • Different scripts outputting identical contents
  • Flawed navigation, e.g. passing the menu item to the linked URL
  • Inconsistent default values for variables expected from cookies
  • Accepting undefined query string variables from GET requests
  • Contentless pages, e.g. outputted templates when the content pulled from the database equals whitespace or is not available

Summary

Hiding contents from all search engines requires programming skills that many sites can’t afford. Even leading search engines like Google don’t provide simple and suitable ways to deindex content –respectively to prevent content from indexing– without collateral damage (lost/wasted PageRank). We desperately need better tools. Maybe my robots.txt extensions are worth an inspection.




My plea to Google - Please sanitize your REP revamps

Standardization of REP tags as robots.txt directives

Google is confused on REP standards and robots.txt.

This draft is kind of a request for comments for search engine staff and uber search geeks interested in the progress of Robots Exclusion Protocol (REP) standardization (actually, every search engine maintains its own REP standard). It's based on and extends the robots.txt specifications from 1994 and 1996, as well as additions supported by all major search engines. Furthermore it considers work in progress leaked out from Google.

In the following I’ll try to define a few robots.txt directives that Webmasters really need.


Currently Google experiments with new robots.txt directives, that is, REP tags like "noindex" adapted for robots.txt. That's a welcome and brilliant move.

Unfortunately, they got it totally wrong, again. (Skip the longish explanation of the rel-nofollow fiasco and my rant on Google’s current robots.txt experiments.)

Google's last try to enhance the REP by adapting a REP tag's value on another level was a miserable failure. Not because crawler directives on link level are a bad thing (the opposite is true), but because the implementation of rel-nofollow confused the hell out of Webmasters, and still does.

Rel-Nofollow or how Google abused standardization of Web robots directives for selfish purposes

Don’t get me wrong, an instrument to steer search engine crawling and indexing on link level is a great utensil in a Webmaster’s toolbox. Rel-nofollow just lacks granularity, and it was sneakily introduced for the wrong purposes.

Recap: When Google launched rel-nofollow in 2005, they promoted it as a tool to fight comment spam.

From now on, when Google sees the attribute (rel=”nofollow”) on hyperlinks, those links won’t get any credit when we rank websites in our search results. This isn’t a negative vote for the site where the comment was posted; it’s just a way to make sure that spammers get no benefit from abusing public areas like blog comments, trackbacks, and referrer lists.

Technically speaking, this translates to "search engine crawlers shall/can use rel-nofollow links for discovery crawling, but indexers and ranking algos processing links must not credit link destinations with PageRank, anchor text, or other link juice originating from rel-nofollow links". Rel="nofollow" meant rel="pass-no-reputation".

All blog platforms implemented the beast, and it seemed that Google got rid of a major problem (gazillions of irrelevant spam links manipulating their rankings). Not so the bloggers, because the spammers didn't bother to check whether a blog dofollows inserted links or not. Despite all the condomized links, the amount of blog comment spam increased dramatically, since the spammers were forced to attack even more blogs in order to earn the same amount of uncondomized links from blogs that hadn't updated to a software version supporting rel-nofollow.

Experiment failed, move on to better solutions like Akismet, captchas or ajax'ed comment forms? Nope, it's not that easy. Google had a hidden agenda. Fighting blog comment spam was just a snake-oil sales pitch, an opportunity to establish rel-nofollow by jumping on a popular bandwagon. In 2005 Google had already mastered the guestbook spam problem. Devaluing comment links in well-structured pages like blog posts is as easy as doing the same with guestbook links, or identifying affiliate links. In other words, when Google launched rel-nofollow, blog comment spam was definitely not a major search quality issue any more.

Identifying paid links, on the other hand, is not that easy, because they often appear as editorial links within the content. And that was a major problem for Google, a problem they weren't able to solve algorithmically without the cooperation of all Webmasters, site owners, and publishers. Google actually invented rel-nofollow to get a grip on paid links. Recently they announced that Googlebot no longer follows condomized links (pre-Bigdaddy, Google followed condomized links and indexed contents discovered from rel-nofollow links), and their cold war on paid links became hot.

Of course the sneaky morphing of rel-nofollow from “pass no reputation” to a full blown “nofollow” is just a secondary theater of war, but without this side issue (with regard to REP standardization) Google would have lost, hence it was decisive for the outcome of their war on paid links.

To be fair, Danny Sullivan said twice that rel-nofollow is Dave Winer's fault, and that Google, as the victim, is not to blame.

Rel-nofollow is settled now. However, I don’t want to see Google using their enormous power to manipulate the REP for selfish goals again. I wrote this rel-nofollow recap because probably, or possibly, Google is just doing it once more:

Google’s “Noindex: in robots.txt” experiment

Google supports a Noindex: directive in robots.txt. It seems Google's Noindex: blocks crawling like Disallow:, but additionally prevents URLs blocked with Noindex: both from accumulating PageRank and from being indexed based on 3rd party signals like inbound links.

This functionality would be nice to have, but accomplishing it with "Noindex" is badly wrong. The REP's "Noindex" value without an explicit "Nofollow" means "crawl it, follow its links, but don't list it on the SERPs". With page-level directives (robots meta tags and X-Robots-Tags) Google handles "Noindex" exactly as defined, that is, with an implicit "Follow". Not so in robots.txt. Mixing crawler directives (Disallow:) with indexer directives (Noindex:) this way takes the "Follow" out of the game, because a search engine can't follow links from uncrawled documents.

Webmasters will not understand that "Noindex" means totally different things in robots.txt and in meta tags. Also, this approach steals granularity that we need, for example for use with technically structured sitemap pages and other hubs.

According to Google their current interpretation of Noindex: in robots.txt is not yet set in stone. That means there’s an opportunity for improvement. I hope that Google, and other search engines as well, listen to the needs of Webmasters.

Dear Googlers, don't take the above as Google bashing. I know, and have often written, that Google is the search engine that puts the most effort into boring tasks like REP evolvement. I just think that a dog company like Google needs to take real-world Webmasters into the boat when playing with standards like the REP, for the sake of the cats. ;)

Recap: Existing robots.txt directives

The /path example in the following sections refers to any way to assign URIs to REP directives, not only complete URIs relative to the server’s root. Patterns can be useful to set crawler directives for a bunch of URIs:

  • *: any string in path or query string, including the query string delimiter “?”, multiple wildcards should be allowed.
  • $: end of URI
  • Trailing /: (not exactly a pattern) addresses a directory, its files and subdirectories, the subdirectories' files, etc., for example
    • Disallow: /path/
      matches /path/index.html but not /path.html
    • /path
      matches both /path/index.html and /path.html, as well as /path_1.html. It’s a pretty common mistake to “forget” the trailing slash in crawler directives meant to disallow particular directories. Such mistakes can result in blocking script/page-URIs that should get crawled and indexed.

Please note that patterns aren’t supported by all search engines, for example MSN supports only file extensions (yet?).

User-agent: [crawler name]
Groups a set of instructions for a particular crawler. Crawlers that find their own section in robots.txt ignore the User-agent: * section that addresses all Web robots. Each User-agent: section must be terminated with at least one empty line.

Disallow: /path
Prevents from crawling, but allows indexing based on 3rd party information like anchor text and surrounding text of inbound links. Disallow’ed URLs can gather PageRank.

Allow: /path
Refines previous Disallow: statements. For example
Disallow: /scripts/
Allow: /scripts/page.php

tells crawlers that they may fetch http://example.com/scripts/page.php or http://example.com/scripts/page.php?article=1, but not any other URL in http://example.com/scripts/.

Sitemap: [absolute URL]
Announces XML sitemaps to search engines. Example:
Sitemap: http://example.com/sitemap.xml
Sitemap: http://example.com/video-sitemap.xml

points all search engines that support Google’s Sitemaps Protocol to the sitemap locations. Please note that sitemap autodiscovery via robots.txt doesn’t replace sitemap submissions. Google, Yahoo and MSN provide Webmaster Consoles where you not only can submit your sitemaps, but follow the indexing process (wishful thinking WRT particular SEs). In some cases it might be a bright idea to avoid the default file name “sitemap.xml” and keep the sitemap URLs out of robots.txt, sitemap autodiscovery is not for everyone.

Recap: Existing REP tags

REP tags are values that you can use in a page’s robots meta tag and X-Robots-Tag. Robots meta tags go to the HTML document’s HEAD section
<meta name="robots" content="noindex, follow, noarchive" />

whereas X-Robots-Tags supply the same information in the HTTP header
X-Robots-Tag: noindex, follow, noarchive

and thus can instruct crawlers how to handle non-HTML resources like PDFs, images, videos, and whatnot.
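For example, on Apache you could attach such a header to all PDF files with a few lines in httpd.conf or .htaccess. This is a sketch, not a copy-and-paste recipe: it assumes mod_headers is enabled, and bear in mind that currently only Google honors the X-Robots-Tag:
<FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>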

Widely supported REP tags are:

  • INDEX|NOINDEX - Tells whether the page may be indexed (listed on SERPs) or not
  • FOLLOW|NOFOLLOW - Tells whether crawlers may follow links provided in the document or not
  • ALL|NONE - ALL = INDEX, FOLLOW (default), NONE = NOINDEX, NOFOLLOW
  • NOODP - tells search engines not to use page titles and descriptions pulled from DMOZ on their SERPs.
  • NOYDIR - tells Yahoo! search not to use page titles and descriptions from the Yahoo! directory on the SERPs.
  • NOARCHIVE - Google specific, used to prevent archiving (cached page copy)
  • NOSNIPPET - Prevents Google from displaying text snippets for your page on the SERPs
  • UNAVAILABLE_AFTER: RFC 850 formatted timestamp - Removes a URL from Google’s search index a day after the given date/time
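For instance, a robots meta tag using the latter could look like this (a sketch; the timestamp is just Google’s documented example format, and at the time only Google supports the tag):
<meta name="googlebot" content="unavailable_after: 25-Aug-2007 15:00:00 EST" />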

Problems with REP tags in robots.txt

REP tags (index, noindex, follow, nofollow, all, none, noarchive, nosnippet, noodp, noydir, unavailable_after) were designed as page-level directives. Setting those values for groups of URLs makes steering search engine crawling and indexing a breeze, but it also adds complexity and a few pitfalls.

  • Page-level directives are instructions for indexers and query engines, not crawlers. A search engine can’t obey REP tags without crawling the resource that supplies them. That means that no REP tag used as a robots.txt statement should be misunderstood as a crawler directive.

    For example Noindex: /path must not block crawling, not even in combination with Nofollow: /path, because there’s still the implicit “archive” (= absence of Noarchive: /path). Providing a cached copy even of a non-indexed page makes sense for toolbar users.

    Whether or not a search engine actually crawls a resource that’s tagged with “noindex, nofollow, noarchive, nosnippet” or so is up to the particular SE, but none of those values implies a Disallow: /path.

  • Historically, a crawler instruction at the HTML element level overrules the robots meta tag. For example, when the meta tag says “follow” for all links on a page, the crawler will not follow a link that is condomized with rel=”nofollow”.

    Does that mean that a robots meta tag overrules a conflicting robots.txt statement? Not in every case, of course. Robots.txt is the gatekeeper, so to speak the “highest REP instance”. Actually, there’s no absolute answer to this question that satisfies everybody.

    A Webmaster sitting on a huge conglomerate of legacy code may want to switch entirely to robots.txt directives, meaning search engines shall ignore all the BS in ancient meta tags of pages created in the stone age of the Internet. Back then the rules were different. An alternative/secondary landing page’s “index,follow” from 1998 most probably doesn’t fly with 2008’s duplicate content filters and highly sophisticated link pattern analytics.

    The Webmaster of a well designed brand new site on the other hand might be happy with a default behavior where page-level REP tags overrule site-wide directives in robots.txt.

  • REP tags used in robots.txt might refine crawler directives. For example a disallow’ed URL can accumulate PageRank, and may be listed on SERPs. We need at least two different directives ruling PageRank calculation and indexing for uncrawlable resources (see below under Noodp:/Noydir:, Noindex: and Norank:).

    Google’s current approach of handling this with the Noindex: directive alone is not acceptable; we need a new REP tag for this case. Also, when we introduce a new REP tag for use in robots.txt, we should allow it in meta tags and HTTP headers too.

  • In theory it makes no sense to maintain a directive that describes a default behavior. But why does the REP have “follow” when the absence of “nofollow” perfectly expresses “follow”? Because of the way non-geeks think (try explaining to a non-geek why the value nil/null doesn’t equal empty/zero/blank. Not!).

    Implicit directives that aren’t explicitly named and described in the rules don’t exist for the masses. Even in the 10 commandments someone had to write “thou shalt not hotlink|scrape|spam|cloak|crosslink|hijack…” instead of a no-brainer like “publish unique and compelling content for people and make your stuff crawlable”. Unfortunately, that works the other way round too. If a statement (Index: or Follow:) is dependent on another one (Allow:, respectively the absence of Disallow:), folks will whine, rant and argue when search engines ignore their stuff.

    Obviously we need at least Index:, Follow: and Archive: to keep the standard usable and somewhat understandable. Of course crawler directives might thwart such indexer directives. Ignorant folks will write alphabetically ordered robots.txt files like
    Disallow: /cgi-bin/
    Disallow: /content/
    ...
    Follow: /cgi-bin/redirect.php
    Follow: /content/links/
    ...
    Index: /content/articles/

    without Allow: /content/links/, Allow: /content/articles/ and Allow: /cgi-bin/redirect.

    Whether or not indexer directives that require crawling can overrule the crawler directive Disallow: is open for discussion. I vote for “not” (see the corrected sketch after this list).

  • Applying REP tags at site level would be great, but it doesn’t solve other problems like the need for directives at block and element level. Neither Google’s section targeting nor Yahoo’s robots-nocontent class name is an acceptable tool for instructing search engines how to handle content in particular page areas (advertising blocks, navigation and other templated stuff, links in footers or sidebar elements, and so on).

    Instead of editing bazillions of pages, templates, include files and whatnot to insert rel-nofollow/nocontent stuff for the sole purpose of sucking up to search engines, we need an elegant way to apply such micro-directives via robots.txt, or at least site-wide sets of instructions referenced in robots.txt. Once that’s doable, Webmasters will make use of such tools to improve their rankings, and not only to comply with the ever changing search engine policies that cost the Webmaster community billions of man hours each year.

    I consider these robots.txt statements sexy:
    Nofollow a.advertising, div#adblock, span.cross-links: /path
    Noindex .inherited-properties, p#tos, p#privacy, p#legal: /path

    but that’s a wish list for another post. However, while designing site-wide REP statements we should at least think of block/element level directives.
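Coming back to the alphabetically ordered robots.txt above: if indexer directives may not overrule Disallow:, that file would need explicit Allow: lines to work as intended. A sketch using the proposed, currently unsupported syntax:
Disallow: /cgi-bin/
Disallow: /content/
Allow: /cgi-bin/redirect.php
Allow: /content/articles/
Allow: /content/links/
Follow: /cgi-bin/redirect.php
Follow: /content/links/
Index: /content/articles/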

Remember the rel-nofollow fiasco, where a REP tag was used at the HTML element level, producing so much confusion and so many conflicts. Let’s learn from past mistakes and get it right this time. A perfect standard can be complex, but it’s clear and unambiguous.

Priority settings

The REP’s command hierarchy must be well defined:

  1. robots.txt
  2. Page meta tags and X-Robots-Tags in the HTTP header. X-Robots-Tag values overrule conflicting meta tag values.
  3. [Future block level directives]
  4. Element level directives like rel-nofollow

That means that when crawling is allowed, page-level instructions overrule robots.txt, and element-level (or future block-level) directives overrule page-level instructions as well as robots.txt, as long as the Webmaster doesn’t reverse this default:

Priority-page-level: /path
Default behavior: directives in robots meta tags overrule robots.txt statements. Necessary to reset a previous Priority-site-level: statement.

Priority-site-level: /path
Robots.txt directives overrule conflicting directives in robots meta tags and X-Robots-Tags.

Priority-site-level All: /path
Robots.txt directives overrule all directives in robots meta tags or provided elsewhere, because those are completely ignored for all URIs under /path. The “All” parameter would even dofollow nofollow’ed links when the robots.txt lacks corresponding Nofollow: statements.
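A sketch of how the legacy-code scenario described above could be handled with these proposed directives; none of this syntax is supported by any engine, and the /legacy/ path is made up:
Priority-site-level: /legacy/
Noindex: /legacy/

That way a stone-age “index,follow” meta tag on pages under /legacy/ couldn’t override the site-wide Noindex:.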

Noindex: /path

Follow outgoing links, archive the page, but don’t list it on SERPs. The URLs can accumulate PageRank etcetera. Deindex previously indexed URLs.

[Currently Google doesn’t crawl Noindex’ed URLs and most probably those can’t accumulate PageRank, hence URLs in /path can’t distribute PageRank. That’s plain wrong. Those URLs should be able to pass PageRank to outgoing links when there’s no explicit Nofollow:, and no “nofollow” meta tag or X-Robots-Tag.]

Norank: /path

Prevents URLs from accumulating PageRank, anchor text, and whatever link juice.

Makes sense to refine Disallow: statements in combination with Noindex: and Noodp:/Noydir:, or to prevent TOS/contact/privacy/… pages and the like from sucking up PageRank (nofollow’ing TOS links and stuff like that to control PageRank flow is fault-prone).

Nofollow: /path

The uber-link-condom. Don’t use outgoing links, not even internal links, for discovery crawling. Don’t credit the link destinations with any reputation (PageRank, anchor text, and whatnot).

Noarchive: /path

Don’t make a cached copy of the resource available to searchers.

Nosnippet: /path

List the resource with linked page title on SERPs, but don’t create a text snippet, and don’t reprint the description meta tag.

[Why don’t we have a REP tag saying “use my description meta tag or nothing”?]

Nopreview: /path

Don’t create/link an HTML preview of this resource. That’s interesting for subscription sites and applies mostly to PDFs, Word documents, spreadsheets, presentations, and other non-HTML resources. More information here.

Noodp: /path

Don’t use the DMOZ title or the DMOZ description for this URL on SERPs, not even when this resource is a non-HTML document that doesn’t supply its own title/meta description.

Noydir: /path

I’m not sure this one makes sense in robots.txt, because only Yahoo search uses titles and descriptions from the Yahoo directory. Anyway: “Don’t overwrite the page title listed on the SERPs with information pulled from the Yahoo directory, although I paid for it.”

Unavailable_after [date]: /path

Deindex the resource the day after [date]. The parameter [date] can be given in any date or date/time format; if it lacks a timezone, GMT is assumed.

[Google’s RFC 850 obsession is somewhat weird. There are many ways to put a timestamp other than “25-Aug-2007 15:00:00 EST”.]

Truncate-variable [string|pattern]: /path

Truncate-value [string|pattern]: /path

In the search index remove the unwanted variable/value pair(s) from the URL’s query string and transfer PageRank and other link juice to the matching URL without those parameters. If this “bare URL” redirects, or is uncrawlable for other reasons, index it with the content pulled from the page with the more complex URL.

Regardless of whether the variable name or the variable’s value matches the pattern, “Truncate-*” statements remove the complete argument from the query string, that is &variable=value. If the query string is empty after the (last) truncate operation, the query string delimiter “?” (question mark) must be removed too.
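A sketch of the proposed syntax, with a hypothetical session-ID parameter:
Truncate-variable sessionid: /shop/

would fold /shop/item.php?id=5&sessionid=abc123 into /shop/item.php?id=5 and transfer its link juice to the shorter URL.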

Order-arguments [charset]: /path

Sort the query strings of all dynamic URLs by variable name, then within the ordered variables by their values. Pick the first URL from each set of identical results as canonical URL. Transfer PageRank etcetera from all dupes to the canonical URL.

Lots of sites out there were developed by coders who are utterly challenged by all things SEO. Most Web developers don’t even know what URL canonicalization means. Those sites suffer from tons of URLs that all serve identical contents, just because the query string arguments are put in random order, usually inventing a new sequence for each script, function, or include file. Of course most search engines run highly sophisticated URL canonicalization routines to keep too much duplicate content out of their indexes, but those algos can fail because every Web site is different.

I can totally resist suggesting a Canonical-uri /: /Default.asp statement that gathers up all the IIS default-document-URI maladies. Also, case issues shouldn’t get fixed with Case-insensitive-uris: / but by the clueless developers in Redmond.

Will all this come true?

Well, Google has silently started to support REP tags in robots.txt. It totally makes sense for search engines as well as for Webmasters, and Joe Webmaster’s life would be way more comfortable with REP tags in robots.txt.

A better question would be “will search engines implement REP tags for robots.txt in a way that Webmasters can live with?”. Although Google launched the sitemaps protocol without significant help from the Webmaster community, I strongly feel that they desperately need our support with this move.

Currently it looks like they will fuck up the REP, or rather the robots.txt standard, so go grab your AdWords rep and choke her/him until s/he promises to involve Larry, Sergey, Matt, Adam, John, and the whole Webmaster Support Team for the sake of common sense and the worldwide Webmaster community. Thank you!




No more RSS feeds in Google’s search results

Folks try all sorts of naughty things when, by accident, a blog’s feed outranks the HTML version of a post. Usually that happens mostly to not-so-popular blogs, or with very old posts and category feeds that contain ancient articles.

The problem seems to be that Google’s Web search doesn’t understand the XML structure of feeds, so that a feed’s textual contents get indexed like stuff from text files. Due to “subscribe” buttons and other links, feeds can gather more PageRank than some HTML pages. Interestingly .xml is considered an unknown file type, and advanced search doesn’t provide a way to search within XML files.

Now that has changed [1]. Googler Bogdan Stănescu posts on the German Webmaster blog [2] “We remove feeds from our search results”:

As Webmasters, many of you were probably worried that your RSS or Atom feeds could outrank the accompanying HTML pages in Google’s search results. The appearance of feeds in our search results could make for a poor user experience:

1. Feeds increase the probability that the user gets the same search result twice.

2. Users who click on the feed link on a SERP may miss out on valuable content, which is only available on the HTML page referenced in the XML file.

For these reasons, we have removed feeds from our Web search results - with the exception of podcasts (feeds with media files).

[…] We are aware that in addition to the podcasts out there some feeds exist that are not linked with an HTML page, and that is why it is not quite ideal to remove all feeds from the search results. We’re still open for feedback and suggestions for improvements to the handling of feeds. We look forward to your comments and questions in the crawling, indexing and ranking section of our discussion forum for Webmasters. [Translation mine]

I’m not yet sure whether or not that will end in a ban of all/most XML documents. I hope they suppress RSS/Atom feeds only, and provide improved ways to search for and within other XML resources.

So what does that mean for blog SEO? Unless Google provides a procedure to prevent feeds from accumulating PageRank whilst allowing access for blog search crawlers that request feeds (I believe something like that is in the works), it’s still a good idea to nofollow all feed links, but there’s absolutely no reason to block them in robots.txt any more.

I think that’s a great move in the right direction, though only a preliminary solution. The XML structure of feeds isn’t that hard to parse, and there are only so many ways to extract the URL of the HTML page. So when a relevant feed lands in a raw result set, Google should display a link to the HTML version on the SERP. What do you think?


[1] Danny reminded me that, according to Matt Cutts, that’s been going on for a few months now.

[2] 24 hours later Google published the announcement in English too.




Google to change the Robots Exclusion Protocol again

Web crawler directives, partly standardized in the Robots Exclusion Protocol (REP), have evolved since 1994. Nowadays we have to deal with a conglomerate of non-binding de facto standards and microformats, all of them extended by various organizations. All search engines claim that they obey “the standard”, but they refer to their very own REP implementation. In fact, each search engine supports a proprietary set of REP directives that, as a rule, differs from the other players’.

Google is the search engine putting the most effort into Robots Exclusion Protocol (REP) evolvements. Their XML Sitemaps, which handle submissions instead of crawl restrictions, widened the REP’s scope; the X-Robots-Tag brought us robots meta tags for non-HTML resources like PDF documents, images or video clips; and with Unavailable_after Google made a few clueless news sites happy. With the rel-nofollow microformat on the other hand, or rather its sneaky morphing from a spam fighting tool to its current shape, Google made nobody happy. Yahoo contributed the well meant but half-assed “robots-nocontent” class name, and of course “noydir” (it’s unlikely that any other engine will support those).

Now Google is working on new robots.txt syntax, and I am, politely put, not amused. Here is why I fear that Google is going to totally mess up the REP:

Google supports a “Noindex:” directive in robots.txt, which is treated as “Disallow:” 1). Of course that’s an experiment, but if this behavior doesn’t change we’ll get a beast that is –with regard to the confusion it will produce– way more evil than the rel-nofollow fiasco.

  • A noindex-alias for disallow makes no sense, even when such syntax errors are out there.
  • Mixing crawler directives (allow/disallow) with indexer directives (noindex) is not always a bright idea. It’s bad enough that most Webmasters still believe that “Googlebot ranks their stuff”. (Actually, in some cases it can make sense. For example “nofollow” in robots meta tags (or at least for Google in REL attributes too) is both a crawler instruction as well as an indexer directive.)
  • Noindex and disallow are completely different commands. The REP’s noindex directive means “crawl it, follow its links, but don’t list it on the SERPs”. Disallow forbids crawling, but allows indexing URLs from directory listings or other inbound links.

Standards should be clear and unambiguous. Google must not redefine syntax and semantics that were in widespread use before Google even existed. I admit they have the power to fuck up the REP, but they also have “do no evil”.

Considering that Google is run by a bunch of smart engineers, I hope that they’ll do the right thing eventually. The right thing in this case is giving more power to REP evolvements, before questionable and selfish anti-search initiatives like ACAP ruin both the robots.txt consensus and the robots meta tag standard.

My idea of more power to REP evolvements is:

  • Sensible implementation of crawler/indexer directives adapted from REP tags in robots.txt. Applying page-level instructions ((no)index, (no)follow, noarchive, nosnippet, noodp/noydir, unavailable_after and hopefully nopreview) to groups of URIs is a great way to steer crawling and indexing, especially for sites which for various reasons cannot make use of the HTTP header’s X-Robots-Tag.
  • Implementation of block-level directives in robots.txt. Allowing Webmasters to apply crawler instructions like “noindex” or “nofollow” to particular page areas, like advertising blocks, duplicated text or repetitive navigation elements, addressed via HTML element names and class names and/or DOM-IDs, would be a very flexible instrument to steer crawling and indexing, and it could eliminate many points of failure.
  • Getting Webmasters, Publishers, SEOs and all major engines together to discuss possibly missing granularity and to develop a binding norm obeyed by all players.

The last one sounds like wishful thinking. The alternative is that Google (and, if possible, the bigger engines) talk with Webmasters and then launch the necessary REP extensions. The other engines will follow sooner or later. The publishers, although not getting all their desired ACAP restrictions, will be happy too. Standards like the Robots Exclusion Protocol should be developed by engineers.


1) Noindex: is not a plain Disallow:, there’s an interesting difference. In Google’s experiment both directives block crawling, but Disallow: allows URL-indexing based on 3rd party information, and Disallow:‘ed URLs can accumulate PageRank from internal as well as external links. Noindex:‘ed URLs on the other hand will not appear on SERPs as a URL-only listing or with an ODP title and snippet, and I’m quite sure that they will not gather PageRank or other link juice. That means links from any pages to such URLs get an implicit rel-nofollow in Google’s PageRank calculation, just like dangling links. This apparatus could be a great way to handle PageRank leaks (monthly blog archives, printer friendly pages and stuff like that), because shit happens, hence some links to such pages will slip through without condom. I admit that’s a neat idea, but its implementation is flawed because it doesn’t consider the implicit Follow: (that’s syntax Google doesn’t support in robots.txt). A better way to mark site areas which shall not gather PageRank, without raping the REP, would be a Norank: directive or so. Noindex: without a Nofollow: must not block crawling. Googlebot must fetch those URLs to follow their links.




Validate your robots.txt - Googlebot becomes smarter

Last week I reported that Google is experimenting with new crawler directives for use in robots.txt. Today Google confirmed that Googlebot understands experimental REP syntax like Noindex:.

That means that forgotten –and, until recently, ignored– statements in your robots.txt might change the crawler’s behavior all of a sudden, without notice. I don’t know for sure which experimental crawler directives Google has implemented, but for example a line like
Noindex: /
in your robots.txt will now deindex your complete Web site.

“Noindex:” is not defined in the Robots Exclusion Protocol from 1994, and not mentioned in Google’s official documents.

John Müller from Google Zürich states:

At the moment we will usually accept the “noindex” directive in the robots.txt, but we are not yet at a point where we are willing to set it into stone and announce full support.

[…] I just want to remind everyone again that this is something that may still change over time. Be careful when playing with things like this.

My understanding of “be careful” is:

  • Create a separate section for Googlebot. Do not rely on directives addressing all Web robots. Especially when you have a Googlebot section already, Google’s crawler will ignore directives set under “all user agents” and process only the Googlebot section. Repeat all statements from User-agent: * under User-agent: Googlebot to make sure that Googlebot obeys them (see the example after this list).
  • RTFM
  • Do not use other crawler directives than
    Disallow:
    Allow:
    Sitemap:
    in the Googlebot section.
  • Don’t mess up pattern matching.
    * matches a sequence of characters
    $ specifies the end of the URL
    ? separates the path from the query string; you can’t use it as a wildcard!
  • Validate your robots.txt with the cool robots.txt analyzer in your Google Webmaster Console.
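A minimal sketch of what that advice boils down to (the paths are made up):
User-agent: *
Disallow: /temp/
Disallow: /print/

User-agent: Googlebot
Disallow: /temp/
Disallow: /print/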

Folks put the funniest stuff into their robots.txt, for example images or crawl delays like “Don’t crawl this site during our office hours”. Crawler directives from robots meta tags aren’t very popular, but they appear in many robots.txt files. Hence it makes sound sense to use what people express, regardless of the syntax errors.
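Crawl-delay: is a good example of such non-standard syntax: it’s not part of the 1994 protocol, but at the time of writing Yahoo’s and MSN’s crawlers honor a line like
Crawl-delay: 10

(the value is a delay in seconds between fetches), whereas Googlebot ignores it.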

Also, having the opportunity to manage page-specific crawler directives like “noindex”, “nofollow”, “noarchive” and perhaps even “nopreview” at site level is a huge time saver, and eliminates many points of failure. Kudos to Google for this initiative; I hope it will make it into the standards.

I’ll test the experimental robots.txt directives and post the results. Perhaps I’ll set up a live test like this one.

Take care.


Update: Here is the live test of suspected and/or desired new crawler directives for robots.txt. I’ve added a few unusual statements to my robots.txt and uploaded scripts to monitor search engine crawling. The test pages provide links to search queries so you can check whether Google has indexed them or not.

Please don’t link to the crawler traps, I’ll update this post with my findings. Of course I appreciate links, so here is the canonical URL:
http://sebastians-pamphlets.com/validate-your-robots-txt-or-google-might-deindex-your-site/#live-robots-txt-test

Please note that you should not make use of the crawler directives below on production systems! Bear in mind that you can achieve all that with simple X-Robots-Tags in the HTTP headers. That’s a bullet-proof way to apply robots meta tags to files without touching them, and it works with virtual URIs too. X-Robots-Tags are sexy, but many site owners can’t handle them for various reasons, whereas corresponding robots.txt syntax would be usable for everybody (not suffering from restrictive and/or free hosts).

Noindex:

robots.txt:
Noindex: /repstuff/noindex.php

Expected behavior:
No crawling/indexing. It seems Google interprets “Noindex:” as “Disallow:”.
Desired behavior:
“Follow:” is the REP’s default, hence Google should fetch everything and follow the outgoing links, but shouldn’t deliver Noindex’ed contents on the SERPs, not even as URL-only listings.
Google’s robots.txt validator:
http://sebastians-pamphlets.com/repstuff/noindex.php Blocked by line 30: Noindex: /repstuff/noindex.php
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled (possibly caused by an outdated robots.txt cache).
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex.php.
2007-11-23: indexed and cached a page linked only from noindex.php.
(If an outdated robots.txt cache falsely allowed crawling, the search result(s) should disappear shortly after the next crawl.)
2007-11-26: deindexed, the same goes for the linked page (without recrawling).
2007-12-07: appeared under “URLs restricted by robots.txt” in GWC.
2007-12-17: I consider this case closed. Noindex: blocks crawling, deindexes previously indexed pages, and is suspected to block incoming PageRank.

Nofollow:

robots.txt:
Nofollow: /repstuff/nofollow.php

Expected behavior:
Crawling, indexing, and following the links as if there’s no “Nofollow:”.
Desired behavior:
Crawling, indexing, and ignoring outgoing links.
Google’s robots.txt validator:
Line 31: Nofollow: /repstuff/nofollow.php Syntax not understood
http://sebastians-pamphlets.com/repstuff/nofollow.php Allowed
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from nofollow.php (21 Nov 2007 23:19:37 GMT, for some reason not logged properly).
2007-11-23: indexed and cached a page linked only from nofollow.php.
2007-11-26: recrawled, deindexed, no longer cached. The same goes for the linked page.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:12 GMT” doesn’t match the last crawl on “2007-11-26 16:47:11 EST” (EST = GMT-5).
2007-12-07: recrawled, still deindexed, cached. Linked page recrawled, cached.
2007-12-17: recrawled, still deindexed (probably caused by near duplicate content on noarchive.php and other pages involved in this test), cached copy dated 2007-12-07. Cache of the linked page still dated 2007-11-21. I consider this case closed. Nofollow: doesn’t work as expected, Google doesn’t support this statement.

Noarchive:

robots.txt:
Noarchive: /repstuff/noarchive.php

Expected behavior:
Crawling, indexing, following links, but no “Cached” links on the SERPs and no access to cached copies from the toolbar.
Desired behavior:
Crawling, indexing, following links, but no “Cached” links on the SERPs and no access to cached copies from the toolbar.
Google’s robots.txt validator:
http://sebastians-pamphlets.com/repstuff/noarchive.php Allowed
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noarchive.php.
2007-11-23: indexed and cached a page linked only from noarchive.php.
2007-11-26: recrawled, deindexed, no longer cached. The linked page was deindexed without recrawling.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:19 GMT” doesn’t match the last crawl on “2007-11-26 16:47:18 EST” (EST = GMT-5).
2007-11-29: recrawled, cache not yet updated.
2007-12-07: recrawled. Linked page recrawled.
2007-12-08: recrawled.
2007-12-11: recrawled the linked page, which is cached but not indexed.
2007-12-12: recrawled.
2007-12-17: still indexed, cached copy dated 2007-12-08. I consider this case closed. Noarchive: doesn’t work as expected, actually it does nothing although according to the robots.txt validator that’s supported –or at least known and accepted– syntax.

(It looks like Google understands Nosnippet: too, but I didn’t test that.)

Nopreview:

robots.txt:
Nopreview: /repstuff/nopreview.pdf

Expected behavior:
None, unfortunately.
Desired behavior:
No “view as HTML” links on the SERPs. Neither “nosnippet” nor “noarchive” suppress these helpful preview links, which can be pretty annoying in some cases. See NOPREVIEW: The missing X-Robots-Tag.
Google’s robots.txt validator:
Line 33: Nopreview: /repstuff/nopreview.pdf Syntax not understood
http://sebastians-pamphlets.com/repstuff/nopreview.pdf Allowed
Status:
Crawler requests of nopreview.pdf are logged here.
Google’s crawler / indexer:
2007-11-21: crawled the nopreview-pdf and the log page nopreview.php.
2007-11-23: indexed and cached the log file nopreview.php.
[2007-11-23: I replaced the PDF document with a version carrying a hidden link to an HTML file, and resubmitted it via Google’s add-url page and a sitemap.]
2007-11-26: The old version of the PDF is cached as a “view-as-HTML” version without links (considering the PDF was a captured print job, that’s a pretty decent result), and appears on SERPs for a quoted search. The page linked from the PDF and the new PDF document were not yet crawled.
2007-12-02: PDF recrawled. Googlebot followed the hidden link in the PDF and crawled the linked page.
2007-12-03: “View as HTML” preview not yet updated, the linked page not yet indexed.
2007-12-04: PDF recrawled. The preview link reflects the content crawled on 12/02/2007. The page linked from the PDF is not yet indexed.
2007-12-07: PDF recrawled. Linked page recrawled.
2007-12-09: PDF recrawled.
2007-12-10: recrawled linked page.
2007-12-14: PDF recrawled. Cached copy of the linked page dated 2007-12-11.
2007-12-17: I consider this case closed. Neither Nopreview: nor Noarchive: (in robots.txt since 2007-12-04) are suitable to suppress the HTML preview of PDF files.

Noindex: Nofollow:

robots.txt:
Noindex: /repstuff/noindex-nofollow.php
Nofollow: /repstuff/noindex-nofollow.php

Expected behavior:
No crawling/indexing, invisible on SERPs.
Desired behavior:
No crawling/indexing, and no URL-only listings, ODP titles/descriptions and stuff like that on the SERPs. “Noindex:” in combination with “Nofollow:” is a paraphrased “Disallow:”.
Google’s robots.txt validator:
http://sebastians-pamphlets.com/repstuff/noindex-nofollow.php Blocked by line 35: Noindex: /repstuff/noindex-nofollow.php
Line 36: Nofollow: /repstuff/noindex-nofollow.php Syntax not understood
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex-nofollow.php.
2007-11-23: indexed and cached a page linked only from noindex-nofollow.php.
2007-11-26: deindexed without recrawling, the same goes for the linked page.
2007-11-29: the cached copy retrieved on 11/21 reappeared.
2007-12-08: appeared under “URL restricted by robots.txt” in my GWC acct.
2007-12-17: Case closed, see Noindex: above.

Noindex: Follow:

robots.txt:
Noindex: /repstuff/noindex-follow.php
Follow: /repstuff/noindex-follow.php

Expected behavior:
No crawling/indexing, hence unfollowed links.
Desired behavior:
Crawling, following and indexing outgoing links, but no SERP listings.
Google’s robots.txt validator:
http://sebastians-pamphlets.com/repstuff/noindex-follow.php Blocked by line 38: Noindex: /repstuff/noindex-follow.php
Line 39: Follow: /repstuff/noindex-follow.php Syntax not understood
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex-follow.php.
2007-11-23: indexed and cached a page linked only from noindex-follow.php.
2007-11-26: deindexed without recrawling, the same goes for the linked page.
2007-12-08: appeared under “URL restricted by robots.txt” in my GWC acct.
2007-12-17: Case closed, see Noindex: above. Google didn’t crawl respectively deindexed despite the Follow: directive.

Index: Nofollow:

robots.txt:
Index: /repstuff/index-nofollow.php
Nofollow: /repstuff/index-nofollow.php

Expected behavior:
Crawling/indexing, following links.
Desired behavior:
Crawling/indexing but ignoring outgoing links.
Google’s robots.txt validator:
Line 41: Index: /repstuff/index-nofollow.php Syntax not understood
Line 42: Nofollow: /repstuff/index-nofollow.php Syntax not understood
http://sebastians-pamphlets.com/repstuff/index-nofollow.php Allowed
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from from index-nofollow.php.
2007-11-23: indexed and cached a page linked only from from index-nofollow.php.
2007-11-26: recrawled and deindexed. The linked page was deindexed without recrawling.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:26 GMT” doesn’t match the last crawl on “2007-11-26 16:47:25 EST” (EST = GMT-5).
2007-12-02: recrawled, the cached copy has vanished.
2007-12-07: recrawled. Linked page recrawled.
2007-12-08: recrawled.
2007-12-09: recrawled.
2007-12-10: recrawled.
2007-12-17: cached under 2007-12-10, not indexed. Linked page not cached, not indexed. I consider this case closed. Google currently doesn’t support Index: nor Nofollow:.

(I didn’t test Noodp: and Unavailable_after [RFC 850 formatted timestamp]:, although both directives would make sense in robots.txt too.)

2007-11-20:
Added the experimental statements to robots.txt.

2007-11-21:
Linked the test pages. Google crawled all of them, including the pages submitted via links on test pages.

2007-11-23:
Most (all but the PDF document) URLs appear on search result pages. If an outdated robots.txt cache falsely allowed crawling although the WC-validator said “Blocked”, the search results should disappear shortly after the next crawl. I’ve created a sitemap for all URLs above and submitted it. Although I’ve –for the sake of this experiment– cloaked text as well as links and put white links on white background, luckily there is no “we caught you black hat spammer” message in my Webmaster Console. Googlebot nicely followed the cloaked links and indexed everything.

2007-11-26:
Google recrawled a few pages (noarchive.php, index-nofollow.php and nofollow.php), then deindexed all of them. Only the PDF document is indexed, and Google created a “view-as-HTML” preview from this captured print job. It seems that Google crawled something from a host other than “*.googlebot.com”; unfortunately I didn’t log all requests. Probably the deindexing was done by a sneaky bot discovering the simple cloaking. Since the linked URLs are out and 3rd party links to them can’t ruin the experiment any longer, I’ve stopped cloaking and show the same text/links to bots and users (actually, users see one more link but that should be fine with Google). There’s still no “thou shalt not cloak” message in my GWC account. Well, those pages are fairly new, perhaps not fully settled in the search index, so let’s see what happens next.

2007-11-28
The PDF file as well as the three pages recrawled on 11/26/2007 21:45:00 GMT were reindexed, but the timestamp on the cached copies says “retrieved on 27 Nov 2007 01:15:00 GMT”. Maybe the date/time displayed on cached page copies doesn’t reflect Ms. Googlebot’s “fetched” timestamp, but the time the indexer pulled the page out of the centralized crawl results cache 3.5 hours after crawling.

It seems the “Noarchive:” directive doesn’t work, because noarchive.php was crawled and indexed twice providing a cached page copy. My “Nopreview:” creation isn’t supported either, but maybe Dan Crow’s team picks it up for a future update of their neat X-Robots-Tags (I hope so).

The noindex’ed pages (noindex.php, noindex-nofollow.php and noindex-follow.php) weren’t recrawled and remain deindexed. Interestingly, they don’t appear under “URLs blocked by robots.txt” in my GWC account. Provided the first crawling and indexing on 11/21/2007 was a “mistake” caused by a robots.txt cached for way too long, and the second crawl on 11/26/2007 obeyed the “Noindex:” but ignored the (implicit) “Follow:”, it seems that indeed Google interprets “Noindex:” in robots.txt as “Disallow:”. If that is so and if it’s there to stay, they’re going to totally mess up the REP.

<rant> I mean, promoting a rel-nofollow microformat that –at least at launch time– didn’t share its semantics with the REP’s meta tags nor the –later introduced– X-Robots-Tags was bad enough. Ok, meanwhile they’ve corrected this flaw by altering the rel-nofollow semantics step by step until “nofollow” in the REL attribute actually means nofollow and no longer “pass no reputation”, at least at Google. Other engines still handle rel-nofollow according to the initial and officially still binding standard, and a gazillion Webmasters are confused as hell. In other words, only a few search geeks understand what rel-nofollow is all about, but Google jauntily penalizes the great unwashed for not complying with the incomprehensible. By the way, that’s why I code rel="nofollow crap". Standards should be clear and unambiguous. </rant>

If Google really introduced a “Noindex:” directive in robots.txt that equals “Disallow:”, that would be totally evil. A few sites out there might have an erroneous “Noindex:” statement in their robots.txt that could mean “Disallow:”, and it’s nice that Google tries to do them a favor. Screwing the REP for the sole purpose of complying with syntax errors, on the other hand, makes no sense. “Noindex” means crawl it, follow its links, but don’t index it. Semantically “Noindex: Nofollow:” equals “Disallow:”, but a “Noindex:” alone implies a “Follow:”, hence crawling is not only allowed but required.

I really hope that we’re watching an experiment in its early stage, and that Google will do the right thing eventually. Allowing the REP’s page-specific crawler directives in robots.txt is a fucking brilliant move, because technically challenged publishers can’t handle the HTTP header’s X-Robots-Tag, and applying those directives to groups of URIs is a great method to steer crawling and indexing, and not only for static sites.

Dear Google engineers, please consider the nopreview directive too, and implement (no)index, (no)follow, noarchive, nosnippet, noodp/noydir and unavailable_after with the REP’s meaning. And while you’re at it, I want block level instructions in robots.txt too. For example
Area: /products/ DIV.hMenu,TD#bNav,SPAN.inherited "noindex,nofollow"

could instruct crawlers to ignore duplicated properties in product descriptions and the horizontal menu as well as the navigation elements in a table cell with the DOM-ID “bNav” at the very bottom of all pages in /products/,
Area: / A.advertising REL="nofollow"

could condomize all links with the class name “advertising”, and so on.

2007-11-29
The pages linked from the test pages still don’t come up in search results; noarchive.php was recrawled and remains cached; the cached copy of noindex-nofollow.php retrieved on 11/21/2007 reappeared (probably a DC roller coaster issue).

2007-11-30
Three URLs remain indexed: nopreview.pdf, noarchive.php and noindex-nofollow.php. The cached copies show the content crawled on Nov/21/2007. Everything else is deindexed. That’s probably not here to stay (index roller coaster).
As a side note: the URL from my first noindex-robots.txt test appeared in my GWC account under “URLs restricted by robots.txt (Nov/27/2007)”, three days after the unsuccessful crawl.

2007-12-02
A few pages were recrawled, Googlebot followed the hidden link in the PDF file.

2007-12-03
In my GWC crawl stats noindex-nofollow.php appeared under “URLs restricted by robots.txt”, but it’s still indexed.

2007-12-04
The preview (cache) of nopreview.pdf was updated. Since obviously Nopreview: doesn’t work, I’ve added
Noarchive: /repstuff/nopreview.pdf

to my robots.txt. Let’s see whether Google removes the cache, or rather the HTML preview, or not.

2007-12-06
Shortly after the change in robots.txt (Noarchive: /repstuff/nopreview.pdf) Googlebot recrawled the PDF file on 12/04/2007. Today it’s still cached, the HTML preview is still available and linked from SERPs.

2007-12-07
Googlebot has recrawled a few pages. Everything except noarchive.php and nopreview.pdf is deindexed.

2007-12-17
I consider the test closed, but I’ll keep the test pages up so that you can monitor crawling and indexing yourself. Noindex: is the only directive that somewhat works, but it’s implemented completely wrong and is not acceptable in its current shape.

Interestingly the sitemaps report in my GWC account says that 9 pages from 9 submitted URLs were indexed. Obviously “indexed” means something like “crawled at least once, perhaps indexed, maybe not, so if you want to know that definitively then get your lazy butt to check the SERPs yourself”. How expensive would it be to show something like “Total URLs in sitemap: 9 | Indexed URLs in sitemap: 2”?




Q&A: An undocumented robots.txt crawler directive from Google

Blogging should be fun every now and then. Today I won’t tell you anything new about Google’s secret experiments with the Robots Exclusion Protocol; I’ll ask you instead, because I’m sure you know your stuff. Unfortunately, this Q&A on undocumented robots.txt syntax from Google’s labs utilizes JavaScript, so it may look somewhat weird in your feed reader.

Q: Please look at this robots.txt file and figure out why it’s worth a Q&A with you, my dear reader:


User-Agent: *
Disallow: /
Noindex: /

Ok, click here to show the first hint.

I know, this one was a breeze, so here comes your challenge.
Q: Which crawler directive used in the robots.txt above was introduced in 1996 in the Robots Exclusion Protocol (REP), but was not defined in its very first version from 1994?

Ok, click here to show the second hint.

Congrats, you are smart. I’m sure you don’t need to lookup the next answers.
Q: Which major search engine has a team permanently working on REP extensions and releases those quite frequently, and who is the engineer in charge?

Ok, click here to show the third hint.

Exactly. Now we’ve gathered all the pieces of this robots.txt puzzle.
Q: Could you please summarize your cognitions and conclusions?

Ok, click here to show the fourth hint.

Thank you, dear reader! Now let’s see what we can dig out. If the appearance of a “Noindex:” directive in robots.txt is an experiment, it would make sense that Ms. Googlebot understands and obeys it. Unfortunately, I sold all the source code I stole from Google and didn’t keep a copy for myself, so I need to speculate a little.

Last time I looked, Google’s cool robots.txt validator only emulated crawler behavior, which means the crawlers understood syntax that the validator didn’t handle correctly. Maybe this has changed in the meantime; perhaps the validator pulls its code from the “real thing” now, or at least the “Noindex:” experiment may have found its way into the validator’s portfolio. So I thought that testing the newish robots.txt statement “Noindex:” in the Webmaster Console was worth a try. And yes, it told me that Googlebot understands this command and interprets it as “Disallow:”.
Blocked by line 27: Noindex: /noindex/

Since validation is no proof of crawler behavior, I’ve set up a page “blocked” with a “Noindex:” directive in robots.txt and linked it in my sidebar. The noindex statement was in place long enough before I uploaded and linked the spider trap, so the engines shouldn’t use a cached robots.txt when they follow my links. My test is public; feel free to check out my robots.txt as well as the crawler log.

While I’m waiting for the expected growth of my noindex crawler log, I’m speculating. Why the heck would Google use a new robots.txt directive which behaves like the good old Disallow: statement? Makes no sense to me.

Let’s not forget that this mysterious noindex statement was discovered in the robots.txt of Google’s ad server, not in the better known and closely watched robots.txt of google.com. Google is not the only search engine trying to better understand client-side code. None of the major engines should be interested in crawling ads for ranking purposes. The MSN/LiveSearch referrer spam fiasco demonstrates that search engine bots can fetch and render Google ads outputted in iFrames on pagead2.googlesyndication.com.

Since to this day nobody else supports Google’s X-Robots-Tag (sending “noindex” and other REP directives in the HTTP header), maybe the engines have a silent deal that content marked with “Noindex:” in robots.txt shouldn’t be indexed. Microsoft’s bogus spam bot, which doesn’t bother with robots.txt because it somewhat haplessly tries to emulate a human surfer, is not considered a crawler; its existence just proves that “software shop” is not a valid label for M$.

This theory has a few weak points, but it could point to something. If noindex in robots.txt really prevents indexing of content crawled by accident, or of non-HTML content that can’t supply robots meta tags, that would be a very useful addition to the Robots Exclusion Protocol. Of course we’d then need Noarchive:, Nofollow: and Nopreview: too, and probably more, but I’m not really in a greedy mood today.

Back to my crawler trap. Refreshing the log reveals that 30 minutes after spreading links pointing to it, Googlebot fetched the page. That seems to prove that the Noindex: statement doesn’t prevent crawling, regardless of the false (?) information handed out by Google’s robots.txt validator.

(Or didn’t I give Ms. Googlebot enough time to refetch my robots.txt? Dunno. The robots.txt copy in my Google Webmaster Console still doesn’t show the Noindex: statement, but I doubt that’s the version Googlebot uses, because according to the last-downloaded timestamp in GWC the robots.txt had already been changed at the time of the download. Never mind. If I was way too impatient, I can still test whether a newly discovered noindex directive in robots.txt actually deindexes stuff or not.)

On with the show. The next interesting question is: will the crawler trap page make it into Google’s search index? Without the possibly ineffective noindex directive, a few hundred links should be able to accomplish that. Alas, a quoted search query delivers zilch so far.

Of course I’ve asked Google for more information, but haven’t received a conclusive answer so far. While waiting for an official statement, I’ll take a break from live-blogging this quick research in favor of terrorizing a few folks with disrespectful blog comments. Stay tuned. Be right back.


Well, meanwhile I’ve had dinner and the kids fell asleep –hopefully until tomorrow morning– but nothing else happened. A very nice and friendly Googler is trying to find out what the noindex-in-robots.txt fuss is all about; thanks, and I can’t wait! However, I suspect the info is either forgotten or deeply buried in some well-secured top secret code libraries, hence I’ll push the red button soon.


Thanks to Google’s great Webmaster Central team, especially Susan, I learned that I was flogging a dead horse. Here is Google’s take on Noindex in robots.txt:

As stated in my previous note, I wasn’t aware that we recognized any directives other than Allow/Disallow/Sitemap, so I did some asking around.

Unfortunately, I don’t have an answer that I can currently give you. […] I can’t contribute any clarifications right now.

Thank you Susan!

Update: John Müller from Google has just confirmed that their crawler understands the Noindex: syntax, but it’s not yet set in stone.



