No more RSS feeds in Google’s search results

Folks try all sorts of naughty things when, by accident, a blog’s feed outranks the HTML version of a post. Usually that happened to not-so-popular blogs, or with very old posts and category feeds that contain ancient articles.

The problem seems to be that Google’s Web search doesn’t understand the XML structure of feeds, so a feed’s textual contents get indexed like the contents of plain text files. Due to “subscribe” buttons and other links, feeds can gather more PageRank than some HTML pages. Interestingly, .xml is considered an unknown file type, and advanced search doesn’t provide a way to search within XML files.

Now that has changed [1]. Googler Bogdan Stănescu posted “We remove feeds from our search results” on the German Webmaster blog [2]:

As Webmasters, many of you were probably worried that your RSS or Atom feeds could outrank the accompanying HTML pages in Google’s search results. The appearance of feeds in our search results could be a poor user experience:

1. Feeds increase the probability that the user gets the same search result twice.

2. Users who click on the feed link on a SERP may miss out on valuable content, which is only available on the HTML page referenced in the XML file.

For these reasons, we have removed feeds from our Web search results - with the exception of podcasts (feeds with media files).

[…] We are aware that in addition to the podcasts out there some feeds exist that are not linked with an HTML page, and that is why it is not quite ideal to remove all feeds from the search results. We’re still open for feedback and suggestions for improvements to the handling of feeds. We look forward to your comments and questions in the crawling, indexing and ranking section of our discussion forum for Webmasters. [Translation mine]

I’m not yet sure whether that will end in a ban of all/most XML documents. I hope they suppress RSS/Atom feeds only, and provide improved ways to search for and within other XML resources.

So what does that mean for blog SEO? Unless Google provides a procedure to prevent feeds from accumulating PageRank whilst allowing access for blog search crawlers that request feeds (I believe something like that is in the works), it’s still a good idea to nofollow all feed links, but there’s absolutely no reason to block them in robots.txt any more.

I think that’s a great move in the right direction, but only a preliminary solution. The XML structure of feeds isn’t that hard to parse, and there are only so many ways to extract the URL of the HTML page. When a relevant feed lands in a raw result set, Google should display a link to the HTML version on the SERP. What do you think?
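Extracting that URL really is trivial. A rough PHP sketch (the feed URL is made up; Atom feeds would need a lookup of the <link rel="alternate" type="text/html"> element instead):
<?php
// Minimal sketch: extract the HTML alternate URL from an RSS 2.0 feed.
$feedUrl = "http://example.com/feed"; // made-up example URL
$xml = @simplexml_load_file($feedUrl);
if ($xml !== false && isset($xml->channel->link)) {
// In RSS 2.0 <channel><link> points to the HTML version of the blog.
$htmlUrl = (string) $xml->channel->link;
print "HTML version: $htmlUrl\n";
}
?>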


[1] Danny reminded me that, according to Matt Cutts, this has been going on for a few months now.

[2] 24 hours later Google published the announcement in English too.




Nominate a red crab in the 2007 Search Blog Awards!

Today Loren asked for selfish nominations, thus everybody’s posting a call for action.
So did I:

• Best search related pamphlets
I hereby selfishly submit my blog.

To no avail:

Sebastian, we’re not going to have a category for Best Pamphlets, but good try :)

There’s no such thing as a Best Crabby Search Pamphlets category just because my blog would be the sole candidate? Ok, I understand that. Really. I didn’t even swear. Yet.

So here’s my call for action. Nominate your favorite blog (that’s mine of course!) in any of the following categories that match:

  • Best SEO Blog
    You’d expect more marketing stuff from an SEO blog.
  • Best SEM Blog
    You’d expect even more marketing stuff, as well as PPC and whatnot. I suck at both.
  • Best SEO Plugin for Wordpress
    I never wrote a WordPress plugin. Actually, this year I hate WordPress because they messed up the database structure in version 2.3 without providing any documentation or at least a reasonable migration procedure. Also their coding standards suck ass and make me puke whenever I see WordPress code.
  • Best Search Agency Resource Blog
    My employers don’t blog.
  • Best Link Building Blog
    Link building pamphlets are rare nowadays.
  • Best Social Media Marketing or Optimization Blog
    I don’t game social media.
  • Best Local Search Blog
    I’m happy when I find my shoes before I leave the house, hence I can’t give any advice on local search.
  • Best Video Search Blog
    I watch x-rated videos only. Probably posting geeky clips doesn’t qualify me.
  • Best Mobile Search Blog
    When I’m on the road I usually search until I give up and ask a cabby for an escort. Cheating this way makes sure I’m not always too late, but doesn’t qualify me for mobile search consultancy.
  • Best Google Blog Not Owned by Google
    I’m not in Google news.
  • Best Search Engine Corporate Blog (owned by the search engines)
    Although I developed a tiny search engine years ago, I fear that smutty results don’t count.
  • Best Contextual Advertising Blog
    My organic traffic is cheaper, and probably as reliable as PPC campaigns.
  • Best Affiliate Marketing Blog
    I sold two Seobook subscriptions recently, does that count?
  • Best Search Engine Community/Forum
    I visit Sphinn and the Google Webmaster forum and never will launch a new forum again.
  • Best New Search Engine of 2007
    See above.
  • Best Search Engine Research Blog
    I revealed that Microsoft plans to relaunch Live Search as porn affiliate program, why eBay and Wikipedia rule Google’s SERPs, and more SEO research like that.
  • Best Search Linkbait of 2007
    When I try it, folks bury it.
  • Breakout Blog of 2007
    I’ve been blogging since 2005, but moved my blog away from blogspot this year.
  • Best Search Conference Coverage of 2007
    I don’t even attend conferences.
  • Best Search Conference Coverage in Photos
    See above.
  • Best Search Marketing Facebook Group
    Facebook killed my account for spamming or so.
  • Most Giving Search Blogger
    I can’t give away a fraction of Bill Slawski’s great insights.
  • Best Independent Search Blog (not owned by media company or marketing agency)
    What does that mean? Ok, I’m in.
  • Best Search Blog Post of 2007
    I wrote a dull book on redirects, and more.

Oh well. Instead of nominating my stuff, better convince Search Engine Journal that they really need a Crabby Pamphlets Category. Or try Category #16 at Performancing.

Update December/28/2007: YAY! Thank you all! Now you can vote for my pamphlets in the “Best SEO Blog of 2007” category at the SEJ Search Blog Award 2007 contest. Here are the candidates:

It truly is an honor just to be nominated together with these great SEO bloggers.




BlogCatalog needs professional help

A while ago I helped BlogCatalog fix an issue with their JavaScript click tracking that Google considered somewhat crappy. The friendly BlogCatalog guys said thanks, and since then joining BC has been on my to-do list because it seemed to be a decent service.

Recently I missed my cute red crab icon in a blog’s sidebar widget, realized that it was powered by BlogCatalog and not MyBlogLog, and so I finally signed up.

Roughly 24 hours later I was quite astonished when I received this email:

BlogCatalog - Submission Declined: Sebastian´s Pamphlets

Dear Sebastian,

Thank you for submitting your blog Sebastian`s Pamphlets (http://sebastians-pamphlets.com/) to BlogCatalog.

Unfortunately upon reviewing your blog we are unable to grant it access to the directory.

Your blog was declined for the following reason:

* You did not add a link back to Blog Catalog from your website.
To add a link visit: http://www.blogcatalog.com/buttons.php

If you believe this to be a mistake, you can login to Blog Catalog ( http://www.blogcatalog.com/blogs/manage_blog.html ) and change anything which may have caused it to get declined. After updating your blog, it will be put back into the submission queue.

If you have any questions/comments/suggestions/ideas please feel free to contact us.

Thanks,
BlogCatalog

Crap on, I followed the instructions on http://www.blogcatalog.com/buttons.php:

Meta Tag Verification

If you’d rather not add a link back to BlogCatalog you can alternatively copy the meta tag listed below and paste it in your site’s home page in the first <head> section of the page, before the first <body> section.

<meta name="blogcatalog" content="9BC8674180" />

It’s laughable to talk about the “first HEAD section” because an HTML file can have only one. Also, having more than one BODY section is certainly not compliant with any standard. But bullshit aside, they clearly state that they’re fine with a meta tag if a blogger refuses to add a reciprocal link or even a pile of server-side code that slows down each and every page.

If I remember correctly, BC folks accused of hoarding PageRank defended their policy with statements like

I should quickly clear up that we provide also widgets and meta tags to verify ownership for anyone who doesn’t want to link back to us. We understand PageRank is sacred to many of our bloggers and give them the options to preserve their PR. [emphasis mine, also I’ve removed typos]

Not that I care much about PageRank leaks, but I never link to directories. And why should I when they can verify my submission in other ways?

Obviously, BlogCatalog staff can’t be bothered to view my home page’s source code, and they have no script capable of finding the meta tag
<meta name="blogcatalog" content="9BC8674180" />

in my one and only and therefore first HEAD section.
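Checking for that meta tag isn’t rocket science. A few lines of PHP would do the trick; this is just a sketch (not BlogCatalog’s actual code), and it assumes allow_url_fopen is enabled:
<?php
// Sketch of the meta tag check a directory could run instead of demanding
// a reciprocal link. The URL and the verification code are this example's.
$homePage = @file_get_contents("http://sebastians-pamphlets.com/");
$code = "9BC8674180";
$pattern = '#<meta\s+name=["\']blogcatalog["\']\s+content=["\']' . preg_quote($code, '#') . '["\']\s*/?>#i';
if ($homePage !== false && preg_match($pattern, $homePage)) {
print "Meta tag found - approve the submission.";
}
else {
print "Meta tag missing - decline.";
}
?>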

The meta tag verification is somewhat buried on the policy page; it looks like BlogCatalog chases inbound links no matter what it costs. Dear BlogCatalog, in my case it costs reputation. You guys don’t really think that I’ll send you a private message so that you can silently approve the declined sign-up, do you? I’m pretty sure you treat others the same way. Either dump the meta tag verification, or play by your very own rules.

It seems to me that BlogCatalog needs more professional advice from bright consultants (scroll down to Andy’s full disclosure).

Update: A few hours after publishing this post my submission got approved.




Upgrading from IIS/ASP to Apache/PHP

Once you’re sick of IIS/ASP maladies you’ll want to upgrade your Web site to standardized technologies and reliable open source software. On an Apache Web server with PHP your .asp scripts won’t work, and you can’t run MS Access “databases” and similar stuff under Apache.

Here is my idea of a smooth migration from IIS/ASP to Apache/PHP. Grab any Unix box from your host’s portfolio and start over.

(Recently I got a tiny IIS/ASP site about uses & abuses of link condoms and moved it to an Apache server. I’m well known for brutal IIS rants, but so far I hadn’t discussed a way out of such a dilemma, so I thought blogging this move could be a good idea.)

I don’t want to make this piece too complex, so I skip database and code migration strategies. Read Mike Hillyer’s article Migrating from Microsoft Access/MS-SQL to MySQL, and try tools like ASP to PHP. (With my tiny link condom site I overwrote the ASP code with PHP statements in my primitive text editor.)

From an SEO perspective such an upgrade comes with pitfalls:

  • Changing file extensions from .asp to .php is not an option. We want to keep the number of unavoidable redirects as low as possible.
  • Default.asp is usually not configured as a valid default document under Apache, hence requests of http://example.com/ run into 404 errors.
  • Basic server name canonicalization routines (www vs. non-www) from ASP scripts are not convertible.
  • IIS URIs are not case-sensitive; that means /Default.asp will 404 on Apache when the file name is /default.asp. Usually there are lowercase/uppercase issues with query string variables and values as well.
  • Most probably search engines have URL variants in their indexes, so we want to take care of URL canonicalization, at least where possible.
  • HTML editors like Microsoft Visual Studio tend to duplicate the HTML code of templated page areas. Instead of editing menus or footers in all scripts we want to encapsulate them.
  • If the navigation makes use of relative links, we need to convert those to absolute URLs.
  • Error handling isn’t convertible. Improper error handling can cause decreasing search engine traffic.

Running /default.asp, /home.asp etc. as PHP scripts

When you upload .asp files to an Apache Web server, most user agents can’t handle them. Browsers treat them as unknown file types and force downloads instead of rendering them. Also, those files aren’t parsed for PHP statements, even if you’ve rewritten the ASP code already.

To tell Apache that .asp files are valid PHP scripts outputting X/HTML, add this code to your server config or your .htaccess file in the root:
AddType text/html .asp
AddHandler application/x-httpd-php .asp

The first line says that .asp files shall be treated as HTML documents, and should force the server to send a Content-Type: text/html HTTP header. The second line tells Apache that it must parse .asp files for PHP code.

Just in case the AddType statement above doesn’t produce a Content-Type: text/html header, here is another way to tell all user agents requesting .asp files from your server that the content type for .asp is text/html. If you’ve mod_headers available, you can accomplish that with this .htaccess code:
<IfModule mod_headers.c>
SetEnvIf Request_URI \.asp is_asp=is_asp
Header set "Content-type" "text/html" env=is_asp
Header set imagetoolbar "no"
</IfModule>

(The imagetoolbar=no header tells IE to behave nicely; you can use this directive in a meta tag too.)
If for some reason mod_headers doesn’t work well with mod_setenvif, giving 500 error codes or so, then you can set the content-type with PHP too. Add this to a PHP script file which is included in all your scripts at the very top:
@header("Content-type: text/html", TRUE);

Instead of “text/html” alone, you can define the character set too: “text/html; charset=UTF-8”

Sanitizing the home page URL by eliminating “default.asp”

Instead of slowing down Apache by defining just another default document name (DirectoryIndex index.html index.shtml index.htm index.php [...] default.asp), we get rid of “/default.asp” with this “/index.php” script:
<?php
@require("default.asp");
?>

Now every request of http://example.com/ executes /index.php which includes /default.asp. This works with subdirectories too.

Just in case someone requests /default.asp directly (search engines keep forgotten links!), we perform a permanent redirect in .htaccess:
Redirect 301 /default.asp http://example.com/
Redirect 301 /Default.asp http://example.com/

Converting the ASP code for server name canonicalization

If you find ASP canonicalization routines like
<%@ Language=VBScript %>
<%
if strcomp(Request.ServerVariables("SERVER_NAME"), "www.example.com", vbCompareText) = 0 then
Response.Clear
Response.Status = "301 Moved Permanently"
strNewUrl = Request.ServerVariables("URL")
if instr(1,strNewUrl, "/default.asp", vbCompareText) > 0 then
strNewUrl = replace(strNewUrl, "/Default.asp", "/")
strNewUrl = replace(strNewUrl, "/default.asp", "/")
end if
if Request.QueryString <> "" then
Response.AddHeader "Location","http://example.com" & strNewUrl & "?" & Request.QueryString
else
Response.AddHeader "Location","http://example.com" & strNewUrl
end if
Response.End
end if
%>

(or the other way round) at the top of all scripts, just select and delete. This .htaccess code works way better, because it takes care of other server name garbage too:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^example\.com [NC]
RewriteRule (.*) http://example.com/$1 [R=301,L]

(You need mod_rewrite; it’s usually enabled in the default configuration of Apache Web servers.)

Fixing case issues like /script.asp?id=value vs. /Script.asp?ID=Value

Probably a M$ developer didn’t read more than the scheme and server name chapters of the URL/URI standards; at least I have no better explanation for the fact that these clowns made the path and query string segments of URIs case-insensitive. (Ok, I have an idea, but nobody wants to read about M$ world domination plans.)

Just because –contrary to Web standards– M$ finds it funny to serve the same contents on request of /Home.asp as well as /home.ASP, such crap doesn’t fly on the World Wide Web. Search engines –and other Web services which store URLs– treat them as different URLs, and consider everything except one version duplicate content.

Creating hyperlinks in HTML editors by picking the script files from the Windows Explorer can result in HREF values like “/Script.asp”, although the file itself is stored with an all-lowercase name, and the FTP client uploads “/script.asp” to the Web server. There are more ways to fuck up file names with improper use of (leading) uppercase characters. Typos like that are somewhat undetectable with IIS, because the developer surfing the site won’t get 404-Not found responses.

Don’t misunderstand me, you’re free to camel-case file names for improved readability, but then make sure that the file system’s notation matches the URIs in HREF/SRC values. (Of course hyphened file names like “buy-cheap-viagra.asp” top the CamelCased version “BuyCheapViagra.asp” when it comes to search engine rankings, but don’t freak out about keywords in URLs, that’s ranking factor #202 or so.)

Technically speaking, converting all file names, as well as variable names and values, to all-lowercase is the simplest solution. This way it’s quite easy to 301-redirect all invalid requests to the canonical URLs.

However, each redirect puts search engine traffic at risk. Not all search engines process 301 redirects as they should (MSN Live Search for example doesn’t follow permanent redirects and doesn’t pass the reputation earned by the old URL over to the new URL). So if you’ve good SERP positions for “misspelled” URLs, it might make sense to stick with ugly directory/file names. Check your search engine rankings, perform [site:example.com] search queries on all major engines, and read the SERP referrer reports from the old site’s server stats to identify all URLs you don’t want to redirect. By the way, the link reports in Google’s Webmaster Console and Yahoo’s Site Explorer reveal invalid URLs with (internal as well as external) inbound links too.

Whatever strategy fits your needs best, you have to call a script handling invalid URLs from your .htaccess file. You can do that with the ErrorDocument directive:
ErrorDocument 404 /404handler.php

That’s safe with static URLs without parameters and should work with dynamic URIs too. When you –in some cases– deal with query strings and/or virtual URIs, the .htaccess code becomes more complex, but handling virtual paths and query string parameters in the PHP scripts might be easier:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /404handler.php [L]
</IfModule>

In both cases Apache will process /404handler.php if the requested URI is invalid, that is if the path segment (/directory/file.extension) points to a file that doesn’t exist.

And here is the PHP script /404handler.php:
(Edit the values in all lines marked with “// change this”.)
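The following bare-bones sketch captures the logic; the $rankedUrls map, the error page and the redirect host are placeholders you’d adapt to your site:
<?php
// /404handler.php - bare-bones sketch of the logic described below.
$errorPage = "/error.asp"; // change this
// Misspelled URLs that rank nicely and therefore shouldn't be redirected;
// map them to the script that actually exists. // change this
$rankedUrls = array(
"/Sample.asp" => "/sample.asp"
);
$docRoot = $_SERVER["DOCUMENT_ROOT"];
$requestPath = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);
// 1. Serve nicely ranked "misspelled" URLs as-is (no redirect).
if (isset($rankedUrls[$requestPath])) {
@require($docRoot . $rankedUrls[$requestPath]);
exit;
}
// 2. If the all-lowercase version of the path exists, 301-redirect to it.
$lowerPath = strtolower($requestPath);
if ($lowerPath != $requestPath && file_exists($docRoot . $lowerPath)) {
header("HTTP/1.1 301 Moved Permanently");
header("Location: http://example.com" . $lowerPath); // change this
exit;
}
// 3. Otherwise send a real 404 and display the error page.
header("HTTP/1.1 404 Not Found");
@require($docRoot . $errorPage);
?>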

This script doesn’t handle case issues with query string variables and values. Query string canonicalization must be developed for each individual site. Also, capturing misspelled URLs with nice search engine rankings should be implemented utilizing a database table when you’ve more than a dozen or so.

Let’s see what the /404handler.php script does with requests for non-existent files.

First we test the requested URI against invalid URLs which are nicely ranked at search engines. We don’t care much about duplicate content issues when the engines deliver targeted traffic. Here is an example (which admittedly doesn’t rank for anything but illustrates the functionality): both /sample.asp and /Sample.asp deliver the same content, although there’s no /Sample.asp script. Of course a better procedure would be renaming /sample.asp to /Sample.asp, permanently redirecting /sample.asp to /Sample.asp in .htaccess, and changing all internal links accordingly.

Next we look up the all-lowercase version of the requested path. If such a file exists, we perform a permanent redirect to it. Example: /About.asp 301-redirects to /about.asp, which is the file that exists.

Finally, if everything we tried in order to find a suitable URI for the actual request has failed, we send the client a 404 error code and output the error page. Example: /gimme404.asp doesn’t exist, hence /404handler.php responds with a 404-Not Found header and displays /error.asp, while a direct request of /error.asp responds with a 200-OK.

You can easily refine the script with other algorithms and mappings to adapt its somewhat primitive functionality to your project’s needs.

Tweaking code for future maintenance

Legacy code comes with repetition, redundancy and duplication caused by developers who love copy+paste (or rather copy+paste+modify), or by Web design software that generates static files from templates. Even if you’re not willing to do a complete revamp by shoving your contents into a CMS, you must replace the ASP code anyway, which gives you the opportunity to encapsulate all templated page areas.

Say your design tool created a bunch of .asp files which all contain the same sidebars, headers and footers. When you move those files to your new server, create PHP include files from each templated page area, then replace the duplicated HTML code with <?php @include("header.php"); ?>, <?php @include("sidebar.php"); ?>, <?php @include("footer.php"); ?> and so on. Note that when you have HTML code in a PHP include file, you must add <?php ?> before the first line of HTML code or contents in included files. Also, leading spaces, empty lines and the like, which don’t hurt in HTML, can result in errors with PHP statements like header(), because those fail once the server has sent anything to the user agent (even a single space, new line or tab is too much).

It’s a good idea to use PHP scripts that are included at the very top and bottom of all scripts, even when you currently have no idea what to put into those. Trust me and create top.php and bottom.php, then add the calls (<?php @include("top.php"); ?> […] <?php @include("bottom.php"); ?>) to all scripts. Tomorrow you’ll write a generic routine that you must have in all scripts, and you’ll happily do that in top.php. The day after tomorrow you’ll paste the GoogleAnalytics tracking code into bottom.php. With complex sites you need more hooks.

Using absolute URLs on different systems

Another weak point is the use of relative URIs in links, image sources or references to feeds or external scripts. The lame excuse of most developers is that they need to test the site on their local machine, and that doesn’t work with absolute URLs. Crap. Of course it works. The first statement in top.php is
@require($_SERVER["SERVER_NAME"] .".php");

This way you can set the base URL for each environment and your code runs everywhere. For development purposes on a subdomain you’ve a “dev.example.com.php” include file, on the production system example.com the file name resolves to “www.example.com.php”:
<?php
$baseUrl = "http://example.com";
?>
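Putting the pieces together, a minimal top.php could look like this (just a sketch; extend it as your site grows):
<?php
// top.php - included at the very top of every script.
// No output may be sent before the header() call below.
@header("Content-type: text/html; charset=UTF-8", TRUE);
// Pull in the per-host settings ("www.example.com.php", "dev.example.com.php", ...)
// which define $baseUrl for the environment the code runs on.
@require($_SERVER["SERVER_NAME"] . ".php");
// Tomorrow's generic routines (tracking, session handling, ...) go here.
?>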

Then the menu in sidebar.php looks like:
<?php
$classVMenu = "vmenu";
print "
<img src=\"$baseUrl/vmenuheader.png\" width=\"128\" height=\"16\" alt=\"MENU\" />
<ul>
<li><a class=\"$classVMenu\" href=\"$baseUrl/\">Home</a></li>
<li><a class=\"$classVMenu\" href=\"$baseUrl/contact.asp\">Contact</a></li>
<li><a class=\"$classVMenu\" href=\"$baseUrl/sitemap.asp\">Sitemap</a></li>

</ul>
";
?>

Mixing X/HTML with server-side scripting languages is fault-prone and makes maintenance a nightmare. Don’t make the same mistake as WordPress. Avoid crap like this:
<li><a class="<?php print $classVMenu; ?>" href="<?php print $baseUrl; ?>/contact.asp"></a></li>

Error handling

I refuse to discuss IIS error handling. On Apache servers you simply put ErrorDocument directives in your root’s .htaccess file:
ErrorDocument 401 /get-the-fuck-outta-here.asp
ErrorDocument 403 /get-the-fudge-outta-here.asp
ErrorDocument 404 /404handler.php
ErrorDocument 410 /410-gone-forever.asp
ErrorDocument 503 /503-down-for-maintenance.asp
# …
Options -Indexes

Then create neat pages for each HTTP response code which explain the error to the visitor and offer alternatives. Of course you can handle all response codes with one single script:
ErrorDocument 401 /error.php?errno=401
ErrorDocument 403 /error.php?errno=403
ErrorDocument 404 /404handler.php
ErrorDocument 410 /error.php?errno=410
ErrorDocument 503 /error.php?errno=503
# …
Options -Indexes
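A bare-bones /error.php for that setup could look like this sketch (the wording and the home page URL are placeholders):
<?php
// /error.php - one page for several ErrorDocument directives (sketch).
// Apache already sends the proper error status code; this script only
// renders a human friendly message and offers a way out.
$messages = array(
401 => "You need to log in to see this page.",
403 => "Sorry, you're not allowed to view this page.",
410 => "This page is gone for good.",
503 => "We're down for maintenance, please come back later."
);
$errno = isset($_GET["errno"]) ? (int) $_GET["errno"] : 404;
$message = isset($messages[$errno]) ? $messages[$errno] : "Something went wrong.";
print "<html><head><title>Error $errno</title></head><body>\n";
print "<h1>$message</h1>\n";
print "<p><a href=\"http://example.com/\">Back to the home page</a></p>\n";
print "</body></html>\n";
?>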

Note that relative URLs in pages or scripts called by ErrorDocument directives don’t work. Don’t use absolute URLs in the ErrorDocument directives themselves, because that way you get 302 response codes for 404 errors and crap like that. If you cover the 401 response code with a fully qualified URL, your server will explode. (Ok, it will just hang, but that’s bad enough.) For more information please read my pamphlet Why error handling is important.

Last but not least create a robots.txt file in the root. If you’ve nothing to hide from search engine crawlers, this one will suffice:
User-agent: *
Disallow:
Allow: /

I’m aware that this tiny guide can’t cover everything. It should give you an idea of the pitfalls and possible solutions. If you’re somewhat code-savvy my code snippets will get you started, but hire an expert when you plan to migrate a large site. And don’t view the source code of link-condom.com pages where I didn’t implement all tips from this tutorial. ;)




MSN spam to continue says the Live Search Blog

It seems MSN/LiveSearch has tweaked their rogue bots and continues to spam innocent Web sites just in case they might cloak. I see a rant coming, but first the facts and news.

Since August 2007 MSN has been running a bogus bot that fakes a human visitor coming from a search results page, following on the heels of their crawler. This spambot downloads everything from a page: images and other objects, external CSS/JS files, and ad blocks, rendering even contextual advertising from Google and Yahoo. It fakes MSN SERP referrers, diluting the search term stats with generic and unrelated keywords. Webmasters running non-adult sites wondered why a database tutorial suddenly ranks for [oral sex] and why MSN sends visitors searching for [MILF pix] to a teenager’s diary. Webmasters assumed that MSN is after deceitful cloaking, and laughed out loud because the webspam detection method was that primitive and easy to fool.

Now MSN admits all their sins –except the launch of a porn affiliate program– and posted a vague excuse on their Webmaster Blog telling the world that they discovered the evil cloakers and their index is somewhat spam free now. Donna has chatted with the MSN spam team about their spambot and reports that blocking its IP addresses is a bad idea, even for sites that don’t cloak. Vanessa Fox summarized MSN’s poor man’s cloaking detection at Search Engine Land:

And one has to wonder how effective methods like this really are. Those savvy enough to cloak may be able to cloak for this new cloaker detection bot as well.

They say that they no longer spam sites that don’t cloak, but reverse this statement telling Donna

we need to be able to identify the legitimate and illegitimate content

and Vanessa

sites that are cloaking may continue to see some amount of traffic from this bot. This tool crawls sites throughout the web — both those that cloak and those that don’t — but those not found to be cloaking won’t continue to see traffic.

Here is an excerpt from yesterday’s referrer log of a site that does not cloak, and never did:
http://search.live.com/results.aspx?q=webmaster&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=smart&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=search&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=progress&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=google&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=google&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=domain&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=database&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=content&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=business&mrt=en-us&FORM=LIVSOP

Why can’t the MSN dudes tell the truth, not even when they apologize?

Another lie is “we obey robots.txt”. Of course the spambot doesn’t request it, so it can bypass bot traps, but according to MSN it uses a copy served to the LiveSearch crawler “msnbot”:

Yes, this robot does follow the robots.txt file. The reason you don’t see it download it, is that we use a fresh copy from our index. The tool does respect the robots.txt the same way that MSNBot does with a caveat; the tool behaves like a browser and some files that a crawler would ignore will be viewed just like real user would.

In reality, it doesn’t help to block CSS/JS files or images in robots.txt, because MSN’s spambot will download them anyway. The long-winded statement above translates to “We promise to obey robots.txt, but if it fits our needs we’ll ignore it”.

Well, MSN is not the only search engine running stealthy bots to detect cloaking, but they aren’t clever enough to do it in a less abusive and detectable way.

Their insane spambot pointed every cloaking specialist out there to their not-so-obvious spam detection methods. They may have caught a few cloaking sites, but considering the short life cycle of Webspam on throwaway domains they shot themselves in both feet. What they have really achieved is that the cloaking scripts are immune to MSN’s spam detection now.

Was it really necessary to annoy and defraud the whole Webmaster community and to burn huge amounts of bandwidth just to catch a few cloakers who launched new scripts on new throwaway domains hours after the first appearance of the MSN spam bot?

Can cosmetic changes to their useless spam activities restore MSN’s lost reputation? I doubt it. They’ve admitted their miserable failure five months too late. Instead of dumping the spambot, they announce that they’ll spam away for the foreseeable future. How silly is that? I thought Microsoft was somewhat profit-oriented, so why do they burn their money, and ours, on such amateurish projects?

Besides all this crap MSN has good news too. Microsoft Live Search told Search Engine Roundtable that they’ll spam our sites with keywords related to our content from now on, or at least they’ll try. And they have a forum and a contact form to gather complaints. Crap on, so much bureaucratic effort to administer their ridiculous spam-fighting funeral. They’d better build a search engine that actually sends human traffic.




Google to change the Robots Exclusion Protocol again

Web crawler directives, partly standardized in the Robots Exclusion Protocol (REP), have evolved since 1994. Nowadays we have to deal with a conglomerate of non-binding de facto standards and microformats, all of them extended by various organizations. All search engines claim that they obey “the standard”, but they refer to their very own REP implementation. In fact, each search engine supports a proprietary set of REP directives, as a rule different from the other players.

Google is the search engine putting the most effort into evolving the Robots Exclusion Protocol (REP). Their XML Sitemaps, handling submissions instead of crawl restrictions, widened the REP’s scope; the X-Robots-Tag brought us robots meta tags for non-HTML resources like PDF documents, images or video clips; and with Unavailable_after Google made a few clueless news sites happy. With the rel-nofollow microformat on the other hand, or rather its sneaky morphing from a spam-fighting tool to its current shape, Google made nobody happy. Yahoo contributed the well-meant but half-assed “robots-nocontent” class name, and of course “noydir” (it’s unlikely that any other engine will support those).

Now Google is working on new robots.txt syntax, and I am, politely put, not amused. Here is why I fear that Google is going to totally mess up the REP:

Google supports a “Noindex:” directive in robots.txt, which is treated as “Disallow:” 1). Of course that’s an experiment, but if this behavior doesn’t change we’ll get a beast that is –with regard to the confusion it will produce– way more evil than the rel-nofollow fiasco.

  • A noindex alias for disallow makes no sense, even though such syntax errors are out there.
  • Mixing crawler directives (allow/disallow) with indexer directives (noindex) is not always a bright idea. It’s bad enough that most Webmasters still believe that “Googlebot ranks their stuff”. (Actually, in some cases it can make sense. For example “nofollow” in robots meta tags (or at least for Google in REL attributes too) is both a crawler instruction as well as an indexer directive.)
  • Noindex and disallow are completely different commands. The REP’s noindex directive means “crawl it, follow its links, but don’t list it on the SERPs”. Disallow forbids crawling, but allows indexing URLs from directory listings or other inbound links.

Standards should be clear and unambiguous. Google must not redefine syntax and semantics that were in widespread use before Google even existed. I admit they have the power to fuck up the REP, but they also have “do no evil”.

Considering that Google is run by a bunch of smart engineers, I hope that they’ll do the right thing eventually. The right thing in this case is giving more power to REP evolvements, before questionable and selfish anti-search initiatives like ACAP ruin both the robots.txt consensus as well as the robots meta tag standard.

My idea of more power to REP evolvements is:

  • Sensible implementation of crawler/indexer directives adapted from REP tags in robots.txt. Applying page-level instructions ((no)index, (no)follow, noarchive, nosnippet, noodp/noydir, unavailable_after and hopefully nopreview) to groups of URIs is a great way to steer crawling and indexing, especially for sites which for various reasons cannot make use of the HTTP header’s X-Robots-Tag (see the hypothetical example below this list).
  • Implementation of block-level directives in robots.txt. Allowing Webmasters to apply crawler instructions like “noindex” or “nofollow” to particular page areas, like advertising blocks, duplicated text or repetitive navigation elements, addressed via HTML element names and class names and/or DOM-IDs, would be a very flexible instrument to steer crawling and indexing, and it could eliminate many points of failure.
  • Getting Webmasters, Publishers, SEOs and all major engines together to discuss possibly missing granularity and to develop a binding norm obeyed by all players.
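To give the first two points a face, robots.txt syntax along these lines would do the trick. This is entirely hypothetical, of course; the directive names, especially the block-level ones, are made up for illustration and no engine supports them:
User-agent: Googlebot
# page-level REP tags applied to groups of URIs
Noindex: /archives/
Noarchive: /print/
Nosnippet: /members/
Noodp: /
# block-level directives addressing page areas
Noindex-section: class=advertising
Nofollow-section: id=footer-links
# plain old crawler directives
Disallow: /cgi-bin/
Sitemap: http://example.com/sitemap.xml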

The last one sounds like wishful thinking. The alternative is that Google (and, if possible, the bigger engines) talk with Webmasters and then launch the necessary REP extensions. The other engines will follow sooner or later. The publishers, although not getting all their desired ACAP restrictions, will be happy too. Standards like the Robots Exclusion Protocol should be developed by engineers.


1) Noindex: is not a plain Disallow:, there’s an interesting difference. In Google’s experiment both directives block crawling, but Disallow: allows URL-indexing based on 3rd party information, and Disallow:‘ed URLs can accumulate PageRank from internal as well as external links. Noindex:‘ed URLs on the other hand will not appear on SERPs as URL-only listing or with an ODP title and snippet, and I’m quite sure that they will not gather PageRank nor other link juice. That means links from any pages to such URLs get an implicit rel-nofollow in Google’s PageRank calculation, just like dangling links. This apparatus could be a great way to handle PageRank leaks (monthly blog archives, printer friendly pages and stuff like that), because shit happens, hence some links to such pages will slip through without condom. I admit that’s a neat idea, but its implementation is flawed because it doesn’t consider the implicit Follow: (that’s syntax Google doesn’t support in robots.txt). A better way to mark site areas which shall not gather PageRank without raping the REP would be a Norank: directive or so. Noindex: without a Nofollow: must not block crawling. Googlebot must fetch those URLs to follow their links.




Advantages of a smart robots.txt file

A loyal reader of my pamphlets asked me:

I foresee many new capabilities with robots.txt in the future due to this [Google’s robots.txt experiments]. However, how the hell can a webmaster hide their robots.txt from the public while serving it up to bots without doing anything shady?

That’s a great question. On this blog I have a static robots.txt, so I’ve set up a dynamic example using code snippets from other sites: this robots.txt is what a user sees, and here is what various crawlers get on request of my robots.txt example. Of course crawlers don’t request a robots.txt file with a query string identifying themselves (/robots.txt?crawlerName=*) like in the preview links above, so it seems you’ll need a pretty smart robots.txt file.

Before I tell you how to smarten up a robots.txt file, let’s define the anatomy of a somewhat intelligent robots.txt script:

  • It exists. It’s not empty. I’m not kidding.
  • A smart robots.txt detects and verifies crawlers in order to serve customized REP statements to each spider. Customized output means a section for the requesting search engine, plus general crawler directives. Example:
    User-agent: Googlebot-Image
    Disallow: /
    Allow: /cuties/*.jpg$
    Allow: /hunks/*.gif$
    Allow: /sitemap*.xml$
    Sitemap: http://example.com/sitemap-images.xml
     
    User-agent: *
    Disallow: /cgi-bin/

    This avoids confusion, because complex static robots.txt files with a section for every crawler out there –plus a general section for other Web robots– are fault-prone, and might exceed the maximum file size some bots can handle. If you fuck up a single statement in a huge set of instructions, the process parsing your robots.txt may die, which results in no crawling at all, or possibly in crawling of forbidden areas. Checking the syntax per engine with a lean robots.txt is way easier (supported robots.txt syntax: Google, Yahoo, Ask and MSN/LiveSearch; don’t use wildcards with MSN because they don’t really support them, that is, at MSN wildcards are valid only to match file types).
  • A smart robots.txt reports all crawler requests. This helps with tracking when you change something. Please note that there’s a lag between the most recent request of robots.txt and the moment a search engine starts to obey it, because all engines cache your robots.txt.
  • A smart robots.txt helps identifying unknown Web robots, at least those which bother requesting it (ask Bill how to fondle rogue bots). From a log of suspect requests of your robots.txt you can decide whether particular crawlers need special instructions or not.
  • A smart robots.txt helps maintaining your crawler IP list.

Here is my step by step “how to create a smart robots.txt” guide. As always: if you suffer from IIS/ASP go search for reliable hosting (*ix/Apache).

In order to make robots.txt a script, tell your server to parse .txt files for PHP. (If you serve other .txt files than robots.txt, please note that you must add <?php ?> as first line to all .txt files on your server!) Add this line to your root’s .htaccess file:
AddType application/x-httpd-php .txt

Next grab the PHP code for crawler detection from this post. In addition to the functions checkCrawlerUA() and checkCrawlerIP() you need a function delivering the right user agent name, so please welcome getCrawlerName() in your PHP portfolio:

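A minimal version could look like this sketch; it merely maps the requesting user agent string to the section name used in robots.txt, so extend the list to whatever crawlers you care about:
<?php
// Sketch: map the requesting user agent to its robots.txt section name.
// Order matters - test the more specific Googlebot variants first.
function getCrawlerName() {
$ua = isset($_SERVER["HTTP_USER_AGENT"]) ? $_SERVER["HTTP_USER_AGENT"] : "";
$crawlers = array(
"Googlebot-Image" => "Googlebot-Image",
"Googlebot-Mobile" => "Googlebot-Mobile",
"Mediapartners-Google" => "Mediapartners-Google",
"Googlebot" => "Googlebot",
"Slurp" => "Slurp",
"msnbot" => "msnbot"
);
foreach ($crawlers as $needle => $sectionName) {
if (stristr($ua, $needle) !== false) {
return $sectionName;
}
}
return "*";
}
?>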

(If your instructions for Googlebot, Googlebot-Mobile and Googlebot-Image are identical, you can put them in one single “Googlebot” section.)

And here is the PHP script “/robots.txt”. Include the general stuff like functions, shared (global) variables and whatnot.
<?php
@require($_SERVER["DOCUMENT_ROOT"] ."/code/generalstuff.php");

Probably your Web server’s default settings aren’t suitable for sending out plain text files from a PHP script, hence instruct it properly.
@header("Content-Type: text/plain");
@header("Pragma: no-cache");
@header("Expires: 0");

If a search engine runs wild requesting your robots.txt too often, comment out the “no-cache” and “expires” headers.

Next check whether the requestor is a verifiable search engine crawler: do a reverse DNS lookup of the requestor’s IP address, then verify the resulting host name with a forward lookup.
$isSpider = checkCrawlerIP($requestUri);
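In case you don’t have the crawler detection code at hand, checkCrawlerIP() boils down to a forward-confirmed reverse DNS check, roughly like this sketch (extend the host name pattern to the engines you care about):
<?php
// Sketch of a forward-confirmed reverse DNS check, roughly what
// checkCrawlerIP() does. $requestUri could be used for logging; it's ignored here.
function checkCrawlerIP($requestUri = "") {
$ip = $_SERVER["REMOTE_ADDR"];
$host = @gethostbyaddr($ip); // reverse lookup
if ($host === false || $host == $ip) {
return false;
}
// Accept only host names that belong to known crawler domains.
if (!preg_match('/\.(googlebot\.com|crawl\.yahoo\.net|search\.msn\.com)$/i', $host)) {
return false;
}
return (gethostbyname($host) == $ip); // forward confirmation
}
?>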

Depending on $isSpider log the request either in a crawler log or in an access log gathering suspect requests of robots.txt. You can store both in a database table, or in a flat file if you operate a tiny site. (Write the logging function yourself; a simple flat-file variant is sketched below.)
$standardStatement = "User-agent: * \n Disallow: /cgi-bin/ \n\n";
print $standardStatement;
if ($isSpider) {
$lOk = writeRequestLog("crawler");
$crawlerName = getCrawlerName();
}
else {
$lOk = writeRequestLog("suspect");
exit;
}

If the requestor is not a search engine crawler you can verify, send a standard statement to the user agent and quit. Otherwise call getCrawlerName() to name the section for the requesting crawler.
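If a flat file is good enough for your site, writeRequestLog() can be as simple as this sketch (make sure the log directory isn’t accessible to the public):
<?php
// Sketch of writeRequestLog(): append one line per robots.txt request
// to a flat file per log type ("crawler" or "suspect").
function writeRequestLog($logType) {
$logFile = $_SERVER["DOCUMENT_ROOT"] . "/logs/robots-txt-" . $logType . ".log";
$ua = isset($_SERVER["HTTP_USER_AGENT"]) ? $_SERVER["HTTP_USER_AGENT"] : "-";
$line = date("Y-m-d H:i:s") . "\t" . $_SERVER["REMOTE_ADDR"] . "\t" . $ua . "\n";
return (@file_put_contents($logFile, $line, FILE_APPEND) !== false);
}
?>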

Now you can output individual crawler directives for each search engine, or rather for their specialized crawlers.
$prnUserAgent = "User-agent: ";
$prnContent = "";
if ("$crawlerName" == "Googlebot-Image") {
$prnContent .= "$prnUserAgent $crawlerName\n";
$prnContent .= "Disallow: /\n";
$prnContent .= "Allow: /cuties/*.jpg$\n";
$prnContent .= "Allow: /hunks/*.gif$\n";
$prnContent .= "Allow: /sitemap*.xml$\n";
$prnContent .= "Sitemap: http://example.com/sitemap-images.xml\n\n";
}
if ("$crawlerName" == "Mediapartners-Google") {
$prnContent .= "$prnUserAgent $crawlerName \n Disallow:\n\n";
}

print $prnContent;
?>

Say the user agent is Googlebot-Image, the code above will output this robots.txt:
User-agent: *
Disallow: /cgi-bin/
 
User-agent: Googlebot-Image
Disallow: /
Allow: /cuties/*.jpg$
Allow: /hunks/*.gif$
Allow: /sitemap*.xml$
Sitemap: http://example.com/sitemap-images.xml

(Please note that crawler sections must be delimited by an empty line, and that if there’s a section for a particular crawler, this spider will ignore the general directives. Please consider reading more pamphlets discussing robots.txt and dull stuff like that.)

That’s it. Adapt. Enjoy.




Validate your robots.txt - Googlebot becomes smarter

Last week I reported that Google is experimenting with new crawler directives for use in robots.txt. Today Google confirmed that Googlebot understands experimental REP syntax like Noindex:.

That means that forgotten –and, until recently, ignored– statements in your robots.txt might change the crawler’s behavior all of a sudden, without notice. I don’t know for sure which experimental crawler directives Google has implemented, but for example a line like
Noindex: /
in your robots.txt will now deindex your complete Web site.

“Noindex:” is not defined in the Robots Exclusion Protocol from 1994, and not mentioned in Google’s official documents.

John Müller from Google Zürich states:

At the moment we will usually accept the “noindex” directive in the robots.txt, but we are not yet at a point where we are willing to set it into stone and announce full support.

[…] I just want to remind everyone again that this is something that may still change over time. Be careful when playing with things like this.

My understanding of “be careful” is:

  • Create a separate section for Googlebot (see the example after this list). Do not rely on directives addressing all Web robots. If you have a Googlebot section already, Google’s crawler will ignore directives set under “all user agents” and process only the Googlebot section. Repeat all statements from User-agent: * under User-agent: Googlebot to make sure that Googlebot obeys them.
  • RTFM
  • Do not use other crawler directives than
    Disallow:
    Allow:
    Sitemap:
    in the Googlebot section.
  • Don’t mess-up pattern matching.
    * matches a sequence of characters
    $ specifies the end of the URL
    ? separates the path from the query string, you can’t use it as wildcard!
  • Validate your robots.txt with the cool robots.txt analyzer in your Google Webmaster Console.
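Put together, a conservative robots.txt following that advice could look like this (the paths are just examples):
User-agent: Googlebot
Disallow: /cgi-bin/
Disallow: /tmp/
Allow: /tmp/public-report.html
Sitemap: http://example.com/sitemap.xml
 
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/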

Folks put the funniest stuff into their robots.txt, for example images or crawl delays like “Don’t crawl this site during our office hours”. Crawler directives from robots meta tags aren’t very popular, but they appear in many robots.txt files. Hence it makes sound sense to make use of what people express, regardless of the syntax errors.

Also, having the opportunity to manage page-specific crawler directives like “noindex”, “nofollow”, “noarchive” and perhaps even “nopreview” at the site level is a huge time saver, and eliminates many points of failure. Kudos to Google for this initiative, I hope it will make it into the standards.

I’ll test the experimental robots.txt directives and post the results. Perhaps I’ll set up a live test like this one.

Take care.


Update: Here is the live test of suspected, or rather desired, new crawler directives for robots.txt. I’ve added a few unusual statements to my robots.txt and uploaded scripts to monitor search engine crawling. The test pages provide links to search queries so you can check whether Google has indexed them or not.

Please don’t link to the crawler traps, I’ll update this post with my findings. Of course I appreciate links, so here is the canonical URL:
http://sebastians-pamphlets.com/validate-your-robots-txt-or-google-might-deindex-your-site/#live-robots-txt-test

Please note that you should not make use of the crawler directives below on production systems! Bear in mind that you can achieve all that with simple X-Robots-Tags in the HTTP headers. That’s a bullet-proof way to apply robots meta tags to files without touching them, and it works with virtual URIs too. X-Robots-Tags are sexy, but many site owners can’t handle them for various reasons, whereas corresponding robots.txt syntax would be usable for everybody (not suffering from restrictive and/or free hosts).
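For example, to keep all PDF documents out of the index without touching the files themselves, a mod_headers snippet in .htaccess along these lines would do (a sketch; adjust the file pattern and the tag values to your needs):
<IfModule mod_headers.c>
<FilesMatch "\.pdf$">
Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>
</IfModule>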

Noindex:

robots.txt:
Noindex: /repstuff/noindex.php

Expected behavior:
No crawling/indexing. It seems Google interprets “Noindex:” as “Disallow:”.
Desired behavior:
“Follow:” is the REP’s default, hence Google should fetch everything and follow the outgoing links, but shouldn’t deliver Noindex’ed contents on the SERPs, not even as URL-only listings.
Google’s robots.txt validator:
http://sebastians-pamphlets.com/repstuff/noindex.php Blocked by line 30: Noindex: /repstuff/noindex.php
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled (possibly caused by an outdated robots.txt cache).
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex.php.
2007-11-23: indexed and cached a page linked only from noindex.php.
(If an outdated robots.txt cache falsely allowed crawling, the search result(s) should disappear shortly after the next crawl.)
2007-11-26: deindexed, the same goes for the linked page (without recrawling).
2007-12-07: appeared under “URLs restricted by robots.txt” in GWC.
2007-12-17: I consider this case closed. Noindex: blocks crawling, deindexes previously indexed pages, and is suspected to block incoming PageRank.

Nofollow:

robots.txt:
Nofollow: /repstuff/nofollow.php

Expected behavior:
Crawling, indexing, and following the links as if there’s no “Nofollow:”.
Desired behavior:
Crawling, indexing, and ignoring outgoing links.
Google’s robots.txt validator:
Line 31: Nofollow: /repstuff/nofollow.php Syntax not understood
http://sebastians-pamphlets.com/repstuff/nofollow.php Allowed
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from nofollow.php (21 Nov 2007 23:19:37 GMT, for some reason not logged properly).
2007-11-23: indexed and cached a page linked only from nofollow.php.
2007-11-26: recrawled, deindexed, no longer cached. The same goes for the linked page.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:12 GMT” doesn’t match the last crawl on “2007-11-26 16:47:11 EST” (EST = GMT-5).
2007-12-07: recrawled, still deindexed, cached. Linked page recrawled, cached.
2007-12-17: recrawled, still deindexed (probably caused by near duplicate content on noarchive.php and other pages involved in this test), cached copy dated 2007-12-07. Cache of the linked page still dated 2007-11-21. I consider this case closed. Nofollow: doesn’t work as expected, Google doesn’t support this statement.

Noarchive:

robots.txt:
Noarchive: /repstuff/noarchive.php

Expected behavior:
Crawling, indexing, following links, but no “Cached” links on the SERPs and no access to cached copies from the toolbar.
Desired behavior:
Crawling, indexing, following links, but no “Cached” links on the SERPs and no access to cached copies from the toolbar.
Google’s robots.txt validator:
http://sebastians-pamphlets.com/repstuff/noarchive.php Allowed
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noarchive.php.
2007-11-23: indexed and cached a page linked only from noarchive.php.
2007-11-26: recrawled, deindexed, no longer cached. The linked page was deindexed without recrawling.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:19 GMT” doesn’t match the last crawl on “2007-11-26 16:47:18 EST” (EST = GMT-5).
2007-11-29: recrawled, cache not yet updated.
2007-12-07: recrawled. Linked page recrawled.
2007-12-08: recrawled.
2007-12-11: recrawled the linked page, which is cached but not indexed.
2007-12-12: recrawled.
2007-12-17: still indexed, cached copy dated 2007-12-08. I consider this case closed. Noarchive: doesn’t work as expected, actually it does nothing although according to the robots.txt validator that’s supported –or at least known and accepted– syntax.

(It looks like Google understands Nosnippet: too, but I didn’t test that.)

Nopreview:

robots.txt:
Nopreview: /repstuff/nopreview.pdf

Expected behavior:
None, unfortunately.
Desired behavior:
No “view as HTML” links on the SERPs. Neither “nosnippet” nor “noarchive” suppress these helpful preview links, which can be pretty annoying in some cases. See NOPREVIEW: The missing X-Robots-Tag.
Google’s robots.txt validator:
Line 33: Nopreview: /repstuff/nopreview.pdf Syntax not understood
http://sebastians-pamphlets.com/repstuff/nopreview.pdf Allowed
Status:
Crawler requests of nopreview.pdf are logged here.
Google’s crawler / indexer:
2007-11-21: crawled the nopreview-pdf and the log page nopreview.php.
2007-11-23: indexed and cached the log file nopreview.php.
[2007-11-23: I replaced the PDF document with a version carrying a hidden link to an HTML file, and resubmitted it via Google’s add-url page and a sitemap.]
2007-11-26: The old version of the PDF is cached as a “view-as-HTML” version without links (considering the PDF was a captured print job, that’s a pretty decent result), and appears on SERPs for a quoted search. The page linked from the PDF and the new PDF document were not yet crawled.
2007-12-02: PDF recrawled. Googlebot followed the hidden link in the PDF and crawled the linked page.
2007-12-03: “View as HTML” preview not yet updated, the linked page not yet indexed.
2007-12-04: PDF recrawled. The preview link reflects the content crawled on 12/02/2007. The page linked from the PDF is not yet indexed.
2007-12-07: PDF recrawled. Linked page recrawled.
2007-12-09: PDF recrawled.
2007-12-10: recrawled linked page.
2007-12-14: PDF recrawled. Cached copy of the linked page dated 2007-12-11.
2007-12-17: I consider this case closed. Neither Nopreview: nor Noarchive: (in robots.txt since 2007-12-04) are suitable to suppress the HTML preview of PDF files.

Noindex: Nofollow:

robots.txt:
Noindex: /repstuff/noindex-nofollow.php
Nofollow: /repstuff/noindex-nofollow.php

Expected behavior:
No crawling/indexing, invisible on SERPs.
Desired behavior:
No crawling/indexing, and no URL-only listings, ODP titles/descriptions and stuff like that on the SERPs. “Noindex:” in combination with “Nofollow:” is a paraphrased “Disallow:”.
Google’s robots.txt validator:
http://sebastians-pamphlets.com/repstuff/noindex-nofollow.php Blocked by line 35: Noindex: /repstuff/noindex-nofollow.php
Line 36: Nofollow: /repstuff/noindex-nofollow.php Syntax not understood
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex-nofollow.php.
2007-11-23: indexed and cached a page linked only from noindex-nofollow.php.
2007-11-26: deindexed without recrawling, the same goes for the linked page.
2007-11-29: the cached copy retrieved on 11/21 reappeared.
2007-12-08: appeared under “URL restricted by robots.txt” in my GWC acct.
2007-12-17: Case closed, see Noindex: above.

Noindex: Follow:

robots.txt:
Noindex: /repstuff/noindex-follow.php
Follow: /repstuff/noindex-follow.php

Expected behavior:
No crawling/indexing, hence unfollowed links.
Desired behavior:
Crawling, following and indexing outgoing links, but no SERP listings.
Google’s robots.txt validator:
http://sebastians-pamphlets.com/repstuff/noindex-follow.php Blocked by line 38: Noindex: /repstuff/noindex-follow.php
Line 39: Follow: /repstuff/noindex-follow.php Syntax not understood
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from noindex-follow.php.
2007-11-23: indexed and cached a page linked only from noindex-follow.php.
2007-11-26: deindexed without recrawling, the same goes for the linked page.
2007-12-08: appeared under “URL restricted by robots.txt” in my GWC acct.
2007-12-17: Case closed, see Noindex: above. Google didn’t crawl respectively deindexed despite the Follow: directive.

Index: Nofollow:

robots.txt:
Index: /repstuff/index-nofollow.php
Nofollow: /repstuff/index-nofollow.php

Expected behavior:
Crawling/indexing, following links.
Desired behavior:
Crawling/indexing but ignoring outgoing links.
Google’s robots.txt validator:
Line 41: Index: /repstuff/index-nofollow.php Syntax not understood
Line 42: Nofollow: /repstuff/index-nofollow.php Syntax not understood
http://sebastians-pamphlets.com/repstuff/index-nofollow.php Allowed
Status:
See test page
Google’s crawler / indexer:
2007-11-21: crawled.
2007-11-23: indexed and cached.
2007-11-21: crawled a page linked only from from index-nofollow.php.
2007-11-23: indexed and cached a page linked only from from index-nofollow.php.
2007-11-26: recrawled and deindexed. The linked page was deindexed without recrawling.
2007-11-28: cached again, the timestamp on the cached copy “27 Nov 2007 01:11:26 GMT” doesn’t match the last crawl on “2007-11-26 16:47:25 EST” (EST = GMT-5).
2007-12-02: recrawled, the cached copy has vanished.
2007-12-07: recrawled. Linked page recrawled.
2007-12-08: recrawled.
2007-12-09: recrawled.
2007-12-10: recrawled.
2007-12-17: cached copy dated 2007-12-10, not indexed. Linked page not cached, not indexed. I consider this case closed. Google currently supports neither Index: nor Nofollow:.

(I didn’t test Noodp: and Unavailable_after [RFC 850 formatted timestamp]:, although both directives would make sense in robots.txt too.)

2007-11-20:
Added the experimental statements to robots.txt.

2007-11-21:
Linked the test pages. Google crawled all of them, including the pages submitted via links on test pages.

2007-11-23:
Most (all but the PDF document) URLs appear on search result pages. If an outdated robots.txt cache falsely allowed crawling although the Webmaster Console validator said “Blocked”, the search results should disappear shortly after the next crawl. I’ve created a sitemap for all URLs above and submitted it. Although I’ve –for the sake of this experiment– cloaked text as well as links and put white links on white background, luckily there is no “we caught you black hat spammer” message in my Webmaster Console. Googlebot nicely followed the cloaked links and indexed everything.
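For the record, the cloaking was nothing fancy — roughly this kind of user agent check. A simplified sketch, not the actual test code, and the linked file name is made up:

<?php
// Simplified illustration of the user agent cloak used for this experiment
// (a sketch only, not the actual test code; the file name is made up).
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (stripos($ua, 'Googlebot') !== false) {
    // spider fodder: a plainly visible link for the crawler
    echo '<a href="/repstuff/linked-test-page.php">test link</a>';
} else {
    // human visitors get the white-on-white version
    echo '<a href="/repstuff/linked-test-page.php" style="color:#fff;background-color:#fff">test link</a>';
}
?>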

2007-11-26:
Google recrawled a few pages (noarchive.php, index-nofollow.php and nofollow.php), then deindexed all of them. Only the PDF document is indexed, and Google created a “view-as-HTML” preview from this captured print job. It seems that Google crawled something from a host other than “*.googlebot.com”; unfortunately I didn’t log all requests. Probably the deindexing was done by a sneaky bot discovering the simple cloaking. Since the linked URLs are out and 3rd party links to them can’t ruin the experiment any longer, I’ve stopped cloaking and show the same text/links to bots and users (actually, users see one more link, but that should be fine with Google). There’s still no “thou shalt not cloak” message in my GWC account. Well, those pages are fairly new, perhaps not fully settled in the search index, so let’s see what happens next.

2007-11-28
The PDF file as well as the three pages recrawled on 11/26/2007 21:45:00 GMT were reindexed, but the timestamp on the cached copies says “retrieved on 27 Nov 2007 01:15:00 GMT”. Maybe the date/time displayed on cached page copies doesn’t reflect Ms. Googlebot’s “fetched” timestamp, but the time the indexer pulled the page out of the centralized crawl results cache 3.5 hours after crawling.

It seems the “Noarchive:” directive doesn’t work, because noarchive.php was crawled and indexed twice providing a cached page copy. My “Nopreview:” creation isn’t supported either, but maybe Dan Crow’s team picks it up for a future update of their neat X-Robots-Tags (I hope so).

The noindex’ed pages (noindex.php, noindex-nofollow.php and noindex-follow.php) weren’t recrawled and remain deindexed. Interestingly, they don’t appear under “URLs blocked by robots.txt” in my GWC account. Provided the first crawling and indexing on 11/21/2007 was a “mistake” caused by a way too long cached robots.txt, and the second crawl on 11/26/2007 obeyed the “Noindex:” but ignored the (implicit) “Follow:”, it seems that indeed Google interprets “Noindex:” in robots.txt as “Disallow:”. If that is so and if it’s there to stay, they’re going to totally mess up the REP.

<rant> I mean, promoting a rel-nofollow microformat that –at least at launch time– didn’t share its semantics with the REP’s meta tags nor the –later introduced– X-Robots-Tags was bad enough. Ok, meanwhile they’ve corrected this conspiracy flaw by altering the rel-nofollow semantics step by step until “nofollow” in the REL attribute actually means nofollow and no longer “pass no reputation”, at least at Google. Other engines still handle rel-nofollow according to the initial and officially still binding standard, and a gazillion Webmasters are confused as hell. In other words only a few search geeks understand what rel-nofollow is all about, but Google jauntily penalizes the great unwashed for not complying with the incomprehensible. By the way, that’s why I code rel="nofollow crap". Standards should be clear and unambiguous. </rant>

If Google really introduced a “Noindex:” directive in robots.txt that equals “Disallow:”, that would be totally evil. A few sites out there might have an erroneous “Noindex:” statement in their robots.txt that is meant as “Disallow:”, and it’s nice that Google tries to do them a favor. Screwing the REP for the sole purpose of complying with syntax errors, on the other hand, makes no sense. “Noindex” means crawl it, follow its links, but don’t index it. Semantically “Noindex: Nofollow:” equals “Disallow:”, but a “Noindex:” alone implies a “Follow:”, hence crawling is not only allowed but required.
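To illustrate the intended semantics (my reading, mapped to the long-established robots meta tag values):

Noindex: /foo/
# crawl /foo/*, follow its links, just keep it out of the index
# per-page equivalent: <meta name="robots" content="noindex,follow">

Noindex: /bar/
Nofollow: /bar/
# crawl /bar/*, but neither index it nor follow its links
# per-page equivalent: <meta name="robots" content="noindex,nofollow">

Disallow: /baz/
# don't even crawl /baz/*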

I really hope that we’re watching an experiment in its early stage, and that Google will do the right thing eventually. Allowing the REP’s page specific crawler directives in robots.txt is a fucking brilliant move, because technically challenged publishers can’t handle the HTTP header’s X-Robots-Tag, and applying those directives to groups of URIs is a great method to steer crawling and indexing not only with static sites.
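Sending the header itself isn’t black magic, by the way — with Apache’s mod_headers a couple of lines in .htaccess tag, say, all PDF files. A sketch, assuming mod_headers is available:

<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>

It’s just that the average publisher never touches server configuration, while everybody can edit a robots.txt file.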

Dear Google engineers, please consider the nopreview directive too, and implement (no)index, (no)follow, noarchive, nosnippet, noodp/noydir and unavailable_after with the REP’s meaning. And while you’re at it, I want block level instructions in robots.txt too. For example
Area: /products/ DIV.hMenu,TD#bNav,SPAN.inherited "noindex,nofollow"

could instruct crawlers to ignore duplicated properties in product descriptions and the horizontal menu as well as the navigation elements in a table cell with the DOM-ID “bNav” at the very bottom of all pages in /products/,
Area: / A.advertising REL="nofollow"

could condomize all links with the class name “advertising”, and so on.
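Parsing such a hypothetical Area: line wouldn’t be rocket science for a crawler, by the way. A minimal sketch (PHP, assuming the made-up syntax proposed above):

<?php
// Sketch of a parser for the proposed -- purely hypothetical -- "Area:" syntax:
// Area: <path-prefix> <selector-list> <action>
$rules = array();
foreach (file('robots.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    if (preg_match('/^Area:\s+(\S+)\s+(\S+)\s+(.+)$/i', $line, $m)) {
        $rules[] = array(
            'path'      => $m[1],               // URI prefix, e.g. /products/
            'selectors' => explode(',', $m[2]), // DOM selectors the rule applies to
            'action'    => trim($m[3]),         // e.g. "noindex,nofollow" or REL="nofollow"
        );
    }
}
print_r($rules);
?>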

2007-11-29
The pages linked from the test pages still don’t come up in search results, noarchive.php was recrawled and remains cached, the cached copy of noindex-nofollow.php retrieved on 11/21/2007 reappeared (probably a DC roller coaster issue).

2007-11-30
Three URLs remain indexed: nopreview.pdf, noarchive.php and noindex-nofollow.php. The cached copies show the content crawled on Nov/21/2007. Everything else is deindexed. That won’t last (index roller coaster).
As a side note: the URL from my first noindex-robots.txt test appeared in my GWC account under “URLs restricted by robots.txt (Nov/27/2007)”, three days after the unsuccessful crawl.

2007-12-02
A few pages were recrawled, Googlebot followed the hidden link in the PDF file.

2007-12-03
In my GWC crawl stats noindex-nofollow.php appeared under “URLs restricted by robots.txt”, but it’s still indexed.

2007-12-04
The preview (cache) of nopreview.pdf was updated. Since obviously Nopreview: doesn’t work, I’ve added
Noarchive: /repstuff/nopreview.pdf

to my robots.txt. Let’s see whether Google removes the cache, i.e. the HTML preview, or not.

2007-12-06
Shortly after the change in robots.txt (Noarchive: /repstuff/nopreview.pdf) Googlebot recrawled the PDF file on 12/04/2007. Today it’s still cached, the HTML preview is still available and linked from SERPs.

2007-12-07
Googlebot has recrawled a few pages. Everything except noarchive.php and nopreview.pdf is deindexed.

2007-12-17
I consider the test closed, but I’ll keep the test pages up so that you can monitor crawling and indexing yourself. Noindex: is the only directive that somewhat works, but it’s implemented completely wrong and is not acceptable in its current shape.

Interestingly the sitemaps report in my GWC account says that 9 pages from 9 submitted URLs were indexed. Obviously “indexed” means something like “crawled at least once, perhaps indexed, maybe not, so if you want to know that definitively then get your lazy butt to check the SERPs yourself”. How expensive would it be to tell something like “Total URLs in sitemap: 9 | Indexed URLs in sitemap: 2”?




Microsoft funding bankrupt Live Search experiment with porn spam

If only this headline were linkbait … of course it’s not sarcastic.

M$ PORN CASH
Rumors are out that Microsoft will launch a porn affiliate program soon. The top secret code name for this project is “pornbucks”, but analysts say that it will be launched as “M$ SMUT CASH” next year or so.

Since Microsoft just can’t ship anything in time, and the usual delays aren’t communicated internally, their search dept. began to promote it to Webmasters this summer.

Surprisingly, Webmasters across the globe weren’t that excited to find promotional messages from Live Search in their log files, so a somewhat confused MSN dude posted a lame excuse to a large Webmaster forum.

Meanwhile we found out that Microsoft Live Search doesn’t target only the adult entertainment industry; they’re testing the waters with other money terms like travel or pharmaceutical products too.

Anytime soon the Live Search menu bar will be updated to something like this:
Live Search Porn Spam Menu

Here is the sad –but true– story of a search engine’s downfall.

A few months ago Microsoft Live Search discovered that x-rated referrer spam is a must-have technique in a sneaky smut peddler’s marketing toolbox.

Since August 2007 a bogus Web robot follows Microsoft’s search engine crawler “MSNbot” to spam the referrer logs of all Web sites out there with URLs pointing to MSN search result pages featuring porn.

Read your referrer logs and you’ll find spam from Microsoft too, but perhaps they peeve you with viagra spam, offer you unwanted but cheap payday loans, or try to enlarge your penis. Of course they know every trick in the book on spam, so check for harmless catchwords too. Here is an example URL:
http://search.live.com/results.aspx?q= spammy-keyword &mrt=en-us&FORM=LIVSOP

Microsoft’s spam bot not only leaves bogus URLs in log files, hoping that Webmasters will click them on their referrer stats pages and maybe sign up for something like “M$ Porn Bucks” or so. It even downloads and renders adverts powered by their rival Google, lowering their CTR; obviously to make programs like AdSense less attractive in comparison with Microsoft’s own ads (sorry, no link love from here).

Let’s look at Microsoft’s misleading statement:

The traffic you are seeing is part of a quality check we run on selected pages. While we work on addressing your concerns, we would request that you do not actively block the IP addresses used by this quality check; blocking these IP addresses could prevent your site from being included in the Live Search index.

  • That’s not traffic, that’s bot activity: These hits come within seconds of a page being crawled by MSNBot. The pattern is like this: the page is requested by MSNBot (which is authenticated, so it’s genuine) and within a few seconds, the very same page is requested with a live.com search result URL as referer by the MSN spam bot faking a human visitor (see the log-scan sketch after this list).
  • If that’s really a quality check to detect cloaking, that’s more than just lame. The IP addresses don’t change, the bogus bot uses a static user agent name, and there are other footprints which allow every cloaking script out there to serve this sneaky bot the exact same spider fodder that MSNbot got seconds before. This flawed technique might catch poor man’s cloaking every once in a while, but it can’t fool savvy search marketers.
  • The FUD “could prevent your site from being included in the Live Search index” is laughable, because in most niches MSN search traffic is nonexistent.
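If you want to know whether they hit your site too, a quick log scan does the trick. A minimal sketch (PHP, assuming Apache’s “combined” log format; the log path is a placeholder):

<?php
// Sketch: count hits per IP that carry a live.com search result URL as referrer --
// the footprint of the bogus "quality check" bot described above.
$log = '/path/to/access.log'; // placeholder -- point this to your access log
$pattern = '#^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "([^"]*)" "[^"]*"#';
$suspects = array();
foreach (file($log) as $line) {
    if (!preg_match($pattern, $line, $m)) continue;
    list(, $ip, $referrer) = $m;
    if (stripos($referrer, 'search.live.com/results.aspx') !== false) {
        $suspects[$ip] = isset($suspects[$ip]) ? $suspects[$ip] + 1 : 1;
    }
}
arsort($suspects);
foreach ($suspects as $ip => $hits) {
    echo "$ip\t$hits hits with a live.com SERP referrer\n";
}
?>

Cross-check the flagged IPs against genuine MSNBot requests in the same log and you’ll see the pattern described above.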

All major search engines, including MSN, promise that they obey the robots exclusion standard. Obeying robots.txt is the holy grail of search engine crawling. A search engine that ignores robots.txt and other normed crawler directives cannot be trusted. The crappy MSN bot doesn’t even bother to read robots.txt, so there’s no chance to block it with standardized methods. Only IP blocking can keep it out, but then it still seems to download ads from Google’s AdSense servers by executing the JavaScript code that the MSN crawler gathered before (not obeying Google’s AdSense robots.txt either).
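Blocking by IP is then a few lines in .htaccess (a sketch in Apache 2.2 syntax, assuming mod_authz_host; the range below is a placeholder from the documentation block, not Microsoft’s actual addresses — use what you see in your own logs):

Order Allow,Deny
Allow from all
# placeholder range -- replace with the offending IPs/ranges from your logs
Deny from 192.0.2.0/24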

This unethical spam bot downloading all images, external CSS and JS files, and whatnot also burns bandwidth. That’s plain theft.

Since this method cannot detect (most) cloaking, and the so-called “search quality control bot” doesn’t stop visiting sites which obviously do not cloak, it is a sneaky marketing tool. Whether or not Microsoft Live Search tries to promote cyberspace porn and on-line viagra shops plays no role. Even spamming with safe-at-work keywords is evil. Do these assclowns really believe that such unethical activities will increase the usage of their tiny and pretty unpopular search engine? Of course they do, otherwise they would have shut down the spam bot months ago.

Dear reader, please tell me: what do you think of a search engine that steals (bandwidth and AdSense revenue), lies, spams away, and is not clever enough to stop their criminal activities when they’re caught?

Recently a Live Search rep whined in an interview because so many robots.txt files out there block their crawler:

One thing that we noticed for example while mining our logs is that there are still a fair number of sites that specifically only allow Googlebot and do not allow MSNBot.

There’s a suitable answer, though. Update your robots.txt:

User-agent: MSNbot
Disallow: /




Q&A: An undocumented robots.txt crawler directive from Google

What's the fuss about noindex in Google's robots.txt?
Blogging should be fun every now and then. Today I don’t tell you anything new about Google’s secret experiments with the robots exclusion protocol. I ask you instead, because I’m sure you know your stuff. Unfortunately, the Q&A on undocumented robots.txt syntax from Google’s labs utilizes JavaScript, so perhaps it looks somewhat weird in your feed reader.

Q: Please look at this robots.txt file and figure out why it’s worth a Q&A with you, my dear reader:


User-Agent: *
Disallow: /
Noindex: /

Ok, click here to show the first hint.

I know, this one was a breeze, so here comes your challenge.
Q: Which crawler directive used in the robots.txt above was introduced in 1996 in the Robots Exclusion Protocol (REP), but was not defined in its very first version from 1994?

Ok, click here to show the second hint.

Congrats, you are smart. I’m sure you don’t need to look up the next answers.
Q: Which major search engine has a team permanently working on REP extensions and releases those quite frequently, and who is the engineer in charge?

Ok, click here to show the third hint.

Exactly. Now we’ve gathered all the pieces of this robots.txt puzzle.
Q: Could you please summarize your cognitions and conclusions?

Ok, click here to show the fourth hint.

Thank you, dear reader! Now let’s see what we can dig out. If the appearance of a “Noindex:” directive in robots.txt is an experiment, it would make sense that Ms. Googlebot understands and obeys it. Unfortunately, I sold all the source code I’ve stolen from Google and didn’t keep a copy for myself, so I need to speculate a little.

Last time I looked, Google’s cool robots.txt validator merely emulated crawler behavior, which means that the crawlers understood syntax the validator didn’t handle correctly. Maybe this was changed in the meantime; perhaps the validator pulls its code from the “real thing” now, or at least the “Noindex:” experiment may have found its way into the validator’s portfolio. So I thought that testing the newish robots.txt statement “Noindex:” in the Webmaster Console is worth a try. And yes, it told me that Googlebot understands this command, and interprets it as “Disallow:”.
Blocked by line 27: Noindex: /noindex/

Since validation is no proof of crawler behavior, I’ve set up a page “blocked” with a “Noindex:” directive in robots.txt and linked it in my sidebar. The noindex statement was in place long enough before I uploaded and linked the spider trap, so the engines shouldn’t use a cached robots.txt when they follow my links. My test is public, feel free to check out my robots.txt as well as the crawler log.
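The logging script behind such a spider trap can be as simple as this sketch (an illustration, not my actual test script; the log path is a placeholder). It records every hit and verifies genuine Googlebot requests with a reverse plus forward DNS lookup:

<?php
// Sketch of a spider trap logger (illustration only, not the actual test script).
$ip   = $_SERVER['REMOTE_ADDR'];
$host = gethostbyaddr($ip); // genuine Googlebot resolves to *.googlebot.com
$genuine = (preg_match('/\.googlebot\.com$/i', $host) && gethostbyname($host) === $ip);
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-';
$line = sprintf("%s\t%s\t%s\t%s\t%s\n",
    gmdate('Y-m-d H:i:s'), $ip, $host,
    $genuine ? 'verified Googlebot' : 'other', $ua);
file_put_contents('/path/to/noindex-crawler.log', $line, FILE_APPEND); // placeholder path
?>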

While I’m waiting for the expected growth of my noindex crawler log, I’m speculating. Why the heck would Google use a new robots.txt directive which behaves like the good old Disallow: statement? Makes no sense to me.

Let’s not forget that this mysterious noindex statement was discovered in the robots.txt of Google’s ad server, not in the better known and closely watched robots.txt of google.com. Google is not the only search engine trying to better understand client-side code. None of the major engines should be interested in crawling ads for ranking purposes. The MSN/LiveSearch referrer spam fiasco demonstrates that search engine bots can fetch and render Google ads output in iFrames on pagead2.googlesyndication.com.

Since to this day nobody besides Google supports the X-Robots-Tag (sending “noindex” and other REP directives in the HTTP header), maybe the engines have a silent deal that content marked with “Noindex:” in robots.txt shouldn’t be indexed. Microsoft’s bogus spam bot, which doesn’t bother with robots.txt because it somewhat haplessly tries to emulate a human surfer, is not considered a crawler; its existence just proves that “software shop” is not a valid label for M$.

This theory has a few weak points, but it could point to something. If noindex in robots.txt really prevents indexing of content crawled by accident, or of non-HTML content that can’t supply robots meta tags, that would be a very useful addition to the robots exclusion protocol. Of course we’d then need Noarchive:, Nofollow: and Nopreview: too, probably more, but I’m not really in a greedy mood today.

Back to my crawler trap. Refreshing the log reveals that 30 minutes after spreading links pointing to it, Googlebot has fetched the page. That seems to prove that the Noindex: statement doesn’t prevent crawling, regardless of the false (?) information handed out by Google’s robots.txt validator.

(Or didn’t I give Ms. Googlebot enough time to refetch my robots.txt? Dunno. The robots.txt copy in my Google Webmaster Console still doesn’t show the Noindex: statement, but I doubt that’s the version Googlebot uses, because according to the last-downloaded timestamp in GWC the robots.txt had already been changed by the time of that download. Never mind. If I was way too impatient, I still can test whether a newly discovered noindex directive in robots.txt actually deindexes stuff or not.)

On with the show. The next interesting question is: Will the crawler trap page make it into Google’s search index? Without the possibly non-effective noindex directive, a few hundred links should be able to accomplish that. Alas, a quoted search query delivers zilch so far.

Of course I’ve asked Google for more information, but didn’t receive a conclusive answer so far. While waiting for an official statement, I take a break from live blogging this quick research in favor of terrorizing a few folks with disrespectful blog comments. Stay tuned. Be right back.


Well, meanwhile I had dinner, the kids fell asleep –hopefully until tomorrow morning–, but nothing else happened. A very nice and friendly Googler is trying to find out what the noindex in robots.txt fuss is all about; thanks, I can’t wait! However, I suspect the info is either forgotten or deeply buried in some well secured top secret code libraries, hence I’ll push the red button soon.


Thanks to Google’s great Webmaster Central team, especially Susan, I learned that I was flogging a dead horse. Here is Google’s take on Noindex in robots.txt:

As stated in my previous note, I wasn’t aware that we recognized any directives other than Allow/Disallow/Sitemap, so I did some asking around.

Unfortunately, I don’t have an answer that I can currently give you. […] I can’t contribute any clarifications right now.

Thank you Susan!

Update: John Müller from Google has just confirmed that their crawler understands the Noindex: syntax, but it’s not yet set in stone.



