Archived posts from the 'MSN' Category

MSN spam to continue says the Live Search Blog

It seems MSN/Live Search has tweaked their rogue bots and continues to spam innocent Web sites on the off chance that they might be cloaking. I see a rant coming, but first the facts and news.

Since August 2007, MSN has been running a bogus bot that follows their crawler and fakes a human visitor coming from a search results page. This spambot downloads everything from a page: images and other objects, external CSS/JS files, and ad blocks, even rendering contextual advertising from Google and Yahoo. It fakes MSN SERP referrers, diluting the search-term stats with generic and unrelated keywords. Webmasters running non-adult sites wondered why a database tutorial suddenly ranks for [oral sex] and why MSN sends visitors searching for [MILF pix] to a teenager’s diary. Webmasters assumed that MSN was after deceitful cloaking, and laughed out loud because the webspam-detection method was that primitive and easy to fool.

Now MSN admits all their sins (except the launch of a porn affiliate program) and has posted a vague excuse on their Webmaster Blog, telling the world that they discovered the evil cloakers and their index is somewhat spam-free now. Donna has chatted with the MSN spam team about their spambot and reports that blocking its IP addresses is a bad idea, even for sites that don’t cloak. Vanessa Fox summarized MSN’s poor man’s cloaking detection at Search Engine Land:

And one has to wonder how effective methods like this really are. Those savvy enough to cloak may be able to cloak for this new cloaker detection bot as well.

They say that they no longer spam sites that don’t cloak, but reverse this statement by telling Donna

we need to be able to identify the legitimate and illegitimate content

and Vanessa

sites that are cloaking may continue to see some amount of traffic from this bot. This tool crawls sites throughout the web — both those that cloak and those that don’t — but those not found to be cloaking won’t continue to see traffic.

Here is an excerpt from yesterday’s referrer log of a site that does not cloak, and never did:
http://search.live.com/results.aspx?q=webmaster&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=smart&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=search&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=progress&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=google&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=google&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=domain&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=database&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=content&mrt=en-us&FORM=LIVSOP
http://search.live.com/results.aspx?q=business&mrt=en-us&FORM=LIVSOP
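Log excerpts like the one above are easy to audit programmatically. Here is a minimal sketch (the function name `faked_search_terms` is my own, not from any tool mentioned in the post) that pulls the `q=` keyword out of live.com SERP referrers, so you can see exactly which generic terms the bot is injecting into your stats:

```python
from urllib.parse import urlparse, parse_qs

def faked_search_terms(referrers):
    """Extract the q= search term from live.com SERP referrer URLs.

    A quick way to see which keywords MSN's spambot injects
    into a site's referrer statistics.
    """
    terms = []
    for url in referrers:
        parsed = urlparse(url)
        if parsed.netloc.endswith("search.live.com"):
            q = parse_qs(parsed.query).get("q")
            if q:
                terms.append(q[0])
    return terms

log_excerpt = [
    "http://search.live.com/results.aspx?q=webmaster&mrt=en-us&FORM=LIVSOP",
    "http://search.live.com/results.aspx?q=google&mrt=en-us&FORM=LIVSOP",
]
print(faked_search_terms(log_excerpt))  # ['webmaster', 'google']
```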

Why can’t the MSN dudes tell the truth, not even when they apologize?

Another lie is “we obey robots.txt”. Of course the spambot doesn’t request it, so as to bypass bot traps, but according to MSN it uses a copy served to the Live Search crawler “msnbot”:

Yes, this robot does follow the robots.txt file. The reason you don’t see it download it is that we use a fresh copy from our index. The tool does respect the robots.txt the same way that MSNBot does, with a caveat: the tool behaves like a browser, and some files that a crawler would ignore will be viewed just like a real user would view them.

In reality, it doesn’t help to block CSS/JS files or images in robots.txt, because MSN’s spambot will download them anyway. The long-winded statement above translates to “We promise to obey robots.txt, but if it fits our needs we’ll ignore it”.

Well, MSN is not the only search engine running stealthy bots to detect cloaking, but they aren’t clever enough to do it in a less abusive and less detectable way.

Their insane spambot led every cloaking specialist out there straight to their not-so-obvious spam-detection methods. They may have caught a few cloaking sites, but considering the short life cycle of webspam on throwaway domains, they shot themselves in both feet. What they have really achieved is that the cloaking scripts are now immune to MSN’s spam detection.

Was it really necessary to annoy and defraud the whole webmaster community, and to burn huge amounts of bandwidth, just to catch a few cloakers who launched new scripts on new throwaway domains within hours of the MSN spambot’s first appearance?

Can cosmetic changes to their useless spam activities restore MSN’s lost reputation? I doubt it. They’ve admitted their miserable failure five months too late. Instead of dumping the spambot, they announce that they’ll spam away for the foreseeable future. How silly is that? I thought Microsoft was somewhat profit-oriented; why do they burn their money, and ours, on such amateurish projects?

Besides all this crap, MSN has some good news too. Microsoft Live Search told Search Engine Roundtable that from now on they’ll spam our sites with keywords related to our content, or at least they’ll try. And they have a forum and a contact form to gather complaints. Crap on: so much bureaucratic effort to administer their ridiculous spam-fighting funeral. They’d better build a search engine that actually sends human traffic.




Microsoft funding bankrupt Live Search experiment with porn spam

If only this headline were linkbait … of course it’s not sarcastic.

Rumors are out that Microsoft will launch a porn affiliate program soon. The top-secret code name for this project is “pornbucks”, but analysts say that it will be launched as “M$ SMUT CASH” next year or so.

Since Microsoft just can’t ship anything on time, and the usual delays aren’t communicated internally, their search dept. began promoting it to Webmasters this summer.

Surprisingly, Webmasters across the globe weren’t that excited to find promotional messages from Live Search in their log files, so a somewhat confused MSN dude posted a lame excuse to a large Webmaster forum.

Meanwhile we found out that Microsoft Live Search not only targets the adult entertainment industry, they’re testing the waters with other money terms like travel or pharmaceutical products too.

Any day now the Live Search menu bar will be updated to something like this:
[Image: Live Search porn spam menu]

Here is the sad (but true) story of a search engine’s downfall.

A few months ago, Microsoft Live Search discovered that X-rated referrer spam is a must-have technique in a sneaky smut peddler’s marketing toolbox.

Since August 2007, a bogus Web robot has been following Microsoft’s search engine crawler “MSNbot”, spamming the referrer logs of Web sites everywhere with URLs pointing to MSN search result pages featuring porn.

Read your referrer logs and you’ll find spam from Microsoft too, though perhaps they peeve you with viagra spam, offer you unwanted but cheap payday loans, or try to enlarge your penis. Of course they know every trick in the spam book, so check for harmless catchwords too. Here is an example URL:
http://search.live.com/results.aspx?q= spammy-keyword &mrt=en-us&FORM=LIVSOP

Microsoft’s spambot not only leaves bogus URLs in log files, hoping that Webmasters will click them on their referrer-stats pages and maybe sign up for something like “M$ Porn Bucks”. It even downloads and renders adverts powered by their rival Google, lowering their CTR, obviously to make programs like AdSense less attractive in comparison with Microsoft’s own ads (sorry, no link love from here).

Let’s look at Microsoft’s misleading statement:

The traffic you are seeing is part of a quality check we run on selected pages. While we work on addressing your concerns, we would request that you do not actively block the IP addresses used by this quality check; blocking these IP addresses could prevent your site from being included in the Live Search index.

  • That’s not traffic, that’s bot activity: these hits come within seconds of a crawl by MSNBot. The pattern is this: the page is requested by MSNBot (which is authenticated, so it’s genuine), and within a few seconds the very same page is requested, with a live.com search result URL as referrer, by the MSN spambot faking a human visitor.
  • If that’s really a quality check to detect cloaking, that’s more than just lame. The IP addresses don’t change, the bogus bot uses a static user agent name, and there are other footprints which allow every cloaking script out there to serve this sneaky bot the exact same spider fodder that MSNbot got seconds before. This flawed technique might catch poor man’s cloaking every once in a while, but it can’t fool savvy search marketers.
  • The FUD of “could prevent your site from being included in the Live Search index” is laughable, because in most niches MSN search traffic is practically nonexistent.
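The crawl-then-revisit pattern described in the first bullet is simple enough to flag in your own logs. Here is a hedged sketch (the record layout, function name, and the 10-second window are my assumptions for illustration, not anything MSN or the post specifies):

```python
from datetime import datetime, timedelta

def find_spambot_hits(log, window_seconds=10):
    """Flag requests matching the pattern described above: a genuine
    MSNBot fetch, then the same path re-requested within seconds by a
    'browser-like' visitor carrying a live.com SERP referrer.

    Each log record is a hypothetical minimal tuple:
    (timestamp, path, user_agent, referrer).
    """
    crawls = {}  # path -> time of last MSNBot fetch
    hits = []
    for ts, path, ua, ref in sorted(log):
        if "msnbot" in ua.lower():
            crawls[path] = ts
        elif "search.live.com/results.aspx" in ref:
            crawled = crawls.get(path)
            if crawled and ts - crawled <= timedelta(seconds=window_seconds):
                hits.append((ts, path))
    return hits

log = [
    (datetime(2008, 1, 10, 12, 0, 0), "/tutorial.html",
     "msnbot/1.0 (+http://search.msn.com/msnbot.htm)", "-"),
    (datetime(2008, 1, 10, 12, 0, 4), "/tutorial.html",
     "Mozilla/4.0 (compatible; MSIE 6.0)",
     "http://search.live.com/results.aspx?q=database&mrt=en-us&FORM=LIVSOP"),
]
print(find_spambot_hits(log))  # flags the second request
```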

All major search engines, including MSN, promise that they obey the robots exclusion standard. Obeying robots.txt is the holy grail of search engine crawling; a search engine that ignores robots.txt and other standardized crawler directives cannot be trusted. The crappy MSN bot doesn’t even bother to read robots.txt, so there’s no chance of blocking it with standardized methods. Only IP blocking can keep it out, but even then it still seems to download ads from Google’s AdSense servers by executing the JavaScript code that the MSN crawler gathered before (ignoring Google’s AdSense robots.txt as well).
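For Apache users, IP blocking boils down to a few lines of .htaccess. A sketch, assuming the classic mod_access directives; note that 203.0.113.0/24 is a documentation placeholder range, since the post doesn’t list the bot’s actual addresses, so substitute whatever you see in your own logs:

```apache
# .htaccess sketch: shut out a bot by IP when it ignores robots.txt.
# 203.0.113.0/24 is a placeholder; use the addresses from your logs.
Order Allow,Deny
Allow from all
Deny from 203.0.113.0/24
```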

This unethical spambot also burns bandwidth by downloading all images, external CSS and JS files, and whatnot. That’s plain theft.

Since this method cannot detect (most) cloaking, and the so-called “search quality control bot” doesn’t stop visiting sites which obviously do not cloak, it is a sneaky marketing tool. Whether or not Microsoft Live Search tries to promote cyberspace porn and online viagra shops plays no role; even spamming with safe-for-work keywords is evil. Do these assclowns really believe that such unethical activities will increase the usage of their tiny and pretty unpopular search engine? Of course they do, otherwise they would have shut down the spambot months ago.

Dear reader, please tell me: what do you think of a search engine that steals (bandwidth and AdSense revenue), lies, spams away, and is not clever enough to stop their criminal activities when they’re caught?

Recently a Live Search rep whined in an interview because so many robots.txt files out there block their crawler:

One thing that we noticed for example while mining our logs is that there are still a fair number of sites that specifically only allow Googlebot and do not allow MSNBot.

There’s a suitable answer, though. Update your robots.txt:

User-agent: MSNbot
Disallow: /




Better not run a web server under Windows

IIS defaults can produce serious trouble with search engines. That’s a common problem, and not even all .nhs.uk (UK Government National Health Service) admins have spotted it. I’ve alerted the Whipps Cross University Hospital but can’t email all NHS sites suffering from IIS and lazy or uninformed webmasters. So here’s the fix:

In IIS, create a site answering to the hostname without the “www” prefix, e.g. domain.nhs.uk, then go to the “Home Directory” tab and click the option “Redirection to a URL”. As “Redirect to”, enter the destination, for example “http://www.domain.nhs.uk$S$Q”, without a slash after “.uk” because the path ($S placeholder) begins with a slash. The $Q placeholder represents the query string. Next check “Exact URL entered above” and “Permanent redirection for this resource”, and submit. Test the redirection with a suitable tool.
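To make the placeholder mechanics concrete, here is a small sketch mimicking how IIS expands $S (the request path, starting with a slash) and $Q (the query string, including the question mark) into the redirect target. The function name is mine; it just illustrates why the template must not end with a slash after “.uk”:

```python
from urllib.parse import urlsplit

def iis_redirect_target(requested_url, template="http://www.domain.nhs.uk$S$Q"):
    """Mimic IIS's 'Redirection to a URL' placeholders: $S is the path
    of the original request (begins with '/'), $Q is the query string
    including the leading '?', or empty if there is none."""
    parts = urlsplit(requested_url)
    s = parts.path
    q = "?" + parts.query if parts.query else ""
    return template.replace("$S", s).replace("$Q", q)

print(iis_redirect_target("http://domain.nhs.uk/clinics/index.asp?dept=a"))
# http://www.domain.nhs.uk/clinics/index.asp?dept=a
```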

Now when a user enters a URL without the “www” prefix, s/he gets the requested page from the canonical server name. Also, search engine crawlers following non-canonical links like http://whippsx.nhs.uk/ will transmit the link love to the desired URL, and will index more pages instead of deleting them from their search indexes after a while because the server is not reachable. I’m not joking: under some circumstances, many or all www-URLs of pages referenced by relative links resolving to the non-existent server will get deleted from the search index after a couple of unsuccessful attempts to fetch them without the www prefix.

Hat tip to Robbo




Dear search engines, please bury the rel=nofollow fiasco

The misuse of the rel=nofollow initiative is getting out of control. Invented to fight comment spam, nowadays it is applied to commercial links, biased editorial links, navigational links, links to worst enemies (funny example: Matt Cutts links to a SEO blackhat with rel=nofollow), and whatever else. Gazillions of publishers and site owners add it to their links for the wrong reasons, simply because they don’t understand its intention, its mechanism, and especially not the ongoing morphing of its semantics. Even professional webmasters and search engine experts have a hard time following the nofollow beast semantically. The more its initial usage gets diluted, the more folks suspect search engines cook their secret sauce with indigestible nofollow ingredients.

Not only was rel=nofollow unable to stop blog spam bots, it came with a built-in flaw: confusion.

The good news is that the nofollow debate is currently being stoked again. Threadwatch hosts a thread titled Nofollow’s Historical Changes and Associated Hypocrisy, folks are ranting on the questionable Wikipedia decision to nofollow all outbound links, Google Video folks manipulated the PageRank algo by mistakenly plastering most of their links with rel=nofollow, and even Yahoo’s top gun Jeremy Zawodny has been unhappy with the nofollow debacle for a while now.

I say that it is possible to replace the unsuccessful nofollow mechanism with an understandable and reasonable way to allow search engine crawler directives on link level. It can be done, although there are shitloads of rel=nofollow links out there. Here is why, and how:

The value “nofollow” in the link’s REL attribute creates misunderstandings, recently even in the inventor’s company, because it is, hmmm, hapless.

In fact, back then it meant “passnoreputation” and nothing more. That is, search engines shall follow those links, they shall index the destination page, and they shall show those links in reverse-citation results. They just must not pass any reputation or topical relevancy with the link.

There were microformats better suited to achieve the goal, for example Technorati’s VoteLinks, but unfortunately the united search geeks chose a value adapted from the robots exclusion standard, which is plainly misleading because the borrowed term has absolutely nothing to do with the link value’s (intended) core functionality.
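For readers who haven’t seen it, VoteLinks expresses the vote in the `rev` attribute with the values `vote-for`, `vote-abstain`, and `vote-against`, leaving crawling and indexing semantics alone. A minimal markup sketch (URLs are placeholders):

```html
<!-- VoteLinks (Technorati microformat): rev states the author's vote
     on the destination, without pretending to be a crawler directive. -->
<a href="http://example.com/spammy-page" rev="vote-against">don't reward this</a>
<a href="http://example.com/great-tool" rev="vote-for">endorsed</a>
```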

I can think of cases where a real nofollow directive for spiders on link level makes perfect sense. It could tell the spider not to fetch a particular link destination, even if the page’s robots tag says “follow”, for example for printer-friendly pages. I’d use an “ignore this link” directive, for example, in crawlable horizontal popup menus to avoid theme dilution when every page of a section (or site) links to every other page. Actually, there is more need for spider directives at the HTML element level, not only on links, for example to tag templated and/or navigational page areas, as with Google’s section targeting.

There is nothing wrong with a mechanism to neutralize links in user input. It’s just that the value “nofollow” in the type-of-forward-relationship attribute is not suitable to label unchecked or not (yet) trusted links. If it is really necessary to adopt a well-known value from the robots exclusion standard (and don’t misunderstand me, reusing familiar terms in the right context is a good idea in general), the “noindex” value would have been a better choice (although not perfect). “Noindex” describes far better what happens in a search engine ranking algo: it doesn’t index (in the technical meaning) a vote for the target. Period.

It is not too late to replace the rel=nofollow fiasco with a better solution which could take care of some similar use cases too. Folks at Technorati, the W3C, and wherever have done the initial work already, so only a tiny task is left: extending an existing norm to enable a reasonable granularity of crawler directives on link level, or better, for HTML elements in general. Rel=nofollow would be deprecated, replaced by suitable and standardized values, and for a couple of years the engines could interpret rel=nofollow in its primordial meaning.

Ever since the rel=nofollow thingy came into existence, it has confused gazillions of non-geeky site owners, publishers, and editors on the net. Last year I got a new client who had added rel=nofollow to all his internal links, because he saw nofollowed links on a popular and well-ranked site in his industry and thought rel=nofollow might improve his own rankings. That’s just one of many examples where I’ve seen both intentional and mistaken misuse of the way-too-geeky nofollow value. As Jill Whalen points out to Matt Cutts, that’s just the beginning of net-wide nofollow insanity.

OK, we’ve learned that the “nofollow” value is a notional monster, so can we please have it removed from the search engine algos in favour of a well-thought-out solution, preferably ASAP? Thanks.




