Get IE9 today! Free download - start surfing fast and safe, instantly!

Days before Microsoft is going to release their new-ish Internet Explorer 9 (IE9), you can get your free copy of a state-of-the-art Web browser here:


The Internet says thanks to James Groome from London, UK, for an amazingly short IE9 download URI: Of course, you can still download the best and fastest Web browser out there from its original, longish download URI.

Go get your new Web browser today, to start surfing safe and fast, instantly. Never worry about Web browser updates anymore, because your new Web browser updates itself when necessary.

This page is best viewed with Chrome or Safari. You may have spotted that the download link doesn’t really lead to something like Internet Explorer 9. #GeekHumor

Share/bookmark this: del.icio.usGooglema.gnoliaMixxNetscaperedditSphinnSquidooStumbleUponYahoo MyWeb
Subscribe to      Entries Entries      Comments Comments      All Comments All Comments

Get the Google cop outta my shopping cart!

So now Google ranks my shopping SERPs by its opinion of customer service quality?

Do not want!

I’m perfectly satisfied with shopping search results ordered by relevance and (link) popularity. I do not want Google to decide where I have to buy my stuff, just because an assclown who treats his customers like shit got coverage in the NYT.

If I’m old enough to have free access to the Internet and a credit card, then I’m capable of checking out a Web shop before I buy. I don’t need to be extremely Web savvy to fire up a search for [XXX sucks] before I click on “add to cart”. Hey, even my 13yo son applies way more sophisticated methods. Google cannot and never will be able to create anything more reliable than my built-in bullshit detector.

Of course, it’s Google’s search engine. Matt’s right when he states “two different court cases have held that our search results are our opinion and protected under 1st amendment”. The problem is, sometimes I disagree with Google’s opinions.

Expressing an opinion about a site’s customer service by not showing it on the SERPs that more than 60% of this planet’s population use to find stuff is a slippery slope. A very slippery slope. It means that for example I cannot buy a pair of shoes for $40 (time of delivery 10 days, free shipping), because Google only points me to shops that sell the same pair of shoes for $100 (plus fedex overnight fees). Since when did Google’s mission statement change to “organize the world’s shopping expeditions”? Maybe I didn’t get an important memo.

Not only that. Google is well known for producing heavy collateral damage when applying changes to commercial rankings. A simple software glitch could bury the best deals on the Web, or ruin totally legit businesses suffering from fraudulent review spam spread by their competitors.

And finally, cross your heart, do you trust a search engine that far? Do you really expect Google to sort out the Web for you, not even asking how much of Google’s opinion you want to get applied when it comes to judging what appears on your personal search results? Not that Google will ever implement a slider where you can tell how much of your common sense you’re willing to invest vs. Google’s choice of goog, er, good customer service …

Well, I could live with a warning put as an anchor text like “show what boatloads of ripped-off customers told Googlebot about XXX” or so, but I do want to get the whole picture, uncensored.

End of rant.

Let’s look at the algo change from a technical point of view:

Credit where credit is due: developing and deploying a filter that catches a fraudulent Web shop “gaming Google” out of billions of indexed pages within a few days is not trivial (which translates to ‘awesome job’, coming from a geek).

It’s not so astonishing that this filter also picked up 100 clones of the jerk mentioned by the New York Times for Google’s newish shitlist. Of course it didn’t catch just another fishy site, same SOP, owned by the same guy. That makes it kind of a hand job, just executed by an algorithm. Explained in my Twitter stream: “@DaveWiner I read that Google post as ‘We realize there is a problem that we can’t solve yet. We have a short term fix for this jerk.’”, or “so yeah, I stand by my statement: it’s a hand job to manipulate the press and keep the stock from moving.”

And that’s good news, at least for today’s shape of Google’s Web search. It means that Google does not yet rank the results of each and every search with commercial intent by Google’s rough estimate of the shop’s customer service quality.

Google’s ranking is still based on link popularity, so negative links are still a vote of confidence.

There are only so many not-totally-weak signals out there, and Google’s not to blame for heavily relying on one of the better ones: links. I don’t believe they’ll lower the importance of links anytime soon, at least not significantly. And why should they? I surely don’t want that, I doubt it makes much sense, and I doubt Google could pull it off anyway.

As for the meaning of links, well, I just hope that Google doesn’t try to guess intentions out of plain A elements and their context. That’s a must-fail project. I’ve developed some faith in the sanity and smartness of Google’s engineers over the years. I hope they won’t disappoint me now.

Of course one can express a link’s intention in a machine-readable way. For example with a microformat like VoteLinks. Unfortunately, nobody cares enough to actually make use of it.
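For the record, a VoteLinks vote rides on the rev attribute of an ordinary link. A minimal sketch (the URIs are made-up placeholders):

```html
<!-- VoteLinks: rev carries the vote; the microformat defines the
     values vote-for, vote-abstain, and vote-against -->
<a rev="vote-for" href="http://example.com/great-shop">honest recommendation</a>
<a rev="vote-against" href="http://example.com/ripoff-shop">linked only to warn you</a>
```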

Google’s very own misconception, er, microformat rel-nofollow, is even less reliable. Imagine a dead tired and overworked algo in the cellar of building 43 trying to figure out whether a particular link’s rel="nofollow" was set

  • to mark a paid link
  • because the SEO next door said PageRank® hoarding is cool
  • because at the webmaster’s preferred hangout nofollow’ing links was the topic of week 53/2005
  • because the webmaster bought Google’s FUD and castrates all links except those leading to just in case Google could penalize him for a badass one
  • to express that the link’s destination is a 404 page, so that the “PageRank™ leak”, er, link isn’t worth any link juice
  • because the author thankfully links back to a leading Web resource in his industry that linked to him as an honest recommendation, but is afraid of a reciprocal link penalty
  • because the author agrees with the linked page’s message, but doesn’t like the foul language used over there
  • because the author disagrees with the discussed, and therefore linked, destination page
  • just because some crappy CMS condomizes every 3rd link automatically for reasons not known to man
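And here’s the kicker: every single case above serializes to byte-identical markup, so there’s literally nothing in the element itself for that poor algo to parse (the URI is a placeholder):

```html
<!-- a paid link? a 404? a recommendation with reservations?
     FUD-driven castration? a CMS condom? the markup won't tell: -->
<a href="http://example.com/somewhere" rel="nofollow">anchor text</a>
```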

Well, not even all Googlers like it. In fact, some teams decided to ignore it because of its weakness and widespread abuse.

The above said is only valid for links embedded in markup that allows machine-readable tagging of links. Even if such tags were reliable, they don’t cover all references, aka hyperlinks, on the Web. Think of PDF, Flash, some client-side scripting … and what about the gazillions of un-tagged links out there, put by folks who never heard of microformats?

Also, nobody links out anymore. We paste URIs into tiny textareas limited to 140 characters that don’t have room for meta data like microformats at all. And since Bing as well as Google use links in tweets for ranking purposes (Web search and news), how the fuck could even a smartass algo decide whether a tweet’s link points to crap or gold? Go figure.

And please don’t get me started on a possible use of sentiment analysis in rankings. To summarize, “FAIL” is printed in big bold letters all over Google’s (or any search engine for that matter) approach to rank search results by the quality of customer service based on signals scraped from unstructured data crawled on the Interwebs. So please, for the sake of my thin wallet, DEAR GOOGLE DON’T EVEN TRY IT! Thanks in advance.


Buy Free VIAGRA® Online! No Shipping Costs!

Your search for prescription free Viagra® ends here.

Original VIAGRA® pills ©

Pfizer just released the amazingly easy-to-understand Ultimate VIAGRA® DIY Guide (PDF, 30 illustrated pages). Look at the simple molecule on page one, cloning it is a breeze. Go brew your own! With a little help from your local alchemist, er, pharmacist, you can even make pills and paint them blue. Next get an empty packet and glue, then print out six copies of the image above. As a seasoned DIY professional you’ll certainly manage to fake Pfizer’s pill box. Congrats. You’re awesome.

As for the promise of “no shipping costs”: Well, I don’t ship Viagra®, so it wouldn’t be fair to charge you with UPS costs * 7.5 (I’m such an angel sometimes!), don’t you agree?

By the way, if the above said sounds too complicated, there’s a shortcut: click on the image.


Barry’s post about Free Viagra® Links inspired this pamphlet. Google’s [buy viagra online] SERP still is a mess. Obviously, Google doesn’t care about link spam influencing search results for money terms. Even low-life links can boost crap to the first SERP.

About time to change that!

Since Google doesn’t tidy up its Viagra® SERPs, let’s help ourselves to the search quality we deserve. Most probably you’ve spotted that this pamphlet was created to funnel (search) traffic to Pfizer’s Viagra® outlet. Therefore, if you’re into search quality, put up some links to this post. I promise there’s no better magic to create clean Viagra® SERPs at Google.

Dear reader, please copy the HTML code above and paste it into your signatures, blog posts, social media profiles … everywhere. If you keep your links up forever, Google’s SERPs will remain useful until the Internet vanishes.

Disclaimer: No, I can’t even spell ‘linkbait’. And no, I don’t promise not to replace this page with a sales pitch for some fake-ish Viagra®-clone once your link juice gained yours truly a top spot on said SERP. D’oh!


sway("Google Webmaster Happiness Index", $numStars, $rant);

Rumors about GWHI have been floating around for a while, but not even insiders were able to figure out the formula. As a matter of fact, not a single webmaster outside the Googleplex has ever seen it. I assume Barry’s guess is quite accurate: GWHI-meter

Anyway, I don’t care what it is, or how it works, as long as I can automate it. At first I ran a few tests by retweeting Google-related rants, and finally I developed sway(string destination, decimal numStars, string rant). For a while now I’ve been brain-dumping my rants to Google with a cron job. I had to kill the process a few times until I figured out that $numStars = -5 invokes a multiply by -1 error, but since Google has fixed this bug it runs smoothly, nine to five.
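In case you want to reproduce my results: here’s a rough (and obviously tongue-in-cheek) sketch of what sway() does, rewritten in Python since my cron job’s actual source stays in the cellar. The endpoint, field names, and the clamping workaround for the multiply by -1 error are all invented:

```python
def sway(destination, num_stars, rant):
    """Brain-dump a rant to the (hypothetical) GWHI endpoint.

    Stars get clamped to 0..5 -- negative values used to trigger
    the infamous multiply by -1 error before Google fixed it."""
    num_stars = max(0.0, min(5.0, float(num_stars)))
    return {
        "destination": destination,
        "stars": num_stars,
        "rant": rant.strip(),
    }

payload = sway("Google Webmaster Happiness Index", -5, "  your SERPs suck  ")
print(payload["stars"], payload["rant"])  # -> 0.0 your SERPs suck
```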

Yesterday I learned that Google launched a manual variant of my method for you mere mortals. I’m excited to share it: HotPot. Nope, it’s not a typo. Hot pot, as in bong. Officially addictive (source).

HotPot’s RTFM

Log in with your most disposable Google account, then load with your Web browser (API coming soon, so I was told, hence feel free to poll for an HTTP response code != 503).

The landing page’s search box explains itself: “Enter a category near a familiar neighborhood and city to start rating places you know. Ex. [restaurants Mountain View, CA]”. Of course localization is in place and working fine (you can change your current address in your Google Profile at any time by providing Checkout with another credit card).

As a webmaster eager to submit GWHI ratings, you’re not interested in over-priced food near the Googleplex, so you overwrite the default category: HotPot search for a search engine in Mountain View, CA

Press the Search button.

On the result page you’ll spot a box featuring Google, with a nice picture of the Googleplex in Mountain View. To convince you that indeed you’ve found the right place to drop your rants, “Google” is written in bold letters all over the building.

To its left, Google HotPot provides tips like

Get smarter SERPs.

Reading your mind we’ve figured out that a particular SERP ranking has pissed you off. You know, rankings can turn out good and bad, even yours. With you rating our rankings, we learn a bit more about your tastes, so you’ll get better SERPs the next time you search.

Next you click on any gray star at the bottom, and magically the promotional image turns into a text area.

Now tell the almighty Google why your pathetic site deserves better rankings than the popular brands with deep pockets you’re competing with on the Interwebs.

Don’t make the mistake of mentioning that you’re cheaper. Google will conclude that goes for your information architecture, crawlability, usability, image resolution and content quality, too. Better mimic an elitist specialist of all professions or so, and sell your stuff as a Swiss army knife.

Then press the Publish button, and revisit your SERP, again and again.

You’ll be quite astonished.

Google’s webmaster relations team will be quite happy.

I mean, can you think of a better way to turn yourself in with a selfish spam report than an ajax’ed Web form that even comes with stars?

Google’s HotPot is pretty cool, don’t you agree?


spying at:

1600 Amphitheatre Parkway

Mountain View,



How to spam the hell out of Google’s new source attribution meta elements

The moment you’ve read Google’s announcement and Matt’s question “What about spam?” you concluded “spamming it is a breeze”, right? You’re not alone.

Before we discuss how to abuse it, it might be a good idea to define it within its context, ok?


First of all, Google announced these meta tags on the official Google News blog for a reason. So when you plan to abuse them with your countless MFA proxies of Yahoo Answers, you most probably jumped on the wrong bandwagon. Google supports the meta elements below in Google News only.


The first new indexer hint is syndication-source. It’s meant to tell Google the permalink of a particular news story, hence the author and all the folks spreading the word are asked to use it to point to the one –and only one– URI considered the source:

<meta name="syndication-source" content="" />

The meta element above is for instances of the story served from

Don’t confuse it with the cross-domain rel-canonical link element. It’s not about canning duplicate content; it marks a particular story, regardless of whether it’s somewhat rewritten or just reprinted with a different headline. It tells Google News to use the original URI when the story can be crawled from different URIs on the author’s server, and when syndicated stories on other servers are so similar to the initial piece that Google News prefers to use the original (the latter is my educated guess).
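To make that concrete: if the original piece lives at one canonical address, every instance of the story carries the very same element (newspaper.example.com is a made-up stand-in):

```html
<!-- on the original article AND on every syndicated reprint of it -->
<meta name="syndication-source" content="http://www.newspaper.example.com/2010/11/story.html" />
```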


The second new indexer hint is original-source. It’s meant to tell Google the origin of the news itself, so the author/enterprise digging it out of the mud, as well as all the folks using it later on, are asked to declare who broke the story:

<meta name="original-source" content="" />

Say we’ve got two or more related news stories, like “Google fell from Mars” by and “Google landed in Mountain View” by, it makes sense for to publish a piece like “Google fell from Mars and landed in Mountain View”. Because is a serious newspaper, they credit their sources not only with a mention or even embedded links, they do it in machine-readable form, too:

<meta name="original-source" content="" />
<meta name="original-source" content="" />

It’s a matter of course that both and provide such an original-source meta element on their pages, in addition to the syndication-source meta element, both pointing to their very own coverage.

If a journalist grabbed his breaking news from a secondary source telling “CNN reported five minutes ago that Google’s mothership started from Venus, and the LA Times spotted it crashing on Jupiter”, he can’t be bothered with looking at the markup and locating those meta elements in the head section, he has a deadline for his piece “Why Web search left Planet Earth”. It’s just fine with Google News when he puts

<meta name="original-source" content="" />
<meta name="original-source" content="" />
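From the crawler’s side, harvesting these attributions is the easy part. A minimal sketch with Python’s stdlib HTML parser (the page snippet and all URIs are invented for illustration):

```python
from html.parser import HTMLParser

class SourceMetaParser(HTMLParser):
    """Collects Google News source-attribution meta elements from a page."""
    def __init__(self):
        super().__init__()
        self.sources = {"syndication-source": [], "original-source": []}

    def handle_starttag(self, tag, attrs):
        # self-closing <meta ... /> elements also end up here
        if tag != "meta":
            return
        attr = dict(attrs)
        name = (attr.get("name") or "").lower()
        if name in self.sources and attr.get("content"):
            self.sources[name].append(attr["content"])

page = '''<head>
<meta name="syndication-source" content="http://www.paper.example.com/crash.html" />
<meta name="original-source" content="http://www.cnn.example.com/venus.html" />
<meta name="original-source" content="http://www.latimes.example.com/jupiter.html" />
</head>'''

parser = SourceMetaParser()
parser.feed(page)
print(parser.sources["original-source"])
```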


As always, the most interesting stuff is hidden on a help page:

At this time, Google News will not make any changes to article ranking based on these tags.

If we detect that a site is using these metatags inaccurately (e.g., only to promote their own content), we’ll reduce the importance we assign to their metatags. And, as always, we reserve the right to remove a site from Google News if, for example, we determine it to be spammy.

As with any other publisher-supplied metadata, we will be taking steps to ensure the integrity and reliability of this information.

It’s a field test

We think it is a promising method for detecting originality among a diverse set of news articles, but we won’t know for sure until we’ve seen a lot of data. By releasing this tag, we’re asking publishers to participate in an experiment that we hope will improve Google News and, ultimately, online journalism. […] Eventually, if we believe they prove useful, these tags will be incorporated among the many other signals that go into ranking and grouping articles in Google News. For now, syndication-source will only be used to distinguish among groups of duplicate identical articles, while original-source is only being studied and will not factor into ranking. [emphasis mine]

Spam potential

Well, we do know that Google Web search has a spam problem, IOW even a few so-1999-webspam-tactics still work to some extent. So we tend to classify a vague threat like “If we find sites abusing these tags, we may […] remove [those] from Google News entirely” as FUD, and spam away. Common sense and experience tell us that a smart marketer will make money from everything spammable.

But: we’re not talking about Web search. Google News is a clearly laid out environment. There are only so many sites covered by Google News. Even if Google weren’t able to develop algos analyzing all source attribution attributes out there, they do have the resources to identify abuse using manpower alone. Most probably they will do both.

They clearly told us that they will compare those meta data to other signals. And those aren’t only very weak indicators like “timestamp first crawled” or “first heard of via pubsubhubbub”. It’s not that hard to isolate a particular news story, gather each occurrence as well as the source mentions within, and arrange those on a time line with clickable links for QC folks who most certainly will identify the actual source. Even a few spot tests daily will soon reveal the sites whose source attribution meta tags are questionable, or even spammy.
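The time line part really is trivial: sorting every known occurrence of a story by first-crawl timestamp already points a QC human at the likely source. A toy sketch (sites and timestamps are invented):

```python
from datetime import datetime

# Hypothetical crawl log for one isolated news story:
# (site, when the crawler first fetched this instance)
occurrences = [
    ("syndicator.example",  datetime(2010, 11, 16, 9, 30)),
    ("original.example",    datetime(2010, 11, 16, 8, 5)),
    ("scraper-mfa.example", datetime(2010, 11, 16, 11, 45)),
]

# earliest first-crawl timestamp wins the "probable source" crown
timeline = sorted(occurrences, key=lambda rec: rec[1])
probable_source = timeline[0][0]
print(probable_source)  # -> original.example
```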

If you’re still not convinced, fair enough. Go spam away. Once you’ve lost your entry on the whitelist, your free traffic from Google News, as well as from news-one-box results on conventional SERPs, is toast.

Last but not least, a fair warning

Now, if you still want to use source attribution meta elements on your non-newsworthy MFA sites to claim ownership of your scraped content, feel free to do so. Most probably Matt’s team will appreciate just another “I’m spamming Google” signal.

Not that reprinting scraped content is considered shady any more: even a former president does it shamelessly. It’s just the almighty Google in all of its evilness that penalizes you for considering all on-line content public domain.


While doing evil, reluctantly: Size, er trust matters.

These Interwebs are a mess. One can’t trust anyone. Especially not link drops, since Twitter decided to break the Web by raping all of its URIs. Twitter’s sloppy URI gangbang became the Web’s biggest and most disgusting clusterfuck in no time.

I still can’t agree to the friggin’ “N” in SNAFU when it comes to URI shortening. Every time I’m doing evil myself at sites like, I’m literally vomiting all over the ’net — in Swahili, er, base36 pidgin.

Besides the fact that each and every shortened URI manifests a felonious design flaw, the major concern is that most –if not all– URI shorteners will die before the last URI they’ve shortened is irrevocably dead. And yes, shit happens all day long — RIP et al.

Letting shit happen is by no means a dogma. We shouldn’t throw away common sense and best practices when it comes to URI management, which, besides avoiding as many redirects as possible, includes risk management:

What if the great chief of Libya all of a sudden decides that gazillions of redirects punting surfers to their desired smut aren’t exactly compatible with the Qur’an? All your URIs will be defunct overnight, and because you rely on traffic from places you’ve spammed with your shortened URIs, you’ll be forced to downgrade your expensive hosting plan to a shitty freehost account that displays huge al-Qaeda or even Weight-Watchers banners above the fold of your pathetic Web pages.

In related news, even the almighty Google just pestered the Interwebs with just another URI shortener’s website: It promises stability, security, and speed.

Well, on the day it launched, I broke it with recursive chains of redirects, and meanwhile creative folks like Dave Naylor perhaps wrote a guide on “hacking for fun and profit”. #abuse
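Any sane resolver has to defend against exactly such recursive chains. A sketch of the client-side defense, where the redirect map stands in for issuing HEAD requests and reading Location headers (all URIs are made up):

```python
def resolve(url, redirect_of, max_hops=10):
    """Follow a chain of short-URL redirects; bail out on loops
    and on chains longer than max_hops."""
    seen = set()
    while url in redirect_of:
        if url in seen or len(seen) >= max_hops:
            raise ValueError("redirect loop or excessive chain at " + url)
        seen.add(url)
        url = redirect_of[url]
    return url

# two shorteners pointing at each other -- a recursive chain of redirects
loop = {"http://sho.rt/a": "http://sho.rt/b", "http://sho.rt/b": "http://sho.rt/a"}
print(resolve("http://sho.rt/ok", {"http://sho.rt/ok": "http://example.com/"}))
# -> http://example.com/
```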

Of course there are bugs in a brand new product. But Google is a company iterating code way faster than most Internet companies, and due to their huge user base and continuous testing under operating conditions they’re aware of most of their bugs. They’ll fix them eventually, and soon, as promised, it will be “the stablest, most secure, and fastest URL shortener on the Web”.

So, just based on the size of Google’s infrastructure, it seems it is going to be the most reliable one out of all evil URI shorteners. Kinda queen of all royal PITAs. But is this a good enough reason to actually use it? Not quite enough, yet.

Go ask a Googler “Can you guarantee that it will outlive the Internet?”. I got answers like “I agree with your concern. I thought about it myself. But I’m confident Google will try its very best to preserve that”. From an engineer’s perspective, all of them agree with my statement “URI shortening totally sucks ass”. But IRL the Interwebs are flooded with crappy shortURLs, and that’s not acceptable. They figured that URI shortening can’t be eliminated, so it had to be enhanced by a more reliable procedure. Hence bright folks like Muthu Muthusrinivasan, Devin Mullins, Ben D’Angelo et al created it, with mixed feelings.

That’s why I recommend the lesser evil. Not because Google is huge, has the better infrastructure, picked a better domain, and the whole shebang. I do trust these software engineers, because they think and act like me. Plus, they’ve got the resources.

I’m going
I’ll dump etc.

Fineprint: However, I won’t throw away my very own URI shortener, because this evil piece of crap can do things the mainstream URI shorteners –including this one– are still dreaming of, like preventing search engine crawlers from spotting affiliate links and such stuff. Shortening links alone doesn’t equal cloaking fishy links professionally.


Is Google a search engine based in Mountain View, CA (California, USA)?

Is Google a search engine? Honest answer: Dunno. Google might be a search engine. It could be a fridge, too. Or a yet undiscovered dinosaur, a scary purple man-eater, a prescription drug, or my mom’s worst nightmare.

According to the search engine pre-installed in my browser, “Dogpile” is a search engine, and “Bing”, “Altavista”, even “Wikipedia”. Also a tool called “Google Custom Search” and a popular blog titled “Search Engine Land” are considered search engines, besides obscure Web sites like “Ask”, “DuckDuckGo”, “MetaCrawler” and “Yahoo”. Sorry, I can’t work with these suggestions.

So probably I need to perform a localized search to get an answer:

Is Google a search engine based in Mountain View, CA?

0.19 seconds later my browser’s search facility delivers the desired answer, instantly, at near lightning speed. The first result for [Is Google a search engine based in Mountain View, CA] lists an entity outing itself as “Google Mountain View”, the second result is “Googleplex”.

Wait … that doesn’t really answer my question. First, the search result page says “near Mountain View”, but I’ve asked for a search engine “in Mountain View”. Second, it doesn’t tell whether Google, or Googleplex for that matter, is a search engine or a Swiss army knife. Third, a suitable answer would be either “yes” or “no”, but certainly not “maybe something that matches a term or two found in your search query could be relevant, hence I throw enough gibberish –like 65536 bytes of bloated HTML/JS code and a map– your way to keep you quiet for a while”.

I’m depressed.

But I don’t give up that easily. The office next door belongs to a detective agency. The detective in charge is willing to provide a little neighborly help, so I send him over to Mountain View to investigate that dubious “Googleplex”. The guy appears to be smart, so maybe he can reveal whether this location hosts a search engine or not.

Indeed, he’s kinda genius. He managed to interview a GoogleGuy working in building 43, who tells him that #1 rankings for [search engine] can’t be guaranteed, but #1 rankings for long tail phrases like [Google is a search engine based in Mountain View, California, USA] can be achieved by nearly everyone. My private eye taped the conversation with a hidden camera and submitted it to America’s Funniest Home Videos:

One question remains: Why can’t a guy that knowledgeable make it happen that his employer appears as the first search result for, well, [search engine], or at least [search engine based in Mountain View, California]? Go figure …

Sorry Matt, couldn’t resist. ;-)





Mantra: There’s no such thing as wisdom of the crowd. Repeat. There’s no such thing as wisdom of the crowd! You’ve got a brain of your own for a reason.

THINK! There’s a huge difference between Thomas J. Watson’s campaign in the 1920s, which made IBM –as a company gathering intelligent individuals– think big and therefore get big at the end of the day, and the votable daily insanity pestering social media, forums, blogs, and whatnot, that we willingly and thoughtlessly consume in today’s information ghetto. The difference is that nowadays the crowd delegates its thinking to a few well paid ‘early adopters’, bullshitters/prophets, and other conmen who dominate the Interwebs just because they’re loud enough.

In fact, all the hypes celebrated by the dumb crowds distract and mislead you on a daily basis. As a webmaster you really shouldn’t care about ‘latest discoveries’ like LDA and ADL, or search engine FUD reiterated on webmaster hangouts as advice that ‘answers any question’, for that matter.

Not that you can’t get valuable advice out of search engine webmaster guidelines at all. The opposite is true, but you need to read the source, and judge yourself based on your skills and your experience, applying common sense.

Also, there’s other good webmastering advice out there, if you’re willing to seek(needle, haystack=wget('|sem|webdev|webdesign|webmastering|internet-marketing&num=n')). Don’t. Rely on yourself, and your capability to interpret facts, not on speculation spread by ‘authoritative’ sources.

It’s so much easier to join a huge community or two, and to believe/implement/adapt what’s ‘hot’, or what’s repeated often, respectively. Actually, that’s a crappy approach, because the very few small communities that openly discuss things that matter are out of reach for the average webmaster, chatting and networking protected by /var/inner-circle/private/.htpasswd.

Here are the components of a public webmaster/SEO/IM community, listed by revenues in ascending order (that’s -1 before zero and 1), which equals alleged trustworthiness/importance in descending order:

  • Many fanboys (m) and groupies (f) who don’t have a clue, but vote up everything that an entity listed below suggests. They will even speak out at other, alien places if their idols (see below) get outed for bullshitting anywhere. They go by the title of junior members.
  • A few semi-professional whores who operate blogs/forums/aff-programs themselves, and manage to steal a tiny portion of the floating popularity to feed their pathetic outlets. Those are considered senior members.
  • A handful of shiny rockstars who silently suck up to their owner, er, master (see below). They may or may not participate monetarily, and have the power of moderators.
  • One single guy who laughs all the way to the bank.

Looked at in full daylight: when you join a crowd you become cannon fodder, and your financial misery is considered collateral damage. Lurking (silently listening to crowds) is not exactly cheaper, and certainly doesn’t make you an unsung hero, because you’ll totally share the crowd’s misery. Your balance sheet doesn’t lie, usually.

Reboot your brain before you jump on popular bandwagons. Don’t listen to advice that’s freely available, not even mine (WTF, you know what I mean). If somebody discusses ethics (hat colors), then run for your life, because ethics will kill your revenue. When it comes to SEO, it helps to evaluate (search engine/any) advice under the premise “what would I do, and what could I achieve (technically), if I ran this SE?”.

It’s all about you. Don’t care about the well-being of search engines that suffer from WebSpam, or the healthiness of affiliate programs that make shitloads of green out of it, but tell you ‘thou shalt not spam’ because they sneakily dominate your SERPs with their own graffiti. WebSpam is what gets you banned, everything else just makes you money. Test for yourself, and don’t take advice without proof that you can easily replicate on your very own servers.

Do not risk your earnings –that is your existence!– with strategies and tactics you can’t handle on the long haul, just because some selfish moron tells you so.


WTF have Google, Bing, and Yahoo cooking?

Folks, I’ve got good news. As a matter of fact, they’re so good that they will revolutionize SEO. A little bird told me that the major search engines secretly teamed up to solve the problem of context and meaning as a ranking factor.

They’ve invented a new Web standard that allows content producers to steer search engine ranking algos. Its code name is ADL, probably standing for Aided Derivative Latch, a smart technology based on the groundwork of addressing tidbits of information developed by Hollerith and von Neumann decades ago.

According to my sources, ADL will be launched next month at SMX East in New York City. In order to get you guys primed in a timely manner, here I’m going to leak the specs:

WTF - The official SEO standard, supported by Google, Yahoo & Bing

Word Targeting Funnel (WTF) is a set of indexer directives that get applied to Web resources as meta data. WTF comes with a few subsets for special use cases, details below. Here’s an example:

<meta name="WTF" content="document context" href="" />

This directive tells search engines that the content of the page is closely related to the resource supplied in the META element’s HREF attribute.

As you’ve certainly noticed, you can target a specific SERP, too. That’s somewhat complicated, because the engineers couldn’t agree which search engine should define a document’s search query context. Fortunately, they finally found this compromise:

<meta name="WTF" content="document context" href=" || ||" />

As far as I know, this will even work if you change the order of URIs. That is, if you’re a Bing fanboy, you can mention Bing before Google and Yahoo.

A more practical example, taken from the sales pitch of a Viagra affiliate who participated in the BETA test, leads us to the first subset:

Subset WTFm — Word Targeting Funnel for medical terms

<meta name="WTF" content="document context" href="" />

This directive will convince search engines that the offered product indeed is not a clone like Cialis.

Subset WTFa — Word Targeting Funnel for acronyms

<meta name="WTFa" content="WTF" href="" />

When a Web resource contains the acronym “WTF”, search engines will link it to the World Taekwondo Federation, not to Your Ranting and Debating Resource at

Subset WTFo — Word Targeting Funnel for offensive language

<meta name="WTFo" content="meaning of terms" href="" />

If a search engine doesn’t know the meaning of terms I really can’t quote here, it will look up the Internet Slang Directory. You can define alternatives, though:

<meta name="WTFo" content="alternate meaning of terms" href="" />

WTF, even more?

Of course we’ve got more subsets, like WTFi for instant searches. Because I appreciate unfair advantages, I won’t reveal more. Just one more goody: it works for PDF, Flash content and heavily ajax’ed stuff, too.

This is the first new indexer directive that search engines introduce with support for both META elements and HTTP headers. Like the X-Robots-Tag, you can use an X-WTF-Tag HTTP header:
X-WTF-Tag: Name: WTFb, Content: SEO Bullshit, Href:
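Playing along with the spec: a minimal PHP sketch of sending that header. The `buildWtfTagHeader()` helper and all the values in it (the WTFb name, the example.com URI) are made up for illustration.

```php
<?php
// Hypothetical helper: assemble an X-WTF-Tag header line from its parts.
function buildWtfTagHeader($name, $content, $href) {
    return "X-WTF-Tag: Name: $name, Content: $content, Href: $href";
}

// Like any HTTP header, send it before emitting any body content.
header(buildWtfTagHeader('WTFb', 'SEO Bullshit', 'http://example.com/'));
```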



As for the little bird, well, that’s a lie. Sorry. There’s no such bird. It’s bugs I left last time I visited Google’s labs:
<meta name="WTF" content="bug,bugs,bird,birds" href="" />


OMFG - Google sends porn punters to my website …

In today’s GWC doctor’s office, the webmaster of an innocent orphanage website asks Google’s Matt Cutts:

[My site] is showing up for searches on ‘girls in bathrooms’ because they have an article about renovating the girls bathroom! What do you think of the idea if a negative keyword meta tag to block irrelevant searches? [sic!]

Well, we don’t know what the friendly guy from Google recommends …

… but my dear readers do know that my bullshit detector, faced with such a moronic idea, shouts out in agony:

There’s no such thing as bad traffic, just weak monetizing!

Ok, Ok, Ok … every now and then each and every webmaster out there suffers from misled search engine ranking algos that send shitloads of totally unrelated search traffic. For example, when you search for [how to fuck a click], you won’t expect that Google considers this geeky pamphlet the very best search result. Of course Google should’ve detected your NSFW typo. Shit happens. Deal with it.

On the other hand, search traffic is free, so there’s no valid reason to complain. Instead of asking Google for a minus-keyword REP directive, one should think of clever ways to monetize unrelated traffic without wasting bandwidth.

You want to monetize irrelevant traffic from searches for smut in a way that nobody can associate your site with porn. That’s doable. Here’s how it works:

Make risk-free beer money from porn traffic with a non-adult site

Copy those slimy phrases from your keyword stats and paste them into Google’s search box. Once you find an adult site that seems to match the smut surfer’s needs better than your site, click on the search result, and on the landing page search for a “webmasters” link that points to their affiliate program. Sign up and save your customized affiliate link.

Next add some PHP code to your scripts. Make absolutely sure it gets executed before you output any other content, even whitespace:

<?php
// Look up a better matching offsite URI for the visitor's search query.
$betterMatch = getOffsiteUri();
if ($betterMatch) {
    // 307: a temporary redirect that user agents don't cache.
    header("HTTP/1.1 307 Here's your smut", TRUE, 307);
    header("Location: $betterMatch");
    exit;
}
?>

Refine the simplified code above. Use a database table to store the mappings …
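One way such a lookup might work, sketched with explicit parameters for testability (the snippet above calls it without arguments; this is an assumed shape, not the author’s actual implementation). The mapping array and all URIs are hypothetical; in production you’d read the mappings from a database table and extract the search phrase from the SERP referrer.

```php
<?php
// Sketch of a getOffsiteUri() lookup: map incoming search phrases
// (copied from your keyword stats) to better-converting affiliate URIs.
function getOffsiteUri($searchPhrase, array $map) {
    foreach ($map as $phrase => $affiliateUri) {
        // Case-insensitive substring match; first hit wins.
        if (stripos($searchPhrase, $phrase) !== false) {
            return $affiliateUri;
        }
    }
    return null; // no better match: serve your own page as usual
}

// Hypothetical mapping -- keys come from your keyword stats.
$map = array(
    'girls in bathrooms' => 'http://affiliate.example.com/?ref=you',
);
$betterMatch = getOffsiteUri('girls in bathrooms pics', $map);
```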

Now a surfer coming from a SERP like

will get redirected to

You’re using a 307 redirect because it’s not cached by a user agent, so that when you later find a porn site that converts your traffic better, you can redirect visitors to another URI.

As you probably know, search engines don’t approve of duplicate content. Hence it wouldn’t be a bright idea to put up x-rated stuff (all smut is duplicate content by design) onto your site to fulfil the misled searcher’s needs.

Of course you can use the technique outlined above to protect searchers from landing on your contact/privacy page, too, when in fact your signup page is their desired destination.

Shiny whitehat disclaimer

If you’re afraid that the almighty Google might punish you for your well-meant attempt to fix its bugs, relax.

A search engine that misinterprets your content so badly has failed miserably. Your bugfix actually improves their search quality. Search engines can’t force you to report such flaws; they just kindly ask for voluntary feedback.

If search engines dislike smart websites that find related content on the Interwebs whenever the search engine delivers shitty search results, they can act themselves. Instead of penalizing webmasters who react to flaws in their algos, they’re well advised to adjust their scoring. I mean, if they stop sending smut traffic to non-porn sites, their users won’t get redirected any longer. It’s that simple.

