Bizarre facets of war in social media

As a matter of fact, wars happen in social media, too. I don’t mean flame wars. I don’t refer to Arab dictators who, closely following the #ArabTyrantManual, shut down Facebook, Twitter, or even the whole friggin Interwebs during uprisings. I admit, those scumbags are somewhat creative. For example, Syria’s junior dictator Bashar al-Assad, who launched swarms of hashtag spambots diluting every piece of information leaking out from cyber activists, while reforming his people with T-72 shellings and machine gun live rounds. With a little help from a fellow assclown based in Iran, he even managed to jam sat phones, cutting off the opposition’s lifeline to YouTube.

So when even -alleged- ‘third world’ autocrats utilize highly sophisticated techniques to game social media in their war on their own people, we can safely assume that there’s way more interesting stuff to know about the role of social media in today’s wars. You’ve read the headlines announcing cyber squads and such. Of course that info was outdated for decades before it hit the mainstream press. Also, the average (read: IT-wise clueless) journalist blathers about DoS attacks and such, usually ignoring the more subtle aspects of cyber war. I’m not exactly a fan of rehashed news, so I refuse to discuss the obvious.

Recently, I’ve stumbled upon a pretty sneaky cyber war tactic. Well thought out, although I can’t tell how effective it actually is. The setup is kinda minimalistic: one Facebook account, and a few hundred (Ok, as of today that’s 1.6k) blog comments written by PsyWarriors:

In North Africa, where peaceful Libyans turned freedom fighters are struggling in a bloody conflict with a ruthless regime that performs atrocities on a daily basis, NATO somewhat acts as the ‘Free Libyan Air Force’, officially just enforcing UNSC Resolution 1973. Nothing wrong with that, since -despite some Gaddafi troops having defected to the opposition- the so-called ‘rebels’ are civilians defending themselves, their families, neighbors, and even countless foreigners who weren’t able to flee before Gaddafi’s henchmen crawled all over the country in their brutal war on Libya’s population.

Herein lies the problem. We’ve got epic amateurs barely able to handle an AK-47 on the ground, and professionals in the air. Both fighting the mad dog’s professional forces without direct lines of communication to each other. The rag-tag freedom fighters lacked structure, command, communication, experience, strategy and everything else with regard to warfare. After the initial strikes by American, British and French armed forces, NATO joined the battlefield with a plan. Its step-by-step execution wasn’t exactly compatible with the high expectations of the then still amateurish freedom fighters, who even suffered from occasional friendly fire after carelessly celebrating with AA tracer fire, and cruising through the desert in seized tanks, towards liberated towns.

Of course the tourists carrying highly sophisticated gadgets in their huge olive green bags, brought in via tour operator helicopters from their shiny gray yachts sailing near the Libyan coastline, sorted out some of those misunderstandings. But since the Libyan freedom fighters totally lacked a chain of command, it didn’t help much that the few savvy leaders who actually talked to these tourists got enlightened, because the rag-tag troops consisting of untrained citizens chaotically advancing and retreating in the desert were out of their reach. Qatari military advisers on the ground, helping Libyan citizens carrying seized weapons get into shape, as well as the very few consultants and military advisers from the UK, France, and Italy who arrived later on, had just started to train freedom fighters.

Also, the message had to be carried to the Libyan people, and to Libyans in the diaspora as well, without revealing too much sensitive info that Gaddafi’s loyalists could find interesting. All that with most of the recipients on the ground cut off from all their information channels besides Libya State TV and a few other satellite channels, because cell phones and ISPs were jammed by the government, and land lines were insecure … a dilemma. The National Transitional Council (NTC) in Benghazi was the sole institution able to reach out to the people inside Libya.

Al Jazeera’s Libya Live Blog (its URI changes often, so please click through from the index page) has been heavily trafficked since the uprising began (on 17 February, 2011), attracting gazillions of page views and receiving thousands of comments daily. And here we introduce Gerhard Heinz, perhaps a former NVA pilot or not, who frequently updates the audience with strategic as well as tactical information, written in very plain English with a heavy East German accent. Like: ‘a good tip for tank comanders in tripoli stay away from your tanks ,conkret in the air’ (referring to smart, that is GPS- and laser-guided, 660-pound concrete bombs used by coalition fighter jets to destroy tanks in residential areas without much collateral damage).

He delivers spot-on reports of NATO sorties as well as clashes on the ground as they happen, allegedly based on timely sat images, SIGINT, HUMINT and whatnot, long before they appear in the (western) press after NATO announcements. Most of his stuff gets confirmed by other sources later on. He even makes predictions that come true, and not all of those are easily guessable and likely to happen. He explains NATO tactics in layman’s terms, tells why NATO requested that the freedom fighters not advance towards Brega for weeks (to create a sneaky trap for an elite brigade and lots of reinforcements from Sirte), and so on. When NATO is dead sure that particular pro-Gaddafi troops can’t communicate after air strikes on command, control and communications infrastructure, so no warning can reach them in time, Gerhard Heinz addresses those, advising them to defect, or at least to run and hide quickly before ‘fast flying silver birds lose some eggs’ above their positions.

Obviously all that is insider knowledge, scraped from NATO and NTC/FF sources. Since NATO doesn’t act on this ‘leak’ they must be aware of, I’m jumping to the conclusion that Gerhard Heinz is a weapon of mass disinformation, and mass education as well. It’s not him alone, by the way, but he’s the most prominent case (Gerhard Heinz has a large fan club) I’ve spotted so far. He informs and educates Libyans hungry for every tiny bit of reliable info with regard to the conflict, who scan Al Jazeera’s website for updates 24/7, then spread the word through all channels available, including social media.

I may be wrong in details, because I’m by no means an expert when it comes to all the military stuff. But I know that an organization like NATO has the capability to deal with sensitive information leaking out to the public domain for weeks. If it’s not happening on purpose, they just lost my respect.

I do think that this dude mixes in personal information that might be true, for example his military background. Also, his strong opinions (for example about a weak German government and its cowardly FM who cares more for his personal political affairs than for the Libyan people, and the widespread opposition to the official politics within the German armed forces) are believable. At least it sounds authentic and consistent throughout more than 1,600 blog comments. And that’s doable by a PsyOps team as well, considering that Gerhard Heinz posts at times when he should be sleeping. He openly admits that he’s backed by staff gathering and processing facts from various sources, but denies all ties to NATO.

So, maybe, I should leave it at that with the words of a blog commenter on Al Jazeera’s website, who said:

@Gerhard Heinz
You have earned a lot of rep. back for Germany, they really owe you some thanks for your work and dedication in this.
It would be interesting to have an article in german newspapers about what you did, when all this is over, and more of it can be told.
For now its kind of a mystery (at least to me), what a german is doing in the middle of all this, and how he can be so well informed. I am very curious to hear how you did it.
Lots of respect from me.

Just make sure, dear reader, that you keep your natural scepticism when you read about a war -regardless of where, and that includes the mainstream press as well as social media. There might be an agenda behind every sentence.




Get IE9 today! Free download - start surfing fast and safe, instantly!

Days before Microsoft releases its new-ish Internet Explorer 9 (IE9), you can get your free copy of a state-of-the-art Web browser here:

GET IE9

The Internet says thanks to James Groome from London, UK, for an amazingly short IE9 download URI: GetIE9.com. Of course, you can still download the best and fastest Web browser out there from its original, longish, download URI.

Go get your new Web browser today, to start surfing safe and fast, instantly. Never worry about Web browser updates any more, because your new Web browser updates itself when necessary.

This page is best viewed with Chrome or Safari. You may have spotted that getie9.com doesn’t really lead to anything like Internet Explorer 9. #GeekHumor




Get the Google cop outta my shopping cart!

So now Google ranks my shopping SERPs by its opinion of customer service quality?

Do not want!

I’m perfectly satisfied with shopping search results ordered by relevancy and (link) popularity. I do not want Google to decide where I have to buy my stuff, just because an assclown treating his customers like shit got coverage in the NYT.

If I’m old enough to have free access to the Internet and a credit card, then I’m capable of checking out a Web shop before I buy. I don’t need to be extremely Web savvy to fire up a search for [XXX sucks] before I click on “add to cart”. Hey, even my 13yo son applies way more sophisticated methods. Google cannot and never will be able to create anything more reliable than my built-in bullshit-detector.

Of course, it’s Google’s search engine. Matt’s right when he states “two different court cases have held that our search results are our opinion and protected under 1st amendment”. The problem is, sometimes I disagree with Google’s opinions.

Expressing an opinion about a site’s customer service by not showing it on the SERPs that more than 60% of this planet’s population use to find stuff is a slippery slope. A very slippery slope. It means that for example I cannot buy a pair of shoes for $40 (time of delivery 10 days, free shipping), because Google only points me to shops that sell the same pair of shoes for $100 (plus fedex overnight fees). Since when did Google’s mission statement change to “organize the world’s shopping expeditions”? Maybe I didn’t get an important memo.

Not only that. Google is well known for producing heavy collateral damage when applying changes to commercial rankings. A simple software glitch could bury the best deals on the Web, or ruin totally legit businesses suffering from fraudulent review spam spread by their competitors.

And finally, cross your heart, do you trust a search engine that far? Do you really expect Google to sort out the Web for you, not even asking how much of Google’s opinion you want to get applied when it comes to judging what appears on your personal search results? Not that Google will ever implement a slider where you can tell how much of your common sense you’re willing to invest vs. Google’s choice of goog, er, good customer service …

Well, I could live with a warning put as an anchor text like “show what boatloads of ripped-off customers told Googlebot about XXX” or so, but I do want to get the whole picture, uncensored.

End of rant.

Let’s look at the algo change from a technical point of view:

Credit where credit is due: developing and deploying a filter that catches a fraudulent Web shop “gaming Google” out of billions of indexed pages within a few days is not trivial (which translates to ‘awesome job’, coming from a geek).

It’s not so astonishing that this filter also picked 100 clones of the jerk mentioned by the New York Times for Google’s newish shitlist. Of course it didn’t catch just another fishy site with the same SOP, owned by the same guy. That makes it kind of a hand job, just executed by an algorithm. Explained in my Twitter stream: “@DaveWiner I read that Google post as ‘We realize there is a problem that we can’t solve yet. We have a short term fix for this jerk.’”, or “so yeah, I stand by my statement: it’s a hand job to manipulate the press and keep the stock from moving.”

And that’s good news, at least for today’s shape of Google’s Web search. It means that Google does not yet rank the results of each and every search with commercial intent by Google’s rough estimate of the shop’s customer service quality.

Google’s ranking is still based on link popularity, so negative links are still a vote of confidence.

There are only so many not-totally-weak signals out there, and Google’s not to blame for heavily relying on one of the better ones: links. I don’t believe they’ll lower the importance of links anytime soon, at least not significantly. And why should they? I surely don’t want that. And I doubt it makes much sense, plus I doubt that Google can do that.

As for the meaning of links, well, I just hope that Google doesn’t try to guess intentions out of plain A elements and their context. That’s a must-fail project. I’ve developed some faith in the sanity and smartness of Google’s engineers over the years. I hope they won’t disappoint me now.

Of course one can express a link’s intention in a machine-readable way. For example with a microformat like VoteLinks. Unfortunately, nobody cares enough to actually make use of it.
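For illustration, here’s a minimal sketch of how a crawler could consume VoteLinks, reading the rev attribute the microformat specifies (the class name and example URLs are mine, not from any real crawler):

```python
from html.parser import HTMLParser

class VoteLinkParser(HTMLParser):
    """Collects (href, vote) pairs from VoteLinks-annotated anchors."""
    VOTES = {"vote-for", "vote-against", "vote-abstain"}

    def __init__(self):
        super().__init__()
        self.votes = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        # VoteLinks puts the voting semantics into the rev attribute
        for token in a.get("rev", "").split():
            if token in self.VOTES:
                self.votes.append((a.get("href"), token))

html = (
    '<p><a rev="vote-for" href="http://example.com/good">recommended</a> '
    '<a rev="vote-against" href="http://example.com/scam">rip-off</a></p>'
)
parser = VoteLinkParser()
parser.feed(html)
print(parser.votes)
```

A search engine consuming this could count a vote-against link as a vote of no confidence instead of a PageRank boost. Which is exactly why nobody uses it.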

Google’s very own misconception, er, microformat rel-nofollow, is even less reliable. Imagine a dead tired and overworked algo in the cellar of building 43 trying to figure out whether a particular link’s rel=”nofollow” was set

  • to mark a paid link
  • because the SEO next door said PageRank® hoarding is cool
  • because at the webmaster’s preferred hangout nofollow’ing links was the topic of week 53/2005
  • because the webmaster bought Google’s FUD and castrates all links except those leading to google.com just in case Google could penalize him for a badass one
  • to express that the link’s destination is a 404 page, so that the “PageRank™ leak”, er, link isn’t worth any link juice
  • because the author thankfully links back to a leading Web resource in his industry that linked to him as an honest recommendation, but is afraid of a reciprocal link penalty
  • because the author agrees with the linked page’s message, but doesn’t like the foul language used over there
  • because the author disagrees with the discussed, and therefore linked, destination page
  • just because some crappy CMS condomizes every 3rd link automatically for reasons not known to man
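The point of that list in one line of code: every single one of those intentions serializes to the exact same markup, so there is literally nothing for an algo to learn from. A trivial demonstration (variable names are mine):

```python
# Different intentions behind rel="nofollow", identical byte sequences.
link = '<a href="http://example.com/" rel="nofollow">whatever</a>'
paid_link = sponsored_post = broken_destination = scared_webmaster = cms_default = link

# All five "intents" collapse into one distinct string.
distinct = {paid_link, sponsored_post, broken_destination, scared_webmaster, cms_default}
print(len(distinct))  # 1
```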

Well, not even all Googlers like it. In fact, some teams decided to ignore it because of its weakness and widespread abuse.

All of the above is only valid for links embedded in markup that allows machine-readable tagging of links. Even if such tags were reliable, they don’t cover all references, aka hyperlinks, on the Web. Think of PDF, Flash, some client-side scripting, … and what about the gazillions of un-tagged links out there, put by folks who never heard of microformats?

Also, nobody links out anymore. We paste URIs into tiny textareas limited to 140 characters that don’t have room for meta data like microformats at all. And since Bing as well as Google use links in tweets for ranking purposes (Web search and news), how the fuck could even a smartass algo decide whether a tweet’s link points to crap or gold? Go figure.

And please don’t get me started on a possible use of sentiment analysis in rankings. To summarize, “FAIL” is printed in big bold letters all over Google’s (or any search engine for that matter) approach to rank search results by the quality of customer service based on signals scraped from unstructured data crawled on the Interwebs. So please, for the sake of my thin wallet, DEAR GOOGLE DON’T EVEN TRY IT! Thanks in advance.




Buy Free VIAGRA® Online! No Shipping Costs!

Your search for prescription free Viagra® ends here.

Original VIAGRA® pills © viagra.com

Pfizer just released the amazingly easy-to-understand Ultimate VIAGRA® DIY Guide (PDF, 30 illustrated pages). Look at the simple molecule on page one; cloning it is a breeze. Go brew your own! With a little help from your local alchemist, er, pharmacist, you can even make pills and paint them blue. Next get an empty packet and glue, then print out six copies of the image above. As a seasoned DIY professional you’ll certainly manage to fake Pfizer’s pill box. Congrats. You’re awesome.

As for the promise of “no shipping costs”: Well, I don’t ship Viagra®, so it wouldn’t be fair to charge you with UPS costs * 7.5 (I’m such an angel sometimes!), don’t you agree?

By the way, if the above said sounds too complicated, there’s a shortcut: click on the image.

Seriously

Barry’s post about Free Viagra® Links inspired this pamphlet. Google’s [buy viagra online] SERP still is a mess. Obviously, Google doesn’t care about link spam influencing search results for money terms. Even low-life links can boost crap to the first SERP.

About time to change that!

Since Google doesn’t tidy up its Viagra® SERPs, let’s help ourselves to the search quality we deserve. Most probably you’ve spotted that this pamphlet was created to funnel (search) traffic to Pfizer’s Viagra® outlet. Therefore, if you’re into search quality, put up some links to this post. I promise there’s no better magic to create clean Viagra® SERPs at Google.

Dear reader, please copy the HTML code above and paste it into your signatures, blog posts, social media profiles … everywhere. If you keep your links up forever, Google’s SERPs will remain useful until the Internet vanishes.

Disclaimer: No, I can’t even spiel ‘linkbait’. And no, I don’t promise not to replace this page with a sales pitch for some fake-ish Viagra®-clone once your link juice gained yours truly a top spot on said SERP. D’oh!




sway("Google Webmaster Happiness Index", $numStars, $rant);

Rumors about GWHI have been floating around for a while, but not even insiders were able to figure out the formula. As a matter of fact, not a single webmaster outside the Googleplex has ever seen it. I assume Barry’s guess is quite accurate.

Anyway, I don’t care what it is, or how it works, as long as I can automate it. At first I ran a few tests by retweeting Google-related rants, and finally I developed sway(string destination, decimal numStars, string rant). For a while now I’ve been brain-dumping my rants to Google with a cron job. I had to kill the process a few times until I figured out that $numStars = -5 invokes a multiply by -1 error, but since Google has fixed this bug it runs smoothly, nine to five.

Yesterday I learned that Google launched a manual variant of my method for you mere mortals. I’m excited to share it: HotPot. Nope, it’s not a typo. Hot pot, as in bong. Officially addictive (source).

HotPot’s RTFM

Log in with your most disposable Google account, then load http://google.com/hotpot/onboard with your Web browser (API coming soon, so I was told, hence feel free to poll https://google.com/hotpot/rest/sway for an HTTP response code != 503).

The landing page’s search box explains itself: “Enter a category near a familiar neighborhood and city to start rating places you know. Ex. [restaurants Mountain View, CA]”. Of course localization is in place and working fine (you can change your current address in your Google Profile at any time by providing Checkout with another credit card).

As a webmaster eager to submit GWHI ratings, you’re not interested in over-priced food near the Googleplex, so you overwrite the default category with a search for a search engine in Mountain View, CA.

Press the Search button.

On the result page you’ll spot a box featuring Google, with a nice picture of the Googleplex in Mountain View. To convince you that indeed you’ve found the right place to drop your rants, “Google” is written in bold letters all over the building.

To its left, Google HotPot provides tips like

Get smarter SERPs.

Reading your mind we’ve figured out that a particular SERP ranking has pissed you off. You know, rankings can turn out good and bad, even yours. With you rating our rankings, we learn a bit more about your tastes, so you’ll get better SERPs the next time you search.

Next you click on any gray star at the bottom, and magically the promotional image turns into a text area.

Now tell the almighty Google why your pathetic site deserves better rankings than the popular brands with deep pockets you’re competing with on the Interwebs.

Don’t make the mistake of mentioning that you’re cheaper. Google will conclude that goes for your information architecture, crawlability, usability, image resolution and content quality, too. Better mimic an elitist specialist of all professions or something, and sell your stuff as a Swiss Army knife.

Then press the Publish button, and revisit your SERP, again and again.

You’ll be quite astonished.

Google’s webmaster relations team will be quite happy.

I mean, can you think of a better way to turn yourself in than a selfish spam report submitted via an ajax’ed Web form that even comes with stars?

Google’s HotPot is pretty cool, don’t you agree?


Sebastian

spying at:

1600 Amphitheatre Parkway

Mountain View,
CA
94043

USA




How to spam the hell out of Google’s new source attribution meta elements

The moment you’ve read Google’s announcement and Matt’s question “What about spam?” you concluded “spamming it is a breeze”, right? You’re not alone.

Before we discuss how to abuse it, it might be a good idea to define it within its context, ok?

Playground

First of all, Google announced these meta tags on the official Google News blog for a reason. So when you plan to abuse them with your countless MFA proxies of Yahoo Answers, you most probably jumped on the wrong bandwagon. Google supports the meta elements below in Google News only.

syndication-source

The first new indexer hint is syndication-source. It’s meant to tell Google the permalink of a particular news story, hence the author and all the folks spreading the word are asked to use it to point to the one -and only one- URI considered the source:

<meta name="syndication-source" content="http://outerspace.com/news/ubercool-geeks-launched-google-hotpot.html" />

The meta element above is for instances of the story served from
http://outerspace.com/breaking/page1.html
http://outerspace.com/yyyy-mm-dd/page2.html
http://outerspace.com/news/aliens-appreciate-google-hotpot.html
http://outerspace.com/news/ubercool-geeks-launched-google-hotpot.html
http://newspaper.com/main/breaking.html
http://tabloid.tv/rehashed/from/rss/hot:alien-pot-in-your-bong.html

Don’t confuse it with the cross-domain rel-canonical link element. It’s not about canonicalizing duplicate content; it marks a particular story, regardless of whether it’s somewhat rewritten or just reprinted with a different headline. It tells Google News to use the original URI when the story can be crawled from different URIs on the author’s server, and when syndicated stories on other servers are so similar to the initial piece that Google News prefers to use the original (the latter is my educated guess).
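A consumer’s-eye sketch of the mechanics, in Python: every crawlable copy of the story declares the same permalink, so grouping by that declared source is trivial. (The parser class, helper function, and page snippets are mine; Google’s actual pipeline is obviously not public.)

```python
from html.parser import HTMLParser

class SyndicationSourceParser(HTMLParser):
    """Pulls the syndication-source meta element out of a page's head."""
    def __init__(self):
        super().__init__()
        self.source = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "syndication-source":
            self.source = a.get("content")

def declared_source(page_html):
    parser = SyndicationSourceParser()
    parser.feed(page_html)
    return parser.source

# Two crawlable copies of the same story, both declaring the one true permalink
story = "http://outerspace.com/news/ubercool-geeks-launched-google-hotpot.html"
copy1 = '<head><meta name="syndication-source" content="%s" /></head>' % story
copy2 = '<head><title>rehash</title><meta name="syndication-source" content="%s" /></head>' % story
print(declared_source(copy1) == declared_source(copy2) == story)  # True
```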

original-source

The second new indexer hint is original-source. It’s meant to tell Google the origin of the news itself, so the author/enterprise digging it out of the mud, as well as all the folks using it later on, are asked to declare who broke the story:

<meta name="original-source" content="http://outerspace.com/news/ubercool-geeks-launched-google-hotpot.html" />

Say we’ve got two or more related news stories, like “Google fell from Mars” by cnn.com and “Google landed in Mountain View” by sfgate.com; it makes sense for latimes.com to publish a piece like “Google fell from Mars and landed in Mountain View”. Because latimes.com is a serious newspaper, they credit their sources not only with a mention or even embedded links, they do it machine-readable, too:

<meta name="original-source" content="http://cnn.com/google-fell-from-mars.html" />
<meta name="original-source" content="http://sfgate.com/google-landed-in-mountain-view.html" />

It’s a matter of course that both cnn.com and sfgate.com provide such an original-source meta element on their pages, in addition to the syndication-source meta element, both pointing to their very own coverage.

If a journalist grabbed his breaking news from a secondary source saying “CNN reported five minutes ago that Google’s mothership started from Venus, and SFGate spotted it crashing on Jupiter”, he can’t be bothered with looking at the markup and locating those meta elements in the head section; he has a deadline for his piece “Why Web search left Planet Earth”. It’s just fine with Google News when he puts

<meta name="original-source" content="http://cnn.com/" />
<meta name="original-source" content="http://sfgate.com/" />

Fine-prints

As always, the most interesting stuff is hidden on a help page:

At this time, Google News will not make any changes to article ranking based on this tags.

If we detect that a site is using these metatags inaccurately (e.g., only to promote their own content), we’ll reduce the importance we assign to their metatags. And, as always, we reserve the right to remove a site from Google News if, for example, we determine it to be spammy.

As with any other publisher-supplied metadata, we will be taking steps to ensure the integrity and reliability of this information.

It’s a field test

We think it is a promising method for detecting originality among a diverse set of news articles, but we won’t know for sure until we’ve seen a lot of data. By releasing this tag, we’re asking publishers to participate in an experiment that we hope will improve Google News and, ultimately, online journalism. […] Eventually, if we believe they prove useful, these tags will be incorporated among the many other signals that go into ranking and grouping articles in Google News. For now, syndication-source will only be used to distinguish among groups of duplicate identical articles, while original-source is only being studied and will not factor into ranking. [emphasis mine]

Spam potential

Well, we do know that Google Web search has a spam problem; IOW, even a few so-1999-webspam tactics still work to some extent. So we tend to classify a vague threat like “If we find sites abusing these tags, we may […] remove [those] from Google News entirely” as FUD, and spam away. Common sense and experience tell us that a smart marketer will make money from everything spammable.

But: we’re not talking about Web search. Google News is a clearly laid out environment. There are only so many sites covered by Google News. Even if Google weren’t able to develop algos analyzing all source attribution meta elements out there, they have the resources to identify abuse using manpower alone. Most probably they will do both.

They clearly told us that they will compare this metadata to other signals. And those aren’t only very weak indicators like “timestamp first crawled” or “first heard of via pubsubhubbub”. It’s not that hard to isolate a particular news story, gather each occurrence as well as the source mentions within, and arrange those on a timeline with clickable links for QC folks who most certainly will identify the actual source. Even a few spot tests daily will soon reveal the sites whose source attribution meta tags are questionable, or even spammy.
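The timeline heuristic sketched above fits in a dozen lines. This is my toy version, with made-up crawl data; it just encodes the obvious rule that the earliest occurrence which credits nobody else is the best origin candidate:

```python
from datetime import datetime

# (site, first-crawled timestamp, sources the article credits) -- made-up data
occurrences = [
    ("tabloid.tv",    datetime(2010, 11, 17, 9, 30),  ["cnn.com"]),
    ("cnn.com",       datetime(2010, 11, 17, 8, 0),   []),
    ("newspaper.com", datetime(2010, 11, 17, 10, 15), ["cnn.com", "tabloid.tv"]),
]

def likely_origin(occurrences):
    """Earliest occurrence crediting no other source is the origin candidate."""
    timeline = sorted(occurrences, key=lambda o: o[1])
    for site, first_seen, credits in timeline:
        if not credits:
            return site
    return timeline[0][0]  # fall back to the earliest sighting

print(likely_origin(occurrences))  # cnn.com
```

Sites whose source attribution meta elements contradict a timeline like this one would stick out immediately, even in manual spot tests.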

If you’re still not convinced, fair enough. Go spam away. Once you’ve lost your entry on the whitelist, your free traffic from Google News, as well as from news-one-box results on conventional SERPs, is toast.

Last but not least, a fair warning

Now, if you still want to use source attribution meta elements on your non-newsworthy MFA sites to claim ownership of your scraped content, feel free to do so. Most probably Matt’s team will appreciate just another “I’m spamming Google” signal.

Not that reprinting scraped content is considered shady any more: even a former president does it shamelessly. It’s just the almighty Google in all of its evilness that penalizes you for considering all on-line content public domain.




While doing evil, reluctantly: Size, er, trust matters.

These Interwebs are a mess. One can’t trust anyone. Especially not link drops, since Twitter decided to break the Web by raping all of its URIs. Twitter’s sloppy URI gangbang became the Web’s biggest and most disgusting clusterfuck in no time.

I still can’t agree to the friggin’ “N” in SNAFU when it comes to URI shortening. Every time I’m doing evil myself at sites like bit.ly, I’m literally vomiting all over the ‘net — in Swahili, er, base36 pidgin.

Besides the fact that each and every shortened URI manifests a felonious design flaw, the major concern is that most -if not all- URI shorteners will die before the last URI they’ve shortened is irrevocably dead. And yes, shit happens all day long — RIP tr.im et al.

Letting shit happen is by no means a dogma. We shouldn’t throw away common sense and best practices when it comes to URI management, which, besides avoiding as many redirects as possible, includes risk management:

What if the great chief of Libya all of a sudden decides that gazillions of bit.ly URIs redirecting punters to their desired smut aren’t exactly compatible with the Qur’an? All your bit.ly URIs will be defunct overnight, and because you rely on traffic from places you’ve spammed with your shortened URIs, you’ll be forced to downgrade your expensive hosting plan to a shitty freehost account that displays huge Al-Qaeda or even Weight-Watchers banners above the fold of your pathetic Web pages.
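One defensive move against dying shorteners: resolve and archive the final destination of every shortened URI before you rely on it. A minimal sketch, with the HTTP lookup stubbed out as an injectable callable so nothing here hits the network (function names and the example chain are mine):

```python
def resolve(url, fetch, max_hops=10):
    """Follow a chain of short-URI redirects to the final destination.

    `fetch` returns the Location target for a redirecting URI,
    or None for a final page.
    """
    seen = set()
    for _ in range(max_hops):
        if url in seen:
            raise ValueError("redirect loop at " + url)
        seen.add(url)
        nxt = fetch(url)
        if nxt is None:
            return url
        url = nxt
    raise ValueError("too many redirects")

# Fake resolver standing in for HTTP HEAD requests against live shorteners
chain = {
    "http://bit.ly/abc": "http://goo.gl/xyz",
    "http://goo.gl/xyz": "http://example.com/page",
}
print(resolve("http://bit.ly/abc", lambda u: chain.get(u)))
```

Store the resolved URI next to the short one, and a dead shortener costs you a lookup table instead of your traffic. The hop limit and loop check also guard against the recursive redirect chains discussed below.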

In related news, even the almighty Google just pestered the Interwebs with just another URI shortener’s website: Goo.gl. It promises stability, security, and speed.

Well, on the day it launched, I broke it with recursive chains of redirects, and meanwhile creative folks like Dave Naylor have perhaps written a guide on “hacking goo.gl for fun and profit”. #abuse

Of course there are bugs in a brand new product. But Google is a company iterating code way faster than most Internet companies, and due to their huge user base and continuous testing under operating conditions they’re aware of most of their bugs. They’ll fix them eventually, and soon goo.gl -as promised- will be “the stablest, most secure, and fastest URL shortener on the Web”.

So, just based on the size of Google’s infrastructure, it seems goo.gl is going to be the most reliable one out of all evil URI shorteners. Kinda queen of all royal PITAs. But is this a good enough reason to actually use goo.gl? Not quite enough, yet.

Go ask a Googler “Can you guarantee that goo.gl will outlive the Internet?”. I got answers like “I agree with your concern. I thought about it myself. But I’m confident Google will try its very best to preserve that”. From an engineer’s perspective, all of them agree with my statement “URI shortening totally sucks ass”. But IRL the Interwebs are flooded with crappy shortURLs, and that’s not acceptable. They figured that URI shortening can’t be eliminated, so it had to be enhanced by a more reliable procedure. Hence bright folks like Muthu Muthusrinivasan, Devin Mullins, Ben D’Angelo et al created goo.gl, with mixed feelings.

That’s why I recommend the lesser evil. Not because Google is huge, has the better infrastructure, picked a better domain, and the whole shebang. I do trust these software engineers, because they think and act like me. Plus, they’ve got the resources.

I’m going goo.gl.
I’ll dump bit.ly etc.

Fineprint: However, I won’t throw away my very own URI shortener, because this evil piece of crap can do things the mainstream URI shorteners -including goo.gl- are still dreaming of, like preventing search engine crawlers from spotting affiliate links and such stuff. Shortening links alone doesn’t equal cloaking fishy links professionally.
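To be clear about what such a beast roughly does: the redirect endpoint looks at the user agent and sends crawlers somewhere harmless, while punters get the monetized link. A crude sketch; the bot signatures and URIs are illustrative, not what my shortener actually ships:

```python
# Rough sketch of crawler-aware redirecting: bots get a clean URI,
# humans get the affiliate URI. Signatures are illustrative only;
# real cloaking setups also verify crawler IPs, not just user agents.
BOT_SIGNATURES = ("googlebot", "bingbot", "slurp")

def redirect_target(user_agent, affiliate_uri, safe_uri):
    ua = (user_agent or "").lower()
    if any(sig in ua for sig in BOT_SIGNATURES):
        return safe_uri          # crawlers never see the affiliate URI
    return affiliate_uri         # punters get the monetized redirect

print(redirect_target("Mozilla/5.0 (compatible; Googlebot/2.1)",
                      "http://example.com/?aff=42",
                      "http://example.com/"))
```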




Is Google a search engine based in Mountain View, CA (California, USA)?

Is Google a search engine? Honest answer: Dunno. Google might be a search engine. It could be a fridge, too. Or a yet undiscovered dinosaur, a scary purple man-eater, a prescription drug, or my mom’s worst nightmare.

According to the search engine pre-installed in my browser, “Dogpile” is a search engine, and “Bing”, “Altavista”, even “Wikipedia”. Also a tool called “Google Custom Search” and a popular blog titled “Search Engine Land” are considered search engines, besides obscure Web sites like “Ask”, “DuckDuckGo”, “MetaCrawler” and “Yahoo”. Sorry, I can’t work with these suggestions.

So probably I need to perform a localized search to get an answer:

Is Google a search engine based in Mountain View, CA?

0.19 seconds later my browser’s search facility delivers the desired answer, instantly, at near-lightning speed. The first result for [Is Google a search engine based in Mountain View, CA] lists an entity outing itself as “Google Mountain View”, the second result is “Googleplex”.

Wait … that doesn’t really answer my question. First, the search result page says “near Mountain View”, but I’ve asked for a search engine “in Mountain View”. Second, it doesn’t tell whether Google, or Googleplex for that matter, is a search engine or a Swiss army knife. Third, a suitable answer would be either “yes” or “no”, but certainly not “maybe something that matches a term or two found in your search query could be relevant, hence I throw enough gibberish -like 65635 bytes of bloated HTML/JS code and a map- your way to keep you quiet for a while”.

I’m depressed.

But I don’t give up that easily. The office next door belongs to a detective agency. The detective in charge is willing to provide a little neighborly help, so I send him over to Mountain View to investigate that dubious “Googleplex”. The guy appears to be smart, so maybe he can reveal whether this location hosts a search engine or not.

Indeed, he’s kind of a genius. He managed to interview a GoogleGuy working in building 43, who tells him that #1 rankings for [search engine] can’t be guaranteed, but #1 rankings for long tail phrases like [Google is a search engine based in Mountain View, California, USA] can be achieved by nearly everyone. My private eye taped the conversation with a hidden camera and submitted it to America’s Funniest Home Videos:

One question remains: Why can’t a guy that knowledgeable make it happen that his employer appears as first search result for, well, [search engine], or at least [search engine based in Mountain View, California]? Go figure …

Sorry Matt, couldn’t resist. ;-)

Sebastian

spying at:

1600 Amphitheatre Parkway

Mountain View,
CA
94043

USA




!knihT

Mantra: There’s no such thing as wisdom of the crowd. Repeat. There’s no such thing as wisdom of the crowd! You’ve got a brain of your own for a reason.

THINK! There’s a huge difference between Thomas J. Watson’s THINK campaign in the 1920s, which made IBM -as a company gathering intelligent individuals- think big and therefore get big at the end of the day, and the insane, votable daily pestering from social media, forums, blogs, and whatnot that we willingly and thoughtlessly consume in today’s information ghetto. The difference is that nowadays the crowd delegates its thinking to a few well paid ‘early adopters’, bullshitters/prophets, and other conmen who dominate the Interwebs just because they’re loud enough.

In fact, all the hypes celebrated by the dumb crowds distract and mislead you on a daily basis. As a webmaster you really shouldn’t care about ‘latest discoveries’ like LDA and ADL, or search engine FUD reiterated on webmaster hangouts as advice that ‘answers any question’, for that matter.

Not that you can’t get valuable advice out of search engine webmaster guidelines at all. The opposite is true, but you need to read the source, and judge yourself based on your skills and your experience, applying common sense.

Also, there’s other good webmastering advice out there, if you’re willing to seek(needle, haystack=wget('http://google.com/search?q=seo|sem|webdev|webdesign|webmastering|internet-marketing&num=n')). Don’t. Rely on yourself, and your capability to interpret facts, not on speculation spread by ‘authoritative’ sources.

It’s so much easier to join a huge community or two, and to believe/implement/adapt what’s ‘hot’, or what’s repeated often, respectively. Actually, that’s a crappy approach, because the very few small communities that openly discuss things that matter are out of reach for the average webmaster, chatting and networking protected by /var/inner-circle/private/.htpasswd.

Here are the components of a public webmaster/SEO/IM community, listed by revenues in ascending order (that’s -1 before zero and 1), which equals alleged trustworthiness/importance in descending order:

  • Many fanboys (m) and groupies (f) who don’t have a clue, but vote up everything that an entity listed below suggests. They will even speak out at other, alien places if their idols (see below) get outed for bullshitting anywhere. They go by the title of junior members.
  • A few semi-professional whores who operate blogs/forums/aff-programs themselves, and manage to steal a tiny portion of the floating popularity to feed their pathetic outlets. Those are considered senior members.
  • A handful of shiny rockstars who silently suck up to their master (see below). They may or may not participate monetarily, and have the power of moderators.
  • One single guy who laughs all the way to the bank.

Looked at in full daylight: when you join a crowd you become cannon fodder, and your financial misery is considered collateral damage. Lurking (silently listening to crowds) is not exactly cheaper, and certainly doesn’t make you an unsung hero, because you’ll totally share the crowd’s misery. Your balance sheet doesn’t lie, usually.

Reboot your brain before you jump on popular bandwagons. Don’t listen to advice that’s freely available, not even mine (WTF, you know what I mean). If somebody discusses ethics (hat colors), then run for your life, because ethics will kill your revenue. When it comes to SEO, it helps to evaluate (search engine/any) advice under the premise “what would I do, and what could I achieve (technically), if I’d run this SE?”.

It’s all about you. Don’t care about the well-being of search engines that suffer from WebSpam, or the healthiness of affiliate programs that make shitloads of green out of it but tell you ‘thou shalt not spam’ because they sneakily dominate your SERPs with their own graffiti. WebSpam is what gets you banned; everything else just makes you money. Test for yourself, and don’t take advice without proof that you can easily replicate on your very own servers.

Do not risk your earnings -that is, your existence!- with strategies and tactics you can’t handle over the long haul, just because some selfish moron tells you so.




WTF have Google, Bing, and Yahoo cooking?

Folks, I’ve got good news. As a matter of fact, they’re so good that they will revolutionize SEO. A little bird told me that the major search engines secretly teamed up to solve the problem of context and meaning as a ranking factor.

They’ve invented a new Web standard that allows content producers to steer search engine ranking algos. Its code name is ADL, probably standing for Aided Derivative Latch, a smart technology based on the groundwork of addressing tidbits of information developed by Hollerith and Neumann decades ago.

According to my sources, ADL will be launched next month at SMX East in New York City. In order to get you guys primed in a timely manner, here I’m going to leak the specs:

WTF - The official SEO standard, supported by Google, Yahoo & Bing

Word Targeting Funnel (WTF) is a set of indexer directives that get applied to Web resources as meta data. WTF comes with a few subsets for special use cases, details below. Here’s an example:

<meta name="WTF" content="document context" href="http://google.com/search?q=WTF" />

This directive tells search engines that the content of the page is closely related to the resource supplied in the META element’s HREF attribute.

As you’ve certainly noticed, you can target a specific SERP, too. That’s somewhat complicated, because the engineers couldn’t agree which search engine should define a document’s search query context. Fortunately, they finally found this compromise:

<meta name="WTF" content="document context" href="http://google.com/search?q=WTF || http://www.bing.com/search?q=WTF || http://search.yahoo.com/search?q=WTF" />

As far as I know, this will even work if you change the order of URIs. That is, if you’re a Bing fanboy, you can mention Bing before Google and Yahoo.
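If an indexer ever honored this ‘compromise’, parsing the attribute would be trivial: split on ‘||’ and keep the order of engines. A hypothetical sketch (the parse_wtf_href() helper is mine, since no such indexer exists):

```python
def parse_wtf_href(href):
    """Split the multi-engine HREF value of a (fictional) WTF META
    element into an ordered list of SERP URIs."""
    return [uri.strip() for uri in href.split("||")]

# The Bing-fanboy ordering from the spec example above:
uris = parse_wtf_href(
    "http://google.com/search?q=WTF || "
    "http://www.bing.com/search?q=WTF || "
    "http://search.yahoo.com/search?q=WTF")
print(uris[0])  # prints http://google.com/search?q=WTF
```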

A more practical example, taken from the sales pitch for viagra of an affiliate who participated in the BETA test, leads us to the first subset:

Subset WTFm — Word Targeting Funnel for medical terms

<meta name="WTF" content="document context" href="http://www.pfizer.com/files/products/uspi_viagra.pdf" />

This directive will convince search engines that the offered product indeed is not a clone like Cialis.

Subset WTFa — Word Targeting Funnel for acronyms

<meta name="WTFa" content="WTF" href="http://www.wtf.org/" />

When a Web resource contains the acronym “WTF”, search engines will link it to the World Taekwondo Federation, not to Your Ranting and Debating Resource at www.wtf.com.

Subset WTFo — Word Targeting Funnel for offensive language

<meta name="WTFo" content="meaning of terms" href="http://www.noslang.com/" />

If a search engine doesn’t know the meaning of terms I really can’t quote here, it will look them up in the Internet Slang Directory. You can define alternatives, though:

<meta name="WTFo" content="alternate meaning of terms" href="http://dictionary.babylon.com/language/slang/low-life-glossary/" />

WTF, even more?

Of course we’ve got more subsets, like WTFi for instant searches. Because I appreciate unfair advantages, I won’t reveal more. Just one more goody: it works for PDF, Flash content and heavily ajax’ed stuff, too.

This is the very first newish indexer directive that search engines introduce with support for both META elements and HTTP headers. As with the X-Robots-Tag, you can use an X-WTF-Tag HTTP header:
X-WTF-Tag: Name: WTFb, Content: SEO Bullshit, Href: http://seobullshit.com/
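Parsing that header is as simple as it looks: split fields on ‘, ’ and each field on the first ‘: ’, so the Href URI’s own colon survives. A hypothetical sketch, since no crawler actually supports this:

```python
def parse_x_wtf_tag(header_value):
    """Parse a (fictional) X-WTF-Tag header value like
    'Name: WTFb, Content: SEO Bullshit, Href: http://...' into a dict.
    Naive on purpose: assumes field values contain no ', '."""
    fields = {}
    for part in header_value.split(", "):
        key, _, value = part.partition(": ")  # split on first ': ' only
        fields[key] = value
    return fields

tag = parse_x_wtf_tag(
    "Name: WTFb, Content: SEO Bullshit, Href: http://seobullshit.com/")
print(tag["Href"])  # prints http://seobullshit.com/
```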


As for the little bird, well, that’s a lie. Sorry. There’s no such bird. It’s the bugs I left last time I visited Google’s labs:
<meta name="WTF" content="bug,bugs,bird,birds" href="http://www.spylife.com/keysnoop.html" />



