Archived posts from the 'Crap' Category

One out of many sure-fire ways to avoid blog comments

If your name is John Doe and you don’t blog, this rant is not for you, because you don’t suffer from truncated form field values. Otherwise, check here whether you annoy comment authors on your blog or not. “Annoy” is the polite version, by the way; I’m pissed off on 99% of the blogs I read. It took me years to finally write about this issue. Today I had enough.

Look at this form designed especially for John Doe (john@doe.com) at http://doe.com/, then duplicated onto all blogs out there, and imagine you’re me, about to comment on a great post:

I can’t see what I’ve typed, and even my browser’s suggested values are truncated because the input field is way too narrow. Sometimes I leave post URLs with a comment, so when I type the first characters of my URL, I get a long list of shortened entries from which I can’t select anything. When I’m in a bad mood I swear and surf on without commenting.

I’ve looked at a fair number of WordPress templates recently, and I admit that crappy comment forms are a minor issue compared to the amount of duplicated hogwash most theme designers steal from each other. However, I’m sick of crappy form usability, so I’ve changed my comment form today:

Now the input fields should display the complete input values in most cases. My content column is 500 pixels wide, so size="42" leaves enough space when a visitor surfs with bigger fonts that enlarge the labels. If that’s not enough for very long email addresses or URLs, I’ve added title attributes and onchange triggers which display the new value as a tooltip when the visitor navigates to the next input field. Also, I’ve maxed out the width of the text area. I hope this 60-second hack improves the usability of my comment form.
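For illustration, here’s a minimal sketch of such an input field. The size value matches my form; the field name, id and label are just placeholders, and the onchange handler is one simple way to copy the full value into the tooltip:

<label for="comment-author-url">Website</label>
<input type="text" name="url" id="comment-author-url" size="42"
       title="Your URL"
       onchange="this.title = this.value;" />

When the visitor tabs to the next field, onchange fires and the title attribute (rendered as a tooltip on hover) carries the complete value, so even input that’s visually truncated can be double-checked.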

When do you fire up your editor and FTP client to make your comment form convenient? Even tiny enhancements can make your visitors happier.




If you’re not an Amway millionaire, avoid BlogRush like the plague!

Do not click BlogRush affiliate links before you’re fully awake. Oh no, you did it … now praise me because I’ve sneakily disabled the link, and read on.

    My BlogRush Summary:

  1. You won’t get free targeted traffic to your niche blog.
  2. You’ll make other people rich.
  3. You’ll piss off your readers.
  4. You’ll promote BlogRush and get nothing in return.
  5. You shouldn’t trust a site fucking up the very first HTTP request.
  6. Pyramid schemes just don’t work for you.

You won’t get free targeted traffic to your niche blog

The niches you can choose from are way too broad. When you operate a niche blog like mine, you can choose “Marketing” or “Computers & Internet”. Guess what great traffic you gain with headlines about elegant click tracking or debunking meta refresh myths from blogs selling MySpace templates to teens or RFID chips to wholesalers? In reality you get hits via blogs selling diet pills to desperate housewives (from my referrer stats!) or viagra to old age pensioners, if you see a single BlogRush referrer in your stats at all. (I’ve read a fair amount of the hype about actually targeted headline delivery in BlogRush widgets. I just don’t buy it from what I see on blogs I visit.)

You’ll make other people rich

Look at the BlogRush widget in your or my sidebar, then visit lots of other niche blogs which are focused more or less on marketing related topics. All these widgets carry ads for generic marketing blogs pitching just another make-me-rich-on-the-Internet-while-I-sleep scheme or their very own affiliate programs. These blogs, all early adopters, will hoard BlogRush’s traffic potential. Even if you can sign up at the root, which places you at the top of the pyramid’s referral structure, you can’t avoid that the big boys with gazillions of owed impressions in BlogRush’s “marketing” queue dominate all widgets out there, yours included. (I heard that John Reese will try to throw a few impressions at tiny blogs before niche bloggers get upset. I doubt that will be enough to keep his widgets up.)

You’ll piss off your readers

Even if some of your readers recognize your BlogRush widget, they’ll wonder why you recommend totally unrelated generic marketing gibberish on your nicely focused blog. Yes, every link you put on your site is a recommendation. You vouch for this stuff when you link out, even when you don’t control the widget’s content. Read Tamar’s Why the Fuss about BlogRush? to learn why this clutter is useless for your visitors. Finally, the widget slows your site down and your visitors hate long loading times.

You’ll promote BlogRush and get nothing in return

When you follow the advice handed out by BlogRush and pitch their service with posts and promotional links on your blog, you help BlogRush skyrocket at the search engines. That will bring them a lot of buzz, but you get absolutely nothing for your promotional efforts because your referral link doesn’t land on the SERPs.

You shouldn’t trust a site fucking up the very first HTTP request

Ok, that’s a geeky issue and you don’t need to take it very seriously. Request your BlogRush affiliate link with a plain user agent that neither accepts cookies nor executes client-side scripting, then read the headers. BlogRush does a 302 redirect to their home page, rescuing your affiliate ID in an unclosed base href directive. Chances are you’ll never get the promised credits from upsold visitors using uncommon user agents or browser settings, because they don’t manage their affiliate traffic properly.
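If you want to reproduce that check yourself, here’s a minimal sketch, assuming Node 18+ with its built-in fetch API; the URL is just a placeholder for your affiliate link:

// check-redirect.mjs: inspect the response without following the redirect
const url = 'http://example.com/your-affiliate-link'; // placeholder

const res = await fetch(url, {
  redirect: 'manual',                           // report the redirect instead of following it
  headers: { 'User-Agent': 'plain-agent/1.0' }, // a deliberately dumb user agent
});

console.log(res.status);                        // 302 = temporary redirect
console.log(res.headers.get('location'));       // where it points, and whether your ID survives

Any HTTP client that lets you suppress redirect handling will do; the point is to look at the raw status line and headers a dumb user agent actually receives.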

Pyramid schemes just don’t work for you

Unfortunately, common sense is not as common as you might think. I’m guilty of that too, but I’ll leave my widget up for a while to monitor what it brings in. The promise of free traffic is just too alluring, and in fact you can’t lose much. If you want, experiment with it and waste some ad space, but pull it once you’ve realized that it’s not worth it.

Disclaimer

This post was inspired by common sense, life experience, and a shitload of hyped crap posts on Sphinn’s upcoming list, where folks even created multiple accounts to vote their BlogRush sales pitches to the home page. If anything I’ve said here is not accurate or at least plausible, please submit a comment to set the record straight.




How to fuck up click tracking with the JavaScript onclick trigger

There’s a somewhat heated debate over at Sphinn, and at many other places as well, where folks calling each other guppy and dumbass try to figure out whether a particular directory’s click tracking sinks PageRank distribution or not. Besides interesting replies from Matt Cutts, an essential result of this debate is that Sphinn will implement a dumbass button.

Usually I wouldn’t write about desperate PageRank junkies going cold turkey, not even as a TGIF post, but the reason why this blog directory most probably doesn’t pass PageRank is interesting, because it has nothing to do with onclick myths. Of course the existence of an intrinsic event handler (aka onclick trigger) in an A element alone has nothing to do with Google’s take on the link’s intention, hence an onclick event itself doesn’t pull a link’s ability to pass Google-juice.

To fuck up your click tracking you really need to forget everything you’ve ever read in Google’s Webmaster Guidelines. Unfortunately, Web developers usually don’t bother reading dull stuff like that and code the desired functionality in a way that makes Google, as well as other search engines, puke on the generated code. However, ignorance is no excuse when Google talks best practices.

Let’s look at the code. Code reveals everything, and not every piece of code is poetry. That’s crap:
.html: <a href="http://sebastians-pamphlets.com"
id="1234"
onclick="return o('sebastians-blog');">
http://sebastians-pamphlets.com</a>

.js: function o(lnk){ window.open('/out/'+lnk+'.html'); return false; }

The script /out/sebastians-blog.html counts the click and then performs a redirect to the HREF’s value.

Why can and most probably will Google consider the hapless code above deceptive? A human visitor using a JavaScript enabled user agent clicking the link will land exactly where expected. The same goes for humans using a browser that doesn’t understand JS, and users surfing with JS turned off. A search engine crawler ignoring JS code will follow the HREF’s value pointing to the same location. All final destinations are equal. Nothing wrong with that. Really?

Nope. The problem is that Google’s spam filters can analyze client-side scripting, but don’t execute JavaScript. Google’s algos don’t ignore JavaScript code, they parse it to figure out the intent of links (and other stuff as well). So what does the algo do, see, and how does it judge eventually?

It understands the URL in HREF as the definitive and ultimate destination. Then it reads the onclick trigger and fetches the external JS files to look up the o() function. It will notice that the function returns an unconditional FALSE. The algo knows that the return value FALSE will prevent JS-enabled user agents from loading the URL provided in HREF. Even if o() did nothing else, a human visitor with a JS enabled browser will not land at the HREF’s URL when clicking the link. Not good.

Next, the window.open statement loads http://this-blog-directory.com/out/sebastians-blog.html, not http://sebastians-pamphlets.com (truncating the trailing slash is a BS practice as well, but that’s not the issue here). The URL put in HREF and the URL built in the JS code aren’t identical. That’s a full stop for the algo. It probably does not request the redirect script http://this-blog-directory.com/out/sebastians-blog.html to analyze its header, which sends a Location: http://sebastians-pamphlets.com line. (Actually, this request would tell Google that there’s no deceitful intent, just plain hapless and overcomplicated coding, which might result in a judgement like “unreliable construct, ignore this link” or so, depending on other signals available.)

From the algo’s perspective the JavaScript code performs a more or less sneaky redirect. It flags the link as shady and moves on. Guess what happens in Google’s indexing process with pages that carry tons of shady links … those links not passing PageRank sounds like a secondary problem. Perhaps Google is smart enough not to penalize legit sites for, well, hapless coding, but that’s sheer speculation.

However, shit happens, so every once in a while such a link will slip through and may even appear in reverse citation results like link: searches or Google Webmaster Central link reports. That’s enough to fool even experts like Andy Beard (maybe Google even shows bogus link data to mislead SEO research of any kind? Never mind).

Ok, now that we know how not to implement onclick click tracking, here’s an example of a bullet-proof method to track user clicks with the onclick event:
<a href="http://sebastians-pamphlets.com/"
id="link-1234"
onclick="return trackclick(this.href, this.name);">
Sebastian's Pamphlets</a>
trackclick() is a function that calls a server-side script to store the click and returns TRUE, without doing a redirect or opening a new window.
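A minimal sketch of such a function, assuming a hypothetical logging script at /track.php that simply records its query string, could look like this:

function trackclick(url, id) {
  // fire-and-forget request to a server-side logging script (placeholder URL)
  var beacon = new Image();
  beacon.src = '/track.php?url=' + encodeURIComponent(url) + '&id=' + encodeURIComponent(id);
  return true; // let the browser follow the HREF as usual
}

The image beacon doesn’t block navigation and the HREF stays untouched; in practice a few clicks may get lost when the browser tears the page down quickly, but that’s the price of keeping the link search engine friendly.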

Here is more information on search engine friendly click tracking using the onclick event. The article is from 2005, but not outdated. Of course you can add onclick triggers to all links with a few lines of JS code. That’s good practice because it avoids clutter in the A elements and makes sure that every (external) link is trackable; a sketch follows below. For this more elegant way to track clicks the warnings above apply too: don’t return false and don’t manipulate the HREF’s URL.
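As a sketch of that more elegant variant, assuming the trackclick() function from above, something along these lines attaches the handler to every external link once the page has loaded:

window.onload = function() {
  var links = document.getElementsByTagName('a');
  var base = location.protocol + '//' + location.host;
  for (var i = 0; i < links.length; i++) {
    // only track links leaving this host; adjust the condition to taste
    if (links[i].href.indexOf(base) !== 0) {
      links[i].onclick = function() {
        return trackclick(this.href, this.id); // returns true, HREF stays untouched
      };
    }
  }
};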




If you free-host your blog, flee now!

Dear free-hosted blogger, here is food for thought, err, my few good reasons to escape the free-hosted blogging hell.

If you don’t own the domain, you don’t own your content. In other words: all free hosts steal content, skim traffic, share your reputation, and whatnot, indulging the evil side of life 2.0. You get what you pay for, and you pay with your content, your reputation, and a share of your traffic.

Of course you hold the copyright, but not the full power of disposition. Sooner or later (rather sooner, if your blogging experiment becomes a passion and your blog an asset) you’ll want to leave the free host, at least if you don’t decide to abandon your blog. At this point you’ll discover that your ideal blogging world is in reality a dungeon from which you can’t escape with your bacon saved.

For the sake of this discussion I don’t need to argue worst-case scenarios like extremely crappy free hosts which skim a fair amount of your traffic to sell it or to feed their cash cows, plaster your pages with their ads, don’t offer content export interfaces, and more crap like that. I’m talking about a serious operation: the free blogging service from the very nice folks at Google.

Web content is more than a piece of text you wrote. A piece of text anywhere on the ‘net has absolutely no value without the votes which make it findable. Hence the links pointing to a blog post, the feed subscriptions, the comments, and your text together form the whole we refer to as content.

Your text lives in Blogger’s database, which you can tap through the API. Say you want to move your blog to WordPress: what can you pull using the WordPress Blogger importer? Posts and comments, but not all properties of the comments, and some comments are damaged.

  • Many of your commenters are signed in with Blogger, commenting under their Blogger account, so you don’t get their email addresses and URLs.
  • Even comments where the author typed in an email addy and URL come in with the author’s name only. I admit that may be a flaw in the WordPress script, but it sucks.
  • Blogger castrates links on save, so links embedded in comments are nofollow’ed. Adding nofollow crap to moderated comments on the fly is evil enough, but storing and exporting condomized comments contributed to your posts counts as damage to property.

According to Google’s very own rules, the canonical procedure to move a page to another URL is a permanent redirect. Blogger, like most “free” platforms, doesn’t allow server-side scripting, so you can’t 301-redirect your posts from blogspot.com to your new blog’s pages. Blogger’s technical flaws (the permalink variable is not yet populated in the HEAD section of the template, hence it can’t be used to redirect to the actual post’s new location with a zero meta refresh) dilute each post’s PageRank because it can’t be transferred to its new location directly. Every hop (an internal link on the new blog pointing to the post from the meta redirect’s destination page) devours a portion of the post’s PageRank.
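For comparison, the page-by-page redirect Blogger can’t deliver would be a one-liner if the template had access to each post’s permalink. A zero meta refresh in the HEAD looks like this (the target URL is a placeholder, and that’s exactly the problem: without a populated permalink variable you can only hard-code one destination for all pages):

<meta http-equiv="refresh" content="0; url=http://example.com/new-location-of-this-post/" />

A real 301 would be better still, but that requires the server-side scripting free hosts don’t give you.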

The missing capability to redirect properly, that is page by page, from blogspot to another blog hinders the traffic management a blogger should be able to do, and results in direct as well as indirect traffic losses. It’s still your content, but you don’t have the full power of disposition, and that’s theft, err, part of your payment for hosting and services.

PageRank is computed based on a model emulating surfing behavior. Following this theory, a loss of PageRank equals a loss of human traffic. The reality is that you don’t lose traffic only in theory. You lose a fair number of visitors who have clicked a link to a particular post and land, via the all-pages-to-one-URL redirect, on a huge page carrying links to all kinds of posts. A surfer not getting the expected content hits the back button faster than you can explain why this shabby redirect is not your fault. And yes, PageRank is a commodity. The post’s new location will suffer from a loss of search engine traffic, because PageRank is a ranking factor too.

As defined above, a post’s inbound links, as well as the PageRank gained thereof, belong to the post, and Blogger steals, err, takes away a fair amount of that when you move away from blogspot. Blogger also steals, err, collects the fee (link love and, in case you move, click-throughs from the author’s link) you owe your commenters for contributing content to your blog, regardless of whether you stay or go away.

Of course you can jump through overcomplicated hoops by first transferring the Blogger blog to its own domain and publishing it there for a while before you install WordPress over the Blogger blog. Blogger’s domain mapping will then do page-by-page redirects, but you’re stuck with the crappy URL structure (timestamp fragments in post URLs). I mean, when I want to cross a street, is it fair to tell me that I can do that anytime, but if I’d like to arrive unhurt I must take the long route, that is, a detour orbiting the earth?

Having said that, there are a few more disadvantages with Blogger even before you move to another platform on your own domain.

  • Blogger inserts links seducing your visitors into leaving your blog, and links to itself (“powered by Blogger” images) sucking your PageRank.
  • If you have to change a post’s title, Blogger changes the URL too. You can’t avoid that, so all existing traffic lands on Blogger’s very own 404 page on blogger.com. The 404 page should be part of the template, hosted on yourblog.blogspot.com, so that you can keep your visitors.
  • Commenting on a Blogger blog is a nightmare with regard to usability, so you miss out on shitloads of user contributed contents.
  • Blogger throws nofollow crap on your comments like confetti, even when you’ve turned comment moderation and captchas on, which should prove that you have control over outgoing links in the comments.
  • There is a saboteur in Google’s Blogger team. Every now and then Blogger inserts “noindex” meta tags, even on Google’s very own blogs, or silently deindexes your stuff at all search engines in other ways.
  • Often the overcrowded servers of blogger.com and/or blogspot.com are so slow that you can neither post nor approve comments, and your visitors get nothing but the hourglass for 30 minutes and then, all of a sudden, a fragment of broken XML. This unreliable behavior does not exactly support your goal of building a loyal readership and keeping recurring visitors happy. You suffer when, by accident, a few blogs on your shared box get slashdotted, dugg, stumbled …, and Blogger can’t handle those spikes.
  • Ok, better don’t get me started on a Blogger rant … ;)

By the way, we’re in the same boat. When I started my blogging experiment in 2005 I was lazy enough to choose Blogger, although after many years of webmastering, providing Webmaster support, and rescuing content from (or writing off content on) free hosts, I should have known that I was going to run into serious trouble. So do yourself a favor and flee now. Blogger is not meant as a platform for professional blogs. It’s Google’s content generator for AdSense. That’s a fair deal for personal blogs, but unacceptable for corporate blogs.




Buying cheap viagra algorithmically

Since Google can’t manage to clean up [buy cheap viagra], let’s do it ourselves. Go find a somewhat trusted search blog mentioning “buy cheap viagra” somewhere in its archives and link to the post with slightly diversified anchor text like “how to buy cheap viagra online”. Matt deserves a #1 spot, by the way, so spread many links …

Then, once Matt is annoyed enough and Google has kicked the unrelated stuff out of this search, hopefully my viagra spam will rank as deserved again ;)

Update a few hours later: Matt ranks #1 for [buy cheap viagra algorithmically]:
[Screenshot: Matt Cutts’s first spot for [buy cheap viagra algorithmically]]
His ranking for [buy cheap viagra] fell about 10 positions to #17, but for [buy cheap viagra online] he’s still on the first SERP, now at position #10 (#3 yesterday). Interesting. It seems that Google’s newish turbo blog indexing influences the rankings of pages linked from blog posts rather quickly, but the effect doesn’t exactly last long.

Related posts:
Negative SEO At Work: Buying Cheap Viagra From Google’s Very Own Matt Cutts - Unless You Prefer Reddit? Or Topix? by Fantomaster
Trust + keywords + link = Good ranking (or: How Matt Cutts got ranked for “Buy Cheap Viagra”) by Wiep




Why eBay and Wikipedia rule Google’s SERPs

It’s hard to find an obscure search query like [artificial link] which doesn’t deliver eBay spam or a Wikipedia stub within the first few results at Google. Although both Wikipedia and eBay are large sites, the Web is huge, so two such different sites shouldn’t dominate the SERPs for that many topics. Hence it’s safe to say that many nicely ranked search results at Googledia, pulled from eBaydia, are plain artificially positioned non-results.

Curious why my beloved search engine fails so badly, I borrowed a Google-savvy spy from GHN and sent him to Mountain View to uncover the eBaydia ranking secrets. He came back with lots of pay-dirt scraped from DVDs in the safe of building 43. Before I sold Google’s ranking algo to Ask (the price Yahoo! and MSN offered was laughable), I figured out why Googledia prefers eBaydia from comments in the source code. Here is the unbelievable story of a miserable failure:

When Yahoo! launched Mindset, Larry Page and Sergey Brin threw chairs out of anger because Google wasn’t able to accomplish such a simple task. The engineers, eager to fulfill their founders’ wishes asap, tried to integrate Mindset-functionality without changing Google’s fascinatingly simple search interface (that means without a shopping/research slider). Personalized search still lived in the labs, but provided a somewhat suitable API (mega beta): scanSearchersBrainForContext([search query]). Not knowing that this function of personalized search polls a nano-bugging-device (pre alpha) which Google had not yet released, nor implanted into any searcher’s brain at this time, they made use of that piece of experimental code to evaluate the search query’s context. Since the method always returned “false”, but they had to deliver results quickly, they made up some return values to test their algo tweaks:

/* debug - praying S&L don't throw more chairs */
if (scanSearchersBrainForContext($searchQuery) === false) {
    $contextShopping = "%ebay%";
    $contextResearch = "%wikipedia%";
    $context = both($contextShopping, $contextResearch);
}
else {
    [pretty complex algo]
}

This worked fine and found its way into the ranking algo under time pressure. The result is that with each and every search query where a page from eBay and/or Wikipedia is in the raw result set, those pages get a ranking boost. Sergey was happy because eBay is generally listed on page #1, and Larry likes the Wikipedia results on the first SERP. Tell me, why the heck should the engineers comment out these made-up return values? No engineer on this planet likes flying chairs, especially not in his office.

PS: Some SEOs push Wikipedia stubs too.




Which Sebastian Foss is a spammer?

Obviously pissed by my post Fraud from the desk of Sebastian Foss, Sebastian Foss sent this email to Smart-IT-Consulting.com:

Remove your insults from your blog about my products and sites… as you may know promote-biz.net is not registered to my name or my company.. just look it up in some whois service. This is some spammer who took my software and is now selling it on his spammer websites. Im only selling my programs under their original .com domains and you did not receive any email from me since im only using doube-optin lists.

You may not know it - but insulting persons and spreading lies is under penalty.

Sebastian Foss
Sebastian Foss e-trinity Marketing Inc.
sebastian@etrinity-mail.com

Well, that’s my personal blog, and I have a professional opinion about the software Sebastian Foss sells; more on that later. It’s public knowledge that spammers register domains under several entities to obfuscate their activities. I’m not a fed, and I’m not willing to track down each and every multiple or virtual personality of a spammer, so I admit that there’s at least a slight possibility that the Sebastian Foss spamming my inbox from promote-biz.net is not the Sebastian Foss who wrote and sells the software promoted by the email spammer Sebastian Foss. Since I still receive email spam from the desk of Sebastian Foss at promote-biz.net, I think there’s no doubt that this Sebastian Foss is a spammer. Well, Sebastian Foss himself calls him a spammer, and so do I. Confused? So am I. I’ll update my other post to reflect that.

Now that we’ve covered the legal stuff, let’s look at the software from the desk of Sebastian Foss.

  • Blog Blaster claims to submit “ads” to 2,000,000 sites. Translation: Blog Blaster automatically submits promotional comments to 2 million blogs. The common description of this kind of “advertising” is comment spam.
    Sebastian Foss tells us that “Blog Blaster will automatically create thousands of links to your website - which will rank your website in a top 10 position!”. The common description of this link building technique is link spam.
    The sales pitch signed by Sebastian Foss explains: “I used it [Blog Blaster] to promote my other website called ezinebroadcast.com and Blog Blaster produced thousands of links to ezinebroadcast.com - resulting in a #1 position in Google for the term ‘ezine advertising service’”. So I understand that Sebastian Foss admits that he is a comment spammer and a link spammer.
    I’d like to see the written permissions of 2,000,000 bloggers allowing Sebastian Foss and his customers to spam their blogs: “Advertising using Blog Blaster is 100% SPAM FREE advertising! You will never be accused of spamming. Your ads are submitted to blogs whose owners have agreed to receive your ads.” Laughable, and obviously a lie. Did Sebastian Foss remember that “spreading lies is under penalty”? Take care, Sebastian Foss!
  • Feed Blaster, with a very similar sales pitch, aims to create the term feed spam. Also, it seems that FeedBlaster™ is a registered trademark of DigitalGrit Inc. And I don’t think that Microsoft, Sun and IBM are happy to spot their logos on Sebastian Foss’ site e-trinity Internetmarketing GmbH.
  • The Money License System aka Google Cash Machine seems to slip through a legal loophole. Maybe it’s not explicitly illegal to sell software built to trick Google AdWords, AdSense, or ClickBank, but using it will result in account terminations and, AFAIK, legal action too.
  • Instant Booster claims to spam search engines, and it does, according to many reports. The common term applied to those techniques is web spam.

All these domains (and there are countless more sites selling similar scams from the desk of Sebastian Foss) are registered by Sebastian Foss or by his companies e-trinity Internetmarketing GmbH and e-trinity Marketing Inc.

He’s in the business of newsgroup spam, search engine spam, comment spam … probably there’s no target left out. Searching for Sebastian Foss scam and similar search terms leads to tons of rip-off reports.

He’s even too lazy to rephrase his sales pitches: click a few of the links provided above, then search for quoted phrases you saw in every sales pitch to get the big picture. All that may be legal in Germany, I couldn’t care less, but it’s not legit. Creating and selling software for the sole purpose of spamming makes the software vendor a spammer. And he’s proud of it. He openly admits that he uses his software to spam blogs, search engines, newsgroups and whatever. He may make use of affiliates and virtual entities who send out the email spam, perhaps he got screwed by a Chinese copycat selling his software via email spam, but is that relevant when the product itself is spammy?

What do you think, is every instance of Sebastian Foss a spammer? Feel free to vote in the comments.

Update 08/01/2007: Here is the next email from the desk of Sebastian Foss:

Hi,
thanks for the changes on your blog entry - however like i mentioned if you look up the domains which were advertised in the spam mails you will notice that they are not registered to me or my company. You can also see that visiting the sites you will see some guy took my products and is selling them for a lower price on his own websites where he is also copying all of my graphic files. The german police told me that they are receiving spam from your forms and that it goes directly to their trash… however please remove your entries about me from your blog - There is no sense in me selling my own products for a lower price on some cheap, stolen websites - if that would make sense then why do i have my own .com domains for my products ? I just want to make clear that im not sending out any spam mails - please get back to me.

Thanks,
Sebastian

Sebastian Foss
e-trinity Internetmarketing GmbH
sebastian@etrinity-mail.com

It deserves just a short reply:

It makes perfect sense to have an offshore clone in China selling the same outdated and pretty questionable stuff a little cheaper. This clone can do that because, first, there are next to no costs like taxes and so on, and second, he does it by spamming my inbox on a daily basis, hence he probably sells a lot of the ‘borrowed’ stuff. Whether or not the multiple Sebastian Fosses are the same natural person is not my problem. I claim nothing, but leave it up to you, dear reader: speculation, common sense, and probability calculation.




Blogger abuses rel-nofollow due to ignorance

I had planned a full upgrade of this blog to the newest Blogger version this weekend. The one and only reason to do the upgrade was the idea that perhaps I could disable the auto-nofollow functionality in the comments. Well, what I found was a way to dofollow the author’s link by editing the <dl id='comments-block'> block, but I couldn’t figure out how to disable the auto-nofollow on embedded links.

Considering the hassle of converting all the template hacks into the new format, and the risk of most probably losing the ability to edit code my way, I decided to stick with the old template. It just makes no sense for me to dofollow the author’s link when a comment author’s links within the content get nofollow’ed automatically. Andy Beard and others will hate me now, so let me explain why I don’t move this blog to my own domain using less insane software like WordPress.

  • I own, or author on, various WordPress blogs. Google’s time to index for posts and updates from this blogspot thingy is 2-3 hours (Web search, not blog search). My WordPress blogs, even with higher PageRank, suffer from a way longer time to index.
  • I can’t afford the time to convert and redirect 150 posts to another blog.
  • I hope that Google/Blogger can implement reasonable change requests (most probably that’s just wishful thinking).

That said, WordPress is way better software than Blogger. I’ll have to move this blog if Blogger is not able to fulfill at least my basic needs. I’ll explain below why I think that Blogger lacks any understanding of the rel-nofollow semantics. In fact, they throw nofollow crap on everything they can get their hands on. It seems to me that they won’t stop jeopardizing the integrity of the Blogosphere (at least where they control the linkage) until they get bashed really hard by a Googler who understands what rel-nofollow is all about. I nominate Matt Cutts, who invented and evolved it, and who does not tolerate BS.

So here is my wishlist. I want (regardless of the template type!)

  • A checkbox “apply rel=nofollow to comment author links”
  • A checkbox “apply rel=nofollow to links within comment text”
  • To edit comments, for example to nofollow links myself, or to remove offensive language
  • A checkbox “apply rel=nofollow to links to label/search pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to label/search pages”
  • A checkbox “apply rel=nofollow to links to archive pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to archive pages”
  • A checkbox “apply rel=nofollow to backlink listings”

As for the comments functionality, I’d understand when these options get disabled when comment moderation is set to off.

And here are the nofollow-bullshit examples.

  • When comment moderation and captchas are activated, why are comment author links as well as links within the comments nofollow’ed? Does Blogger think their bloggers are minor retards? I mean, when I approve a comment, I do vouch for it. But wait! I can’t edit the comment, so a low-life link might slip through. Ok, then let me edit the comments.
  • When I’ve submitted a comment, the link to the post is nofollow’ed. This page belongs to the blog, so why the fudge does Blogger nofollow navigational links? And if it makes sense for a weird reason not understandable by a simple webmaster like me, why is the link to the blog’s main page, as well as the link to the post one line below, not nofollow’ed? Linking to the same URL with and without rel-nofollow on the same page deserves a bullshit award.
  • On my dashboard, Blogger features a few blogs as “Blogs of Note”, all links nofollow’ed. These are blogs recommended by the Blogger crew. That means they have reviewed them and the links are clearly editorial content. They’re proud of it: “we’ve done a pretty good job of publishing a new one each day”. Blogger’s very own Blogs of Note blog does not nofollow the links, and that’s correct.

    So why the heck are these recommended blogs nofollow’ed on the dashboard?

  • Blogger inserted robots meta tags “nofollow,noindex” on each and every blog hosted outside the controlled blogspot.com domain earlier this year.
  • Blogger inserted robots meta tags “nofollow,noindex” on Google blogs a few days ago.

If Blogger’s recommendation “Check google.com. (Also good for searching.)” is an honest one, why don’t they invest a few minutes to educate themselves on rel-nofollow? I mean, it’s a Google block/avoid-indexing/ranking thingy they use to prevent Google.com users from finding valuable content hosted on their own domains. And they annoy me. And they insult their users. They shouldn’t do that. That’s not smart. That’s not Google-ish.




Danny Sullivan did not strip for Matt Cutts

Nope, this is not recycled news. I’m not referring to Matt asking Danny to strip off his business suit, although the video is really funny. I want to comment on something Matt didn’t say recently, but promised to do soon (again).

Danny Sullivan stripped perfectly legit code from Search Engine Land because he was accused of being a spammer, although the CSS code in question is in no way deceitful.

StandardZilla slams poor Tamar, who was just reporting a WebProWorld thread, but does an excellent job of explaining why image replacement is not search engine spam but a sound thing to do. Google’s recently updated guidelines need to state more clearly that optimizing for particular user agents is not considered deceitful cloaking per se. That would prevent Danny from stripping (code), not for Matt or Google, but for lurid assclowns producing canards.




Blasting mount email

I’ve moved 5k emails, mostly unread, from my inbox to a “swamped” folder. I hope a couple of new filters will help me avoid such drastic measures in the future. So if I owe you an answer: I apologize, please resend your message. Thanks.



