Archived posts from the 'Google' Category

Buying cheap viagra algorithmically

Since Google can't manage to clean up [Buy cheap viagra], let's do it ourselves. Go seek a somewhat trusted search blog mentioning "buy cheap viagra" somewhere in the archives and link to the post with a slightly diversified anchor text like "how to buy cheap viagra online". Matt deserves a #1 spot by the way, so spread many links …

Then, when Matt is annoyed enough and Google has kicked the unrelated stuff out of this search, hopefully my viagra spam will rank as deserved again ;)

Update a few hours later: Matt ranks #1 for [buy cheap viagra algorithmically]:
[Screenshot: Matt Cutts's first spot for [buy cheap viagra algorithmically]]
His ranking for [buy cheap viagra] fell about 10 positions to #17, but for [buy cheap viagra online] he's still on the first SERP, now at position #10 (#3 yesterday). Interesting. It seems that Google's newish turbo-blog-indexing influences the rankings of pages linked from blog posts rather quickly, but the effect isn't exactly long lasting.

Related posts:
Negative SEO At Work: Buying Cheap Viagra From Google’s Very Own Matt Cutts - Unless You Prefer Reddit? Or Topix? by Fantomaster
Trust + keywords + link = Good ranking (or: How Matt Cutts got ranked for “Buy Cheap Viagra”) by Wiep




Getting the most out of Google’s 404 stats

The 404 reports in Google's Webmaster Central panel are great for debugging your site, but they also contain URLs generated by invalid or truncated URL drops and typos by other Webmasters. Are you sick of wasting the link love from invalid inbound links, just because you lack a suitable procedure to 301-redirect all these 404 errors to canonical URLs?

Your pain ends here. At least when you're on a *ix server running Apache with PHP 4 or 5 and .htaccess enabled. (If you suffer from IIS, go find another hobby.)

I've developed a tool which grabs all 404 requests, letting you map a canonical URL to each 404 error. The tool captures and records 404s, and you can add invalid URLs from Google's 404 reports if these haven't been recorded (yet) from requests by Ms. Googlebot.

It's kind of a layer between your standard 404 handling and your error page. If a request results in a 404 error, your .htaccess calls the tool instead of the error page. If you've assigned a canonical URL to an invalid URL, the tool 301-redirects the request to the canonical URL. Otherwise it sends a 404 header and outputs your standard 404 error page. Google's 404-probe requests during the Webmaster Tools verification procedure are unredirectable (is this a word?).

Besides 1:1 mappings of invalid URLs to canonical URLs, you can assign keywords to canonical URLs. For example, you can define that all invalid requests go to /fruit when the requested URI or the HTTP referrer (usually a SERP) contains the strings "apple", "orange", "banana" or "strawberry". If there's no persistent mapping, these requests get 302-redirected to the guessed canonical URL, so you should review the redirect log frequently to find invalid URLs which deserve a persistent 301-redirect.

Then there are tons of bogus requests resulting in 404 errors, from spambots searching for exploits or whatever, or from hotlinkers, where it makes no sense to maintain URL mappings. Just update an ignore list to make sure those get 301-redirected to example.com/goFuckYourself, or to a cruel and scary image hosted on your domain or on a free host of your choice.

Everything not matching a persistent redirect rule or an expression ends up in a 404 response as before, but it gets logged so that you can define a mapping to a canonical URL. You can also use this tool when you plan to change (a lot of) URLs: it can 301-redirect the old URLs to the new ones without adding all of them to your .htaccess file.
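To illustrate the flow, here's a minimal sketch along the lines described above (not the actual tool): all file names, arrays and log destinations below are made up, and details like the flat file storage and the Webmaster Tools verification probe exception are left out.

<?php
// 404handler.php - invoked by Apache via a line like this in .htaccess:
//   ErrorDocument 404 /404handler.php

$requestUri = $_SERVER['REQUEST_URI'];
$referrer   = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

// Persistent 1:1 mappings of invalid URLs to canonical URLs (301).
$mappings = array(
    '/old-fruit-page.html' => '/fruit',
);

// Keyword rules: if the URI or the referrer contains a keyword, guess a canonical URL (302).
$keywords = array(
    'apple'      => '/fruit',
    'orange'     => '/fruit',
    'banana'     => '/fruit',
    'strawberry' => '/fruit',
);

// Bogus requests (exploit scans, hotlinkers ...) get dumped elsewhere (301).
$ignorePatterns = array('phpmyadmin', 'wp-login', '.dll');

function redirectTo($url, $code) {
    header('Location: ' . $url, true, $code);
    exit;
}

if (isset($mappings[$requestUri])) {
    redirectTo($mappings[$requestUri], 301);
}

foreach ($ignorePatterns as $pattern) {
    if (stripos($requestUri, $pattern) !== false) {
        redirectTo('/goFuckYourself', 301);
    }
}

foreach ($keywords as $keyword => $canonicalUrl) {
    if (stripos($requestUri, $keyword) !== false || stripos($referrer, $keyword) !== false) {
        // Log the guess so it can be promoted to a persistent 301 later.
        error_log(date('c') . " 302 guess: $requestUri -> $canonicalUrl\n", 3, 'redirect.log');
        redirectTo($canonicalUrl, 302);
    }
}

// Nothing matched: log the 404 and serve the standard error page.
error_log(date('c') . " 404: $requestUri (referrer: $referrer)\n", 3, '404.log');
header('HTTP/1.1 404 Not Found');
include '404page.html';

Promoting a logged 302 guess to a persistent 301 then simply means adding the invalid URL to the mappings; the real tool does that through its UI and flat files.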

I've tested this tool for a while on a couple of smaller sites, and I think it can be trained to run smoothly without too many edits once the ignore lists and so on are up to date, that is, matching the site's requirements. A couple of friends got the script and they will provide useful input. Thanks! If you'd like to join the BETA test, drop me a message.

Disclaimer: All data get stored in flat files. For large sites we'd need to change that to a database. The UI sucks; I mean it's usable, but it comes with the browser's default fonts and all that. IOW, the current version is still at the "proof of concept" stage. But it works just fine ;)




Google helps those who help themselves

And if that's not enough to survive on Google's SERPs, try Google's Webmaster Forum, where you can study Adam Lasnik's FAQ, which covers even questions the Webmaster Help Center provides no comprehensive answer for (yet), and where Googlers from Google's Search Quality, Webspam, and Webmaster Central teams hang out. Google dumps all sorts of questioners into the forum, where a crowd of hardcore volunteers (aka "regulars", as Google calls them) invests a lot of time helping out Webmasters and site owners facing problems with the almighty Google.

Despite the sporadic posts by Googlers, the backbone of Google's Webmaster support channel is this crew of regulars from all around the globe. Google monitors the forum for input and trends, and intervenes when the periodic scandal escalates every once in a while. Apropos scandals … although the list of top posters mentions a few of the regulars, bear in mind that trolls come with a disgustingly high posting cadence. Fortunately, the signal currently drowns out the noise (again), and I very much appreciate that the Googlers participate more and more.

Some of the regulars like seo101 don’t reveal their URLs and stay anonymous. So here is an incomplete list of folks giving good advice:

If I’ve missed anyone, please drop me a line (I stole the list above from JLH and Red Cardinal, so it’s all their fault!).

So when you're a Webmaster or site owner, don't hesitate to post your Google related question (but read the FAQ before posting, and search for your topic first); chances are one of these regulars or even a Googler will offer assistance. If, on the other hand, you're questionless but carry a swag of valuable answers, join the group and share your knowledge. Finally, if you're a Googler, donate a boost on the SERPs to the sites linked above ;)

Micro-meme started by John Honeck, supported by Richard Hearne and Bert Vierstra.




Why eBay and Wikipedia rule Google’s SERPs

It's hard to find an obscure search query like [artificial link] which doesn't deliver eBay spam or a Wikipedia stub within the first few results at Google. Although both Wikipedia and eBay are large sites, the Web is huge, so two such different sites shouldn't dominate the SERPs for that many topics. Hence it's safe to say that many nicely ranked search results at Googledia, pulled from eBaydia, are plain artificially positioned non-results.

Curious why my beloved search engine fails so badly, I borrowed a Google-savvy spy from GHN and sent him to Mountain View to uncover the eBaydia ranking secrets. He came back with lots of pay-dirt scraped from DVDs in the safe of building 43. Before I sold Google’s ranking algo to Ask (the price Yahoo! and MSN offered was laughable), I figured out why Googledia prefers eBaydia from comments in the source code. Here is the unbelievable story of a miserable failure:

When Yahoo! launched Mindset, Larry Page and Sergey Brin threw chairs out of anger because Google wasn't able to accomplish such a simple task. The engineers, eager to fulfill their founders' wishes asap, tried to integrate Mindset-like functionality without changing Google's fascinatingly simple search interface (that means without a shopping/research slider). Personalized search still lived in the labs, but provided a somewhat suitable API (mega beta): scanSearchersBrainForContext([search query]). Not knowing that this function of personalized search polls a nano-bugging-device (pre alpha) which Google had not yet released, let alone implanted into any searcher's brain at this time, they made use of that piece of experimental code to evaluate the search query's context. Since the method always returned "false", but they had to deliver results quickly, they made up some return values to test their algo tweaks:

/* debug - praying S&L don't throw more chairs */
if (scanSearchersBrainForContext($searchQuery) === false) {
    $contextShopping = "%ebay%";
    $contextResearch = "%wikipedia%";
    $context = both($contextShopping, $contextResearch);
}
else {
    /* [pretty complex algo] */
}

This worked fine and, under time pressure, found its way into the ranking algo. The result is that for each and every search query where a page from eBay and/or Wikipedia is in the raw result set, those pages get a ranking boost. Sergey was happy because eBay is generally listed on page #1, and Larry likes the Wikipedia results on the first SERP. Tell me, why the heck should the engineers comment out these made-up return values? No engineer on this planet likes flying chairs, especially not in his office.

PS: Some SEOs push Wikipedia stubs too.




Who is responsible for the paid link mess?

Look at this graph showing the number of [buy link] searches since 2004:

Interestingly this search term starts out in September or October 2004, and shows a quite stable trend until the recent paid links debate started.

Who or what caused SEOs to massively buy links since 2004?

  • The Playboy interview with Google cofounders Larry Page and Sergey Brin just before Google was about to go public?
  • Google’s IPO?
  • Rumors that Google ran out of index space and therefore might restrict the number of doorway pages in the search index?
  • Nick Wilson preparing the launch of Threadwatch?
  • AdWords and Overture no longer running gambling ads?
  • The Internet Advancement scandal?
  • Google’s shortage of beer at the SES Google dance?
  • A couple of UK-based SEOs inventing bought organic rankings?

Seriously, buying links for rankings was an established practice way before 2004. If you know the answer, or if you’ve a somewhat plausible theory, leave it in the comments. I’m really curious. Thanks.




Google assists SERP Click-Through Optimization

Big Mama Google, in her ongoing campaign to keep her search index clean, assists Webmasters with reports allowing click-through optimization of a dozen or so pages per Web site. Google launched these reports a while ago, but most Webmasters didn't make the best use of them. Now that Vanessa has revealed her SEO secrets, let's discuss why and how Google helps increase, improve, and target search engine traffic.

Google is not interested in gazillions of pages which rank high for (obscure) search terms but don’t get clicked from the SERPs. This clutter tortures the crawler and indexer, and it wastes expensive resources the query engine could use to deliver better results to the searchers.

Unfortunately, legions of clueless SEOs work hard to increase Mount Clutter by providing their clients with weekly ranking reports, which leads to even more pages that rank for (potentially money making) search phrases but appear on the SERPs with such crappy titles and snippets that not even a searcher with an IQ slightly below a slice of bread clicks them.

High rankings don't pay the bills; converting traffic from SERPs, on the other hand, does. A nicely ranking page is an asset which in most cases just needs a few minor tweaks to attract search engine users (Mount Clutter contains machine generated cookie-cutter pages too, but that's a completely different story).

For example, unattended pages gaining their SERP position from the anchor text of links pointing to them often have a crappy click-through rate (CTR). Say you've got a page about a particular aspect of green widgets which applies to widgets of all colors. For some reason folks preferring red widgets like your piece and link to it with "red widgets" as anchor text. The page will rank fine for [red widgets], but since "red widgets" is not mentioned on the page, this keyword phrase doesn't appear in the SERP snippet, let alone the linked title. Search engine users seeking information on red widgets don't click the link about green widgets, although it might be the best matching search result.

So here is the click-through optimization process based on Google's query stats (it doesn't work with brand new sites or more or less unindexed sites, because the data provided in Google's Webmaster Tools are available, reliable and quite accurate for somewhat established sites only):

Log in, choose a site and go to the query stats. In an ideal world you'll see two tables of rather identical keyword lists (all examples made up).

Top search queries       Avg. Pos. | Top SERP clicks         Avg. Pos.
1. web site design           5     | 1. web site design          4
2. google consulting         4     | 2. seo consulting           5
3. seo consulting            3     | 3. google consulting        2
4. web site structures       2     | 4. internal links           3
5. internal linkage          1     | 5. web site structure       3
6. crawlability              3     | 6. crawlability             5

The "Top search queries" table on the left shows positions for search phrases on the SERPs, regardless of whether these pages got clicks or not. The "Top SERP clicks" table on the right shows which search terms got clicked most, and where the landing pages were positioned on their SERPs. If good keywords appear in the left table but not in the right one, you have CTR optimization potential.
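As a toy illustration of that comparison (nothing Google provides, just a quick sketch with the made-up keyword lists from the table above pasted into two arrays):

<?php
// Keywords that rank (left table) but rarely get clicked (right table)
// are the CTR optimization candidates.
$topSearchQueries = array('web site design', 'google consulting', 'seo consulting',
                          'web site structures', 'internal linkage', 'crawlability');
$topSerpClicks    = array('web site design', 'seo consulting', 'google consulting',
                          'internal links', 'web site structure', 'crawlability');

$ctrCandidates = array_diff($topSearchQueries, $topSerpClicks);
print_r($ctrCandidates); // "web site structures", "internal linkage"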

The "average top position" might differ from today's SERPs, and it might differ for particular keywords even if those appear in the same line in both tables. Positioning fluctuation depends on a couple of factors. First, the position is recorded at the run time of each search query during the last 7 days, and within seven days a page can jump up and down on the SERPs. Second, positioning on, for example, UK SERPs can differ from US SERPs, so an average 3rd position may be an utterly useless value when a page ranks #1 in the UK and gets a fair amount of traffic from UK SERPs, but ranks #8 on US SERPs where searchers don't click it because the page is about a local event near Loch Nowhere in the Highlands. Hence refine the reports by selecting your target markets in "location", and if necessary "search type" too. Third, if these stats are based on very few searches and even fewer click-throughs, they are totally and utterly useless for optimization purposes.

Let's say you've got a site with a fair amount of Google search engine traffic; the next step is identifying the landing pages involved (you get only 20 search queries, so the report covers only a fraction of your site's pages). Pull these data from your referrer stats, or extract SERP referrers from your logs, to create a crosstab of search terms from Google's reports per landing page. Although the click data are from Google's SERPs, it might make sense to do this job with a broader scope, that is, including referrers from all major search engines.
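Here's a rough sketch of that extraction step, assuming a combined-format Apache access log named access.log; the Google-only referrer filter and the q= parameter handling are assumptions you'd adapt to your own logs and target engines:

<?php
// Build a crosstab: landing page => array(search term => click count).
$crosstab = array();
$log = fopen('access.log', 'r');
while (($line = fgets($log)) !== false) {
    // Combined log format: ... "GET /path HTTP/1.1" status bytes "referrer" "user-agent"
    if (!preg_match('/"(?:GET|POST) (\S+) [^"]*" \d+ \S+ "([^"]*)"/', $line, $m)) {
        continue;
    }
    $landingPage = $m[1];
    $referrer    = $m[2];
    // Keep only referrers that look like Google SERPs.
    if (!preg_match('#^https?://(www\.)?google\.[^/]+/search\?#i', $referrer)) {
        continue;
    }
    $query = parse_url($referrer, PHP_URL_QUERY);
    if (!$query) {
        continue;
    }
    parse_str($query, $params);
    if (empty($params['q'])) {
        continue;
    }
    $term = strtolower($params['q']);
    if (!isset($crosstab[$landingPage][$term])) {
        $crosstab[$landingPage][$term] = 0;
    }
    $crosstab[$landingPage][$term]++;
}
fclose($log);
print_r($crosstab);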

Now perform the searches for your 20 keyword phrases (just click on the keywords in the report) to check how your pages look on the SERPs. If particular landing pages trigger search results for more than one search term, extract them all. Then load your landing page and view its source. Read your page first as rendered in your browser, then check out semantic hints in the source code, for example ALT or TITLE text and stuff like that. Look at the anchor text of incoming links (you can use link stats and anchor text stats from Google, We Build Pages Tools, …) and other ranking factors to understand why Google thinks this page is a good match for the search term. For each page, let the information sink in before you change anything.

If the page is not exactly a traffic generator for other targeted keywords, you can optimize it with regard to a better CTR for the keyword(s) it ranks for. Basically that means using the keyword(s) naturally in all page areas where it makes sense, and providing each occurrence with a context which hopefully makes it into the SERP snippet.

Make up a few natural sentences a searcher might have in mind when searching for your keyword(s). Write them down. Order them by their ability to fit the current page text in a natural way. Bear in mind that with personalized search Google could have scanned the searcher's brain to add different contexts to the search query, so don't concentrate too much on the keyword phrase alone, but on short sentences containing both the keyword(s), or their synonyms, and a sensible context as well.

There is no magic number like "use the keywords 5 times to get a #3 spot" or "7 occurrences of a keyword gain you a #1 ranking". Optimal keyword density is a myth, so just apply common sense and don't annoy human readers. One readable sentence containing the keyword(s) might suffice. Also, emphasizing keywords (EM/I, STRONG/B, eye catching colors …) makes sense because it helps catch the attention of scanning visitors, but don't over-emphasize because that looks crappy. The same goes for H2/H3/… headings. Structure your copy, but don't write in headlines. When you emphasize a word or phrase in (bold) red, don't do it everywhere but only in the most important sentence(s) of your page, and preferably only on the first visible screen of a longer page.

Work in your keyword+context laden sentences, but, again, do it in a natural way. You're writing for humans, not for algos which at this point already know what your page is all about and rank it properly. If your fine tuning gains you a better ranking, that's fine, but the goal is catching the attention of searchers reading (in most cases just skimming) your page title and a machine generated snippet on a search result page. Convince the algo to use your inserted sentence(s) in the snippet, not keyword lists from navigation elements or the like.

Write a sensible summary of the page's content, not more than 200-250 characters, and put it into the description meta tag. Do not copy the first paragraph or other text from the page; write the summary from scratch instead, and mention the targeted keyword(s). The first paragraph on the page can exceed the length of the meta description to deliver an overview of the page's message, and it should provide the same information, preferably in the first sentence, but don't make it too long.

Check the TITLE tag in HEAD: if it is truncated on the SERP, shorten it so that the keyword becomes visible, perhaps move the keyword(s) to the beginning, or create a neat page title around the keyword(s). Make title changes very carefully, because the title is an important ranking factor and your changes could result in a ranking drop. Some CMSs change the URL without notice when the title text changes, and you certainly don't want to touch the URL at this point.

Make sure that the page title appears on the page too. Putting the TITLE tag's content (or a slight variation) in an H1 element in BODY cannot hurt. If for some weird reason you don't use H-elements, then at least format it prominently (bold, a different color but not red, a bigger font size …).

If the page performs nicely for a couple of money terms and just has a crappy CTR for a particular keyword it ranks for, you can simply add a link pointing to a (new) page optimized for that keyword(s), with the keyword(s) in the anchor text, preferably embedded in a readable sentence within the content (long enough to fill two lines under the linked title on the SERP), to improve the snippet. Adding a (prominent) link to a related topic should not impact rankings for other keywords too much, but the keywords submitted by searchers should appear in the snippet shortly after the next crawl. In such cases it's better not to change the title, at least not now. If the page gained its ranking solely from the anchor text of inbound links, putting the search term on the page can give it a nice boost.

Make sure you get an alert when Ms. Googlebot fetches the changed pages, and check out the SERPs and Google’s click stats a few days later. After a while you’ll get a pretty good idea of how Google creates snippets, and which snippets perform best on the SERPs. Repeat until success.

Related posts:
Google Quality Scores for Natural Search Optimization by Chris Silver Smith
Improve SERP-snippets by providing a good meta description tag by Raj Krishnan from Google’s Snippets Team




Playing with Google Translate (still beta)

I use translation tools quite often, so after reading Google’s Udi Manber - Search is a Hard Problem I just had to look at Google Translate again.

Under Text and Web it offers the somewhat rough translations available from the toolbar and links on SERPs. Usually I use that feature only with languages I don't speak, to get an idea of the rough meaning, because the offered translation is, well, rough. Here's an example. Translating "Don't make a fool of yourself" to German gives "einen Dummkopf nicht von selbst bilden". That means "not forming a dullard of its own volition", but Google's reverse translation, "a fool automatically do not educate", is even funnier.

Having at least rudimentary skills in a foreign language really helps when reading Google's automated translations. Quite often the translation is just not understandable without knowledge of the other language's grammar and idiosyncrasies. For example, my French is a bit rusty, so translating Le Monde to English leads to understandable text I can read way faster than the original. Italian to English is another story (my Italian should be considered "just enough for tourists"); for example, the front page of la Repubblica is, partly due to the summarizing language, hard to read in Google's English translation. Translated articles, on the other hand, are rather understandable.

By the way, the quality of translated news, technical writing or academic papers is much better than rough translations of everyday language, so better don't try to get any sense out of translated forum posts and stuff like that. Probably that's caused by the lack of trusted translations of these sources, which are necessary to train Google's algos.

Google Translate fails miserably sometimes. Although Arabic-English is labelled "BETA", it cannot translate even a single word from the most important source of news in Arabic, Al Jazeera - it just delivers a copy of the Arabic home page. Ok, that's a joke, all the Arabic text is provided as images. Translations of Al Jazeera's articles are terrific, way better than any automated translation from or to European languages I've ever seen. Comparing Google's translation of the Beijing Review to the English edition makes no sense due to sync issues, but the automated translation looks great, even the headlines make sense (semantically, not in their meanings - but what do I know, I'm not a Stalinistic commie killing and jailing dissidents for practicing human rights like the freedom of speech).

On the second tab Google translates search results; that's a neat way to research resources in other languages. You can submit a question in English, Google translates it on the fly to the other language, queries the search index with the translated search term and delivers a bilingual search result page, English in the left column and the foreign language on the right side. I don't like that the page titles are truncated, and the snippets are way too short to make sense in most cases. However, it is darn useful. Let's test how Google translates her own pamphlets:

A search in English for [Google Webmaster guidelines] on German pages delivers understandable results. The second search result, "Der Ankauf von Links mit der Absicht, die Rangfolge einer Website zu verbessern, ist ein Verstoß gegen die Richtlinien für Webmaster von Google", gets translated to "The purchase from left with the intention of improving the order of rank of a Website is an offence against the guidelines for Web master of Google". Here it comes straight from the horse's mouth: Google's very own Webmasters must not sell links in the left sidebar of pages on Google.com. I'm not a Webmaster at Google, so in my book that means I can remove the crappy nofollow from tons of links as long as I move them to the left sidebar. (Seriously, the German noun for "link" is "Verbindung", respectively "Verweis", which both have tons of other meanings besides "hyperlink", so everybody in Germany uses "Link" and the plural "Links", but "links" also means "left", and Google's translator ignores capitalization as well as anglicisms. The German translation of "Google's guidelines for Webmasters" as "Richtlinien für Webmaster von Google" is quite hapless by the way. It should read "Googles Richtlinien für Webmaster", because "Webmaster von Google" really means "Webmasters of Google", which is (in German) a synonym for "Google's [own] Webmasters".)

An extended search like [Google quality guidelines hidden links] for all sorts of terms from the guidelines like "hidden text", "cloaking", "doorway page" (BTW, why is the page type described as "doorway page" in reality a "hallway page", why doesn't Google explain the characteristics of deceitful doorway pages, and why doesn't Google explain that most (not machine generated) doorway pages are perfectly legit landing pages?), "sneaky redirects" and many more did not deliver a single page from google.de on the first SERP. No wonder German Internet marketers are the worst spammers on earth when Google doesn't tell them which particular techniques they should avoid. Hint for Riona: to improve findability, consider adding these terms untranslated to all foreign language versions of the help system. Hint for Matt: please admit that not each and every doorway page violates Google's guidelines. A well done and compelling doorway page just highlights a particular topic, hence from a Webmaster's as well as from a search engine's perspective that's perfectly legit "relevance bait" (I can resist calling it spider fodder because it really ain't that in particular).

Ok, back to the topic.

I really fell in love with the recently added third tab, Dictionary. This tool beats the pants off Babylon and other word translators when it comes to lookups of single words, but it lacks the reverse functionality provided by these tools, that is, the translation of phrases. And it's Web based, so (for example) a middle mouse click on a word or phrase in any application except my Web browser with Google's toolbar enabled doesn't show the translation. Actually, the quality of one-word lookups is terrific, and when you know how to search you get phrases too. Just play around and get familiar with it; when you've got at least a rudimentary understanding of the other language you'll often get the desired results.

Well, not always. Submitting "schlagen" ("beat") in German-English mode when I search for a phrase like "beats the pants off something" leads to "outmatch" ("übertreffen, (aus dem Felde) schlagen") as best match. In reverse (English-German), "outmatch" is translated to "übertreffen, (aus dem Felde) schlagen" without alternative or supplemental results, but "beat" has tons of German results, unfortunately without "beats the pants off something".

I admit that's unfair; according to the specs, the dictionary thingy is not able to translate phrases (yet). The one-word translations are awesome, I just couldn't resist trying to max it out by translating phrases. Hopefully Google renames "Dictionary" to "Words" and adds a "Phrases" tab soon.




Erol ships patch fixing deindexing of online stores by Google

If you run an Erol driven store and suffer from a loss of Google traffic, or you just want to make sure that your store's content presentation is more compliant with Google's guidelines, then patch your Erol software (*ix hosts / Apache only). For a history of this patch and more information click here.

Tip: Save your /.htaccess file before you publish the store. If it contains statements not related to Erol, then add the code shipped with this patch manually to your local copy of .htaccess and the .htaccess file in the Web host’s root directory. If you can’t see the (new) .htaccess file in your FTP client, then add “-a” to the external file mask. If your FTP client transfers .htaccess in binary mode, then add “.htaccess” to the list of ASCII files in the settings. If you upload .htaccess in binary mode, it may not exactly do what you expect it to accomplish.

I don't know when/if Erol will ship a patch for IIS. (As a side note, I can't imagine a single reason why hosting an online store under Windows could make sense. OTOH there are many reasons to avoid hosting anything keen on search engine traffic on a Windows box.)




Which Sebastian Foss is a spammer?

Obviously pissed by my post Fraud from the desk of Sebastian Foss, Sebastian Foss sent this email to Smart-IT-Consulting.com:

Remove your insults from your blog about my products and sites… as you may know promote-biz.net is not registered to my name or my company.. just look it up in some whois service. This is some spammer who took my software and is now selling it on his spammer websites. Im only selling my programs under their original .com domains and you did not receive any email from me since im only using doube-optin lists.

You may not know it - but insulting persons and spreading lies is under penalty.

Sebastian Foss
Sebastian Foss e-trinity Marketing Inc.
sebastian@etrinity-mail.com

Well, that’s my personal blog, and I’ve a professional opinion about the software Sebastian Foss sells, more on that later. It’s public knowledge that spammers do register domains under several entities to obfuscate their activities. I’m not a fed, and I’m not willing to track down each and every multiple respectively virtual personality of a spammer, so I admit that there’s at least a slight possibility that the Sebastian Foss spamming my inbox from promote-biz.net is not the Sebastian Foss who wrote and sells the software promoted by the email spammer Sebastian Foss. Since I still receive email spam from the desk of Sebastian Foss at promote-biz.net, I think there’s no doubt that this Sebastian Foss is a spammer. Well, Sebastian Foss himself calls him a spammer, and so do I. Confused? So am I. I’ll update my other post to reflect that.

Now that we've covered the legal stuff, let's look at the software from the desk of Sebastian Foss.

  • Blog Blaster claims to submit “ads” to 2,000,000 sites. Translation: Blog Blaster automatically submits promotional comments to 2 million blogs. The common description of this kind of “advertising” is comment spam.
    Sebastian Foss tells us that “Blog Blaster will automatically create thousands of links to your website - which will rank your website in a top 10 position!”. The common description of this link building technique is link spam.
    The sales pitch signed by Sebastian Foss explains: "I used it [Blog Blaster] to promote my other website called ezinebroadcast.com and Blog Blaster produced thousands of links to ezinebroadcast.com - resulting in a #1 position in Google for the term 'ezine advertising service'". So I understand that Sebastian Foss admits that he is a comment spammer and a link spammer.
    I’d like to see the written permissions of 2,000,000 bloggers allowing Sebastian Foss and his customers to spam their blogs: “Advertising using Blog Blaster is 100% SPAM FREE advertising! You will never be accused of spamming. Your ads are submitted to blogs whose owners have agreed to receive your ads.” Laughable, and obviously a lie. Did Sebastian Foss remember that “spreading lies is under penalty”? Take care, Sebastian Foss!
  • Feed Blaster, with a very similar sales pitch, seems set to coin the term feed spam. Also, it seems that FeedBlaster™ is a registered trademark of DigitalGrit Inc. And I don't think that Microsoft, Sun and IBM are happy to spot their logos on Sebastian Foss' site e-trinity Internetmarketing GmbH.
  • The Money License System aka Google Cash Machine seems to slip through a legal loophole. Maybe it's not explicitly illegal to sell software built to trick Google AdWords respectively AdSense, or ClickBank, but using it will result in account terminations and AFAIK legal action too.
  • Instant Booster claims to spam search engines, and it does, according to many reports. The common term applied to those techniques is web spam.

All these domains (and there are countless more sites selling similar scams from the desk of Sebastian Foss) are registered by Sebastian Foss respectively his companies e-trinity Internetmarketing GmbH or e-trinity Marketing Inc.

He’s in the business of newsgroup spam, search engine spam, comment spam … probably there’s no target left out. Searching for Sebastian Foss scam and similar search terms leads to tons of rip-off reports.

He's even too lazy to rephrase his sales pitches: click a few of the links provided above, then search for quoted phrases you saw in every sales pitch to get the big picture. All that may be legal in Germany, I couldn't care less, but it's not legit. Creating and selling software for the sole purpose of spamming makes the software vendor a spammer. And he's proud of it. He openly admits that he uses his software to spam blogs, search engines, newsgroups and whatever. He may make use of affiliates and virtual entities who send out the email spam, perhaps he got screwed by a Chinese copycat selling his software via email spam, but is that relevant when the product itself is spammy?

What do you think, is every instance of Sebastian Foss a spammer? Feel free to vote in the comments.

Update 08/01/2007 Here is the next email from the desk of Sebastian Foss:

Hi,
thanks for the changes on your blog entry - however like i mentioned if you look up the domains which were advertised in the spam mails you will notice that they are not registered to me or my company. You can also see that visiting the sites you will see some guy took my products and is selling them for a lower price on his own websites where he is also copying all of my graphic files. The german police told me that they are receiving spam from your forms and that it goes directly to their trash… however please remove your entries about me from your blog - There is no sense in me selling my own products for a lower price on some cheap, stolen websites - if that would make sense then why do i have my own .com domains for my products ? I just want to make clear that im not sending out any spam mails - please get back to me.

Thanks,
Sebastian

Sebastian Foss
e-trinity Internetmarketing GmbH
sebastian@etrinity-mail.com

It deserves just a short reply:

It makes perfect sense to have an offshore clone in China selling the same outdated and pretty much questionable stuff a little cheaper. This clone can do that because, first, there are next to no costs like taxes and so on, and second, he does it by spamming my inbox on a daily basis, hence he probably sells a lot of the 'borrowed' stuff. Whether or not the multiple Sebastian Fosses are the same natural person is not my problem. I claim nothing, but leave it up to the dear reader's speculation, common sense, and probability calculation.




Blogger abuses rel-nofollow due to ignorance

I had planned a full upgrade of this blog to the newest Blogger version this weekend. The one and only reason for the upgrade was the idea that I could perhaps disable the auto-nofollow functionality in the comments. Well, what I found was a way to dofollow the author's link by editing the <dl id='comments-block'> block, but I couldn't figure out how to disable the auto-nofollow on embedded links.

Considering the hassle of converting all the template hacks into the new format, and the risk of most probably losing the ability to edit code my way, I decided to stick with the old template. It just makes no sense for me to dofollow the author's link when a comment author's links within the content get nofollow'ed automatically. Andy Beard and others will hate me now, so let me explain why I don't move this blog to my own domain using a not-that-insane software like WordPress.

  • I own, or author on, various WordPress blogs. Google's time to index for posts and updates from this blogspot thingy is 2-3 hours (Web search, not blog search). My WordPress blogs, even those with higher PageRank, suffer from a way longer time to index.
  • I can’t afford the time to convert and redirect 150 posts to another blog.
  • I hope that Google/Blogger can implement reasonable change requests (most probably that’s just wishful thinking).

That said, WordPress is way better software than Blogger. I'll have to move this blog if Blogger is not able to fulfill at least my basic needs. I'll explain below why I think that Blogger lacks any understanding of the rel-nofollow semantics. In fact, they throw nofollow crap on everything they can get their hands on. It seems to me that they won't stop jeopardizing the integrity of the Blogosphere (at least where they control the linkage) until they get bashed really hard by a Googler who understands what rel-nofollow is all about. I nominate Matt Cutts, who invented and evolved it, and who does not tolerate BS.

So here is my wishlist. I want (regardless of the template type!)

  • A checkbox “apply rel=nofollow to comment author links”
  • A checkbox “apply rel=nofollow to links within comment text”
  • To edit comments, for example to nofollow links myself, or to remove offensive language
  • A checkbox “apply rel=nofollow to links to label/search pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to label/search pages”
  • A checkbox “apply rel=nofollow to links to archive pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to archive pages”
  • A checkbox “apply rel=nofollow to backlink listings”

As for the comments functionality, I’d understand when these options get disabled when comment moderation is set to off.

And here are the nofollow-bullshit examples.

  • When comment moderation and captchas are activated, why are comment author links as well as links within the comments nofollow'ed? Does Blogger think their bloggers are minor retards? I mean, when I approve a comment, I do vouch for it. But wait! I can't edit the comment, so a low-life link might slip through. Ok, then let me edit the comments.
  • When I've submitted a comment, the link back to the post is nofollow'ed [screenshot: Nofollow insane II]. This page belongs to the blog, so why the fudge does Blogger nofollow navigational links? And if it makes sense for a weird reason not understandable by a simple Webmaster like me, why is the link to the blog's main page, as well as the link to the post one line below, not nofollow'ed? Linking to the same URL with and without rel-nofollow on the same page deserves a bullshit award.
  • On my dashboard, Blogger features a few blogs as "Blogs Of Note", all links nofollow'ed [screenshot: Nofollow insane III (dashboard)]. These are blogs recommended by the Blogger crew. That means they have reviewed them and the links are clearly editorial content. They're proud of it: "we've done a pretty good job of publishing a new one each day". Blogger's very own Blogs Of Note blog does not nofollow the links, and that's correct.

    So why the heck are these recommended blogs nofollow'ed on the dashboard? [screenshot: Nofollow insane III (blogspot)]

  • Blogger inserted robots meta tags “nofollow,noindex” on each and every blog hosted outside the controlled blogspot.com domain earlier this year.
  • Blogger inserted robots meta tags “nofollow,noindex” on Google blogs a few days ago.

If Blogger's recommendation "Check google.com. (Also good for searching.)" is an honest one, why don't they invest a few minutes to educate themselves on rel-nofollow? I mean, it's a Google block/avoid-indexing/ranking thingy they use to prevent Google.com users from finding valuable content hosted on their own domains. And they annoy me. And they insult their users. They shouldn't do that. That's not smart. That's not Google-ish.



