Letting friends know you read their stuff

With various social tools and gadgets there are tons of opportunities to publicly or privately show that you follow your friends. I can digg my friends' articles or bookmark them at del.icio.us, I can link to their posts by sharing them in Google Reader, or, after reading their posts in my preferred feed reader, I can click through to the blog just to push my red crab avatar to the top of their MyBlogLog and BUMPzee widgets.

All that comes with common hassles. I want to use these social gadgets and services without jumping through unintended hoops; that is, I consider all the methods mentioned above for telling friends that I still love them a diversion of those services from their intended use. Also, not every friend of mine makes use of all these geeky tools, so I need to digg posts by A., bookmark articles by B. at del.icio.us, share posts by C., and visit the blogs of D., E. and F. just to show that I've read their stuff in my feed reader.

I can't do that, at least not in a reliable manner, especially not when I'm swamped and just trying to catch up after 12 or more hours of dealing with legacy applications or other painful tasks like meetings with wannabe-geeks (inexperienced controllers or chiefs of whichever-useless-service-center) or anti-geeks (know-it-all but utterly-clueless and dangerous-to-the-company's-safety IT managers). Doh!

So when I'm not able to send my friends a twitter-great-job-message or IM, and don't have the time to link to their stuff, should I feel bad? Probably. Penalties are well deserved. Actually, the consequence is that nice guys like Nick Wilson @Metaversed unfriend me (among other well-meaning followers) at Twitter because "I didn't provide useful input for a while", not knowing that I follow them with interest, read their posts and all that, but just can't contribute at the moment because their current field of interest doesn't match my time schedule, my today's-hot-topic-list, or my current centre of gravity, so to speak. That doesn't mean I'm not interested in whatever they do and put out; I just can't process it ATM, but I know that'll change at some point in the future. Hey, geeks usually hop from today's hot thing to tomorrow's hot thing, and flashbacks are rather natural, so why expect continuity?

Bugger, I wrote four paragraphs and didn't get to the point the post's title promises. And I've bored you dear readers with lots of title bait recently. Sorry, but I did enjoy it. Ok, here's the message:

Everybody monitors referrer stats. Don't say you don't, because that's first a lie and second a natural thing to do. That applies to ego searches too, by the way. So why don't we make use of referrer spoofing to send a signal to our friends? It's easy: just add the referrer-spoofing widget to your PrefBar, enter your URL, and surf on. Well, technically that's referrer spamming, so if you wear a tinfoil hat use a non-indexable server like example.com. I'm currently surfing with the HTTP_REFERER "http://www.example.com/gofuckyourself" but I'm going to change that to this blog's URL. Funny folks visiting my blog provide bogus referrers like "http://spamteam.google.com/" and "http://corp.google.com:8080/webspam/watchlist.py", so why the fuck shouldn't I use my actual address? This will tell my friends that I still love them. And real geeks shouldn't expect unforged referrer stats anyway, since many nice guys surf without spamming the server logs with a referrer at all.
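If you'd rather script it than click through a toolbar, any HTTP client can send the same signal. Here's a minimal sketch using Python's requests library; both URLs are placeholders, so substitute your blog and your friend's post:

    # Minimal sketch: fetch a friend's post while announcing your own
    # blog as the referrer, so your URL lands in their referrer stats.
    # Both URLs are placeholders - substitute your own.
    import requests

    MY_BLOG = "http://myblog.example.com/"  # the URL you want them to see

    def visit(url):
        # "Referer" (with the canonical misspelling) is the only thing
        # a referrer stats widget ever sees.
        resp = requests.get(url, headers={"Referer": MY_BLOG}, timeout=10)
        return resp.status_code

    print(visit("http://friends-blog.example.com/great-post"))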

What do you think?




Google assists SERP Click-Through Optimization

Big Mama Google, in her ongoing campaign to keep her search index clean, assists Webmasters with reports allowing click-through optimization of a dozen or so pages per Web site. Google launched these reports a while ago, but most Webmasters didn't make the best use of them. Now that Vanessa has revealed her SEO secrets, let's discuss why and how Google helps increase, improve, and target search engine traffic.

Google is not interested in gazillions of pages which rank high for (obscure) search terms but don’t get clicked from the SERPs. This clutter tortures the crawler and indexer, and it wastes expensive resources the query engine could use to deliver better results to the searchers.

Unfortunately, legions of clueless SEOs work hard to grow Mount Clutter by providing their clients with weekly ranking reports, which leads to even more pages that rank for (potentially money making) search phrases but appear on the SERPs with such crappy titles and snippets that not even a searcher with an IQ slightly below that of a slice of bread clicks them.

High rankings don't pay the bills; converting traffic from SERPs, on the other hand, does. A nicely ranking page is an asset that in most cases needs just a few minor tweaks to attract search engine users (Mount Clutter contains machine generated cookie-cutter pages too, but that's a completely different story).

For example, unattended pages that gain their SERP position from the anchor text of links pointing to them often have a crappy click-through rate (CTR). Say you have a page about a particular aspect of green widgets which applies to widgets of all colors. For some reason folks preferring red widgets like your piece and link to it with "red widgets" as anchor text. The page will rank fine for [red widgets], but since "red widgets" is not mentioned on the page, this keyword phrase doesn't appear in the SERP snippet, let alone the linked title. Search engine users seeking information on red widgets don't click the link about green widgets, although it might be the best matching search result.

So here is the click-through optimization process based on Google's query stats (it doesn't work with brand-new or more or less unindexed sites, because the data provided in Google's Webmaster Tools are available, reliable, and reasonably accurate only for somewhat established sites):

Log in, choose a site, and go to the query stats. In an ideal world you'll see two tables with nearly identical keyword lists (all examples made up).

Top search queries       Avg. Pos.   |   Top SERP clicks         Avg. Pos.
1. web site design           5       |   1. web site design          4
2. google consulting         4       |   2. seo consulting           5
3. seo consulting            3       |   3. google consulting        2
4. web site structures       2       |   4. internal links           3
5. internal linkage          1       |   5. web site structure       3
6. crawlability              3       |   6. crawlability             5

The "Top search queries" table on the left shows positions for search phrases on the SERPs, regardless of whether the listed pages got clicks or not. The "Top SERP clicks" table on the right shows which search terms got clicked most, and where the landing pages were positioned on their SERPs. If good keywords appear in the left table but not in the right one, you've got CTR optimization potential.
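If you'd rather not eyeball the report, a trivial set difference spots those gaps. A sketch using the made-up keywords from the table above; there's no API for these reports, so the lists are pasted in by hand:

    # Keywords that rank (left table) but don't show up among the top
    # clicked queries (right table) are CTR optimization candidates.
    top_queries = {"web site design", "google consulting", "seo consulting",
                   "web site structures", "internal linkage", "crawlability"}
    top_clicks = {"web site design", "seo consulting", "google consulting",
                  "internal links", "web site structure", "crawlability"}

    for phrase in sorted(top_queries - top_clicks):
        print("ranks but rarely gets clicked:", phrase)
    # Prints "internal linkage" and "web site structures" - near-misses
    # like "web site structure" vs. "web site structures" still need a
    # human eye, so treat the output as a hint list, not a verdict.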

The "average top position" might differ from today's SERPs, and it might differ for particular keywords even if those appear in the same line of both tables. Positioning fluctuation depends on a couple of factors. First, the position is recorded at the run time of each search query during the last 7 days, and within seven days a page can jump up and down on the SERPs. Second, positioning on, for example, UK SERPs can differ from US SERPs, so an average 3rd position may be an utterly useless value when a page ranks #1 in the UK and gets a fair amount of traffic from UK SERPs, but ranks #8 on US SERPs where searchers don't click it because the page is about a local event near Loch Nowhere in the Highlands. Hence refine the reports by selecting your target markets in "location", and if necessary "search type" too. Third, if these stats are generated from very few searches and even fewer click-throughs, they are totally and utterly useless for optimization purposes.

Let's say you've got a site with a fair amount of Google search engine traffic; the next step is identifying the landing pages involved (you get only 20 search queries, so the report covers only a fraction of your site's pages). Pull these data from your referrer stats, or extract SERP referrers from your logs to create a crosstab of search terms from Google's reports per landing page. Although the click data are from Google's SERPs, it might make sense to do this job with a broader scope, that is, including referrers from all major search engines.
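Here's one way to sketch that crosstab from a raw access log in Python. The log path, the combined log format, and the q= parameter are assumptions that fit Google-style SERP referrers; extend the netloc check for other engines:

    # Sketch: count (search term, landing page) pairs from SERP
    # referrers found in a combined-format Apache access log.
    import re
    from collections import Counter
    from urllib.parse import urlparse, parse_qs

    LOG_LINE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+" \d+ \S+ "(?P<ref>[^"]*)"')
    crosstab = Counter()

    with open("access.log", encoding="utf-8", errors="replace") as log:
        for line in log:
            m = LOG_LINE.search(line)
            if not m:
                continue
            ref = urlparse(m.group("ref"))
            if "google." not in ref.netloc:  # broaden this for other engines
                continue
            term = parse_qs(ref.query).get("q", [""])[0]
            if term:
                crosstab[(term.lower(), m.group("path"))] += 1

    for (term, page), hits in crosstab.most_common(20):
        print(f"{hits:5}  {term:<30}  {page}")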

Now perform the searches for your 20 keyword phrases (just click on the keywords in the report) to check how your pages look on the SERPs. If particular landing pages trigger search results for more than one search term, extract them all. Then load each landing page and view its source. Read the page first as rendered in your browser, then check out semantic hints in the source code, for example ALT or TITLE text and stuff like that. Look at the anchor text of incoming links (you can use link stats and anchor text stats from Google, We Build Pages Tools, …) and other ranking factors to understand why Google thinks this page is a good match for the search term. For each page, let the information sink in before you change anything.

If the page is not exactly a traffic generator for other targeted keywords, you can optimize it with regard to a better CTR for the keyword(s) it ranks for. Basically that means using the keyword(s) naturally in all page areas where it makes sense, and providing each occurrence with a context which hopefully makes it into the SERP snippet.

Make up a few natural sentences a searcher might have in mind when searching for your keyword(s). Write them down. Order them by their ability to fit the current page text in a natural way. Bear in mind that with personalized search Google could have scanned the searcher's brain to add different contexts to the search query, so don't concentrate too much on the keyword phrase alone, but on short sentences containing both the keyword(s), or their synonyms, and a sensible context as well.

There is no magic number like "use the keywords 5 times to get a #3 spot" or "7 occurrences of a keyword gain you a #1 ranking". Optimal keyword density is a myth, so just apply common sense and don't annoy human readers. One readable sentence containing the keyword(s) might suffice. Also, emphasizing keywords (EM/I, STRONG/B, eye catching colors …) makes sense because it helps catch the attention of scanning visitors, but don't over-emphasize, because that looks crappy. The same goes for H2/H3/… headings. Structure your copy, but don't write in headlines. When you emphasize a word or phrase in (bold) red, don't do it everywhere but only in the most important sentence(s) of your page, and better only on the first visible screen of a longer page.

Work in your keyword+context laden sentences, but - again! - do it in a natural way. You're writing for humans, not for algos which at this point already know what your page is all about and rank it properly. If your fine-tuning gains you a better ranking that's fine, but the goal is catching the attention of searchers reading (in most cases just skimming) your page title and a machine generated snippet on a search result page. Convince the algo to use your inserted sentence(s) in the snippet, not keyword lists from navigation elements or the like.

Write a sensible summary of the page's content, not more than 200-250 characters, and put it into the description meta tag. Do not copy the first paragraph or other text from the page; write the summary from scratch instead, and mention the targeted keyword(s). The first paragraph on the page can exceed the meta description's length to deliver an overview of the page's message, and it should provide the same information, preferably in its first sentence, but don't let it get longish.

Check the TITLE tag in HEAD: if it gets truncated on the SERP, shorten it so that the keyword becomes visible, perhaps move the keyword(s) to the beginning, or create a neat page title around the keyword(s). Make title changes very carefully, because the title is an important ranking factor and your changes could result in a ranking drop. Some CMSs silently change the URL when the title text changes, and you certainly don't want to touch the URL at this point.
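A quick script can pre-check both the title and the description before you watch the SERPs. A minimal sketch using only Python's standard library; the 65/250 character limits are my assumptions, not official numbers, so tune them to what your SERPs actually show:

    # Pre-flight check for the two snippet sources you control most:
    # TITLE and the meta description.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class SnippetTags(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""
            self.description = ""

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "title":
                self.in_title = True
            elif tag == "meta" and (a.get("name") or "").lower() == "description":
                self.description = a.get("content") or ""

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.title += data

    def check(url, keyword):
        p = SnippetTags()
        p.feed(urlopen(url).read().decode("utf-8", "replace"))
        for label, text, limit in (("TITLE", p.title.strip(), 65),
                                   ("meta description", p.description.strip(), 250)):
            if keyword.lower() not in text.lower():
                print(f"{label}: keyword {keyword!r} missing")
            if len(text) > limit:
                print(f"{label}: {len(text)} chars, likely truncated after ~{limit}")

    check("http://www.example.com/red-widgets", "red widgets")  # placeholder URL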

Make sure that the page title appears on the page too. Putting the TITLE tag's content (or a slight variation) into an H1 element in BODY cannot hurt. If for some weird reason you don't use H-elements, then at least format it prominently (bold, a different color but not red, a bigger font size …).

If the page performs nicely with a couple of money terms and just has a crappy CTR for a particular keyword it ranks for, you can simply add a link pointing to a (new) page optimized for that keyword, with the keyword in the anchor text, preferably embedded in a readable sentence within the content (long enough to fill two lines under the linked title on the SERP), to improve the snippet. Adding a (prominent) link to a related topic should not impact rankings for other keywords too much, but the keywords submitted by searchers should appear in the snippet a short while after the next crawl. In such cases better don't change the title, at least not now. If the page gained its ranking solely from the anchor text of inbound links, putting the search term on the page can give it a nice boost.

Make sure you get an alert when Ms. Googlebot fetches the changed pages, and check out the SERPs and Google’s click stats a few days later. After a while you’ll get a pretty good idea of how Google creates snippets, and which snippets perform best on the SERPs. Repeat until success.
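Lacking a fancy log monitor, even a dumb tail-the-log loop does the alerting job. A sketch; the log path, the watched paths, and the naive substring match are assumptions, and you'd verify genuine Googlebot hits via reverse DNS if it matters:

    # Sketch: shout when Ms. Googlebot fetches one of the changed pages.
    import time

    WATCHED = {"/red-widgets.html", "/green-widgets.html"}  # your changed pages

    with open("access.log", encoding="utf-8", errors="replace") as log:
        log.seek(0, 2)  # jump to the end of the file, like tail -f
        while True:
            line = log.readline()
            if not line:
                time.sleep(5)
                continue
            if "Googlebot" in line and any(p in line for p in WATCHED):
                print("Ms. Googlebot was here:", line.strip())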

Related posts:
Google Quality Scores for Natural Search Optimization by Chris Silver Smith
Improve SERP-snippets by providing a good meta description tag by Raj Krishnan from Google’s Snippets Team




Killing Trolls in Google Groups

Are you tired of trolls and dickheads in Google Groups? Then switch to Firefox, install Greasemonkey and Damian's Google Groups KillFile. Go to your favorite groups and click "ignore user" to populate your ignore list. Without the trolling, your Usenet or Google Groups will look way friendlier.

You should read (and subscribe to) Damian's troll-killer post and its comments though, just in case Google changes the layout again or there's a bugfix. For example, with the troll filter activated, I don't see threads where a troll posted the last reply.

Remember: the canonical treatment of trolls is ignoring their posts, regardless of a particular post's insanity or the lack thereof. Not even insults, slander or name calling justify a reply to the troll. If necessary, forward the post to your lawyers, but don't enter a discussion with a troll, because that feeds their ego and encourages them to produce even more crap.

Hat tip to ThoughtOn

Related post: What to do with [troll’s handle]? by JLH




Playing with Google Translate (still beta)

I use translation tools quite often, so after reading Google’s Udi Manber - Search is a Hard Problem I just had to look at Google Translate again.

Under Text and Web it offers the somewhat rough translations available from the toolbar and links on SERPs. Usually I use that feature only with languages I don't speak, to get an idea of the rough meaning, because the offered translation is, well, rough. Here's an example. Translating "Don't make a fool of yourself" to German gives "einen Dummkopf nicht von selbst bilden". That means roughly "not forming a dullard of its own volition", but Google's reverse translation "a fool automatically do not educate" is even funnier.

Having at least rudimentary skills in a foreign language really helps when reading Google's automated translations. Quite often the translation is just not understandable without knowledge of the other language's grammar and idiosyncrasies. For example, my French is a bit rusty, so translating Le Monde to English leads to understandable text I can read way faster than the original. Italian to English is another story (my Italian skills should be considered "just enough for tourists"); for example, the front page of la Repubblica is, partly due to the summarizing language, hard to read in Google's English translation. Translated articles, on the other hand, are rather understandable.

By the way, the quality of translated news, technical writing or academic papers is much better than rough translations of everyday language, so better don't try to get any sense out of translated forum posts and stuff like that. Probably that's caused by the lack of trusted translations of such sources, which are necessary to train Google's algos.

Google Translate fails miserably sometimes. Although Arabic-English is labelled "BETA", it cannot translate even a single word from the most important source of news in Arabic, Al Jazeera - it just delivers a copy of the Arabic home page. Ok, that's a joke: all the Arabic text is provided on images. Translations of Al Jazeera's articles are terrific, way better than any automated translation from or to European languages I've ever seen. Comparing Google's translation of the Beijing Review to the English edition makes no sense due to sync issues, but the automated translation looks great; even the headlines make sense (semantically, not in their meanings - but what do I know, I'm not a Stalinistic commie killing and jailing dissidents for practicing human rights like the freedom of speech).

On the second tab Google translates search results; that's a neat way to research resources in other languages. You can submit a question in English, Google translates it on the fly to the other language, queries the search index with the translated search term, and delivers a bilingual search result page, English in the left column and the foreign language on the right side. I don't like that the page titles are truncated, and the snippets are way too short to make sense in most cases; still, it is darn useful. Let's test how Google translates her own pamphlets:

A search in English for [Google Webmaster guidelines] on German pages delivers understandable results. The second search result, "Der Ankauf von Links mit der Absicht, die Rangfolge einer Website zu verbessern, ist ein Verstoß gegen die Richtlinien für Webmaster von Google", gets translated to "The purchase from left with the intention of improving the order of rank of a Website is an offence against the guidelines for Web master of Google". Here it comes straight from the horse's mouth: Google's very own Webmasters must not sell links in the left sidebar of pages on Google.com. I'm not a Webmaster at Google, so in my book that means I can remove the crappy nofollow from tons of links as long as I move them to the left sidebar. (Seriously, the German noun for "link" is "Verbindung" or "Verweis", which both have tons of other meanings besides "hyperlink", so everybody in Germany uses "Link" and the plural "Links" - but "links" also means "left", and Google's translator ignores capitalization as well as anglicisms. The German translation of "Google's guidelines for Webmasters" as "Richtlinien für Webmaster von Google" is quite hapless by the way. It should read "Googles Richtlinien für Webmaster", because "Webmaster von Google" really means "Webmasters of Google", which is (in German) a synonym for "Google's [own] Webmasters".)

An extended search like [Google quality guidelines hidden links] for all sorts of terms from the guidelines like "hidden text", "cloaking", "doorway page" (BTW, why is the page type described as "doorway page" in reality a "hallway page", why doesn't Google explain the characteristics of deceitful doorway pages, and why doesn't Google explain that most (not machine generated) doorway pages are perfectly legit landing pages?), "sneaky redirects" and many more did not deliver a single page from google.de on the first SERP. No wonder German Internet marketers are the worst spammers on earth when Google doesn't tell them which particular techniques to avoid. Hint for Riona: to improve findability, consider adding these terms untranslated to all foreign-language versions of the help system. Hint for Matt: please admit that not each and every doorway page violates Google's guidelines. A well done and compelling doorway page just highlights a particular topic, hence from a Webmaster's as well as from a search engine's perspective that's perfectly legit "relevance bait" (I can resist calling it spider fodder, because it really ain't that in particular).

Ok, back to the topic.

I really fell in love with the recently added third tab, Dictionary. This tool beats the pants off Babylon and other word translators when it comes to lookups of single words, but it lacks the reverse functionality provided by those tools, that is, the translation of phrases. And it's Web based, so (for example) a middle mouse click on a word or phrase in any application other than my Web browser with Google's toolbar enabled doesn't show the translation. Actually, the quality of one-word lookups is terrific, and when you know how to search you get phrases too. Just play with it and get familiar; once you have at least a rudimentary understanding of the other language you'll often get the desired results.

Well, not always. Submitting "schlagen" ("beat") in German-English mode while hunting for a phrase like "beats the pants off something" leads to "outmatch" ("übertreffen, (aus dem Felde) schlagen") as the best match. In reverse (English-German), "outmatch" is translated to "übertreffen, (aus dem Felde) schlagen" without alternative or supplemental results, while "beat" has tons of German results, unfortunately without "beats the pants off something".

I admit that's unfair; according to the specs the dictionary thingy is not able to translate phrases (yet). The one-word translations are awesome, I just couldn't resist maxing it out with my attempts to translate phrases. Hopefully Google renames "Dictionary" to "Words" and adds a "Phrases" tab soon.




Referrer spoofing with PrefBar 3.4.1

Testing browser optimization, search engine friendly user-agent cloaking, referrer-based navigation or dynamic landing pages with scripts, or by changing the user agent name in the browser's settings, is no fun.

I love PrefBar, a neat Firefox plug-in which provides me with a pretty useful customizable toolbar. With PrefBar you can switch JavaScript, Flash, colors, images, cookies… on and off with one mouse click, and you can enter a list of user agent names to choose the user agent while browsing.

So I asked Manuel Reimer to create a referrer spoofer widget, and he kindly delivered it with PrefBar 3.4.1. Thank you Manuel!

To activate referrer spoofing in your PrefBar toolbar, install or update PrefBar to 3.4.1, then download the Referer Spoof Menulist 1.0, click "Customize" on the toolbar, and import the file. Then click "Edit" to add all the referrer URLs you need for testing purposes, and enjoy. It works great.
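For one-off checks outside the browser, a few lines of script do the same job. Here's a sketch, with placeholder URL and header values, that fetches a page under different user agents and referrers and flags differing responses:

    # Sketch: detect user-agent or referrer based content variations
    # by diffing response bodies. URL and header values are placeholders.
    import hashlib
    import requests

    URL = "http://www.example.com/landing-page"  # placeholder
    variants = [
        {"User-Agent": "Mozilla/5.0 (Windows NT 5.1)"},
        {"User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"},
        {"User-Agent": "Mozilla/5.0 (Windows NT 5.1)",
         "Referer": "http://www.google.com/search?q=widgets"},
    ]

    digests = {}
    for headers in variants:
        body = requests.get(URL, headers=headers, timeout=10).text
        digests[tuple(sorted(headers.items()))] = hashlib.md5(body.encode()).hexdigest()

    if len(set(digests.values())) > 1:
        print("responses differ - the page varies by user agent / referrer:")
        for headers, digest in digests.items():
            print(" ", digest[:8], dict(headers))
    else:
        print("identical responses for all variants")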




Erol ships patch fixing deindexing of online stores by Google

If you run an Erol-driven store and suffer from a loss of Google traffic, or you just want to make sure that your store's content presentation is more compliant with Google's guidelines, then patch your Erol software (*ix hosts / Apache only). For a history of this patch and more information click here.

Tip: save your /.htaccess file before you publish the store. If it contains statements not related to Erol, then add the code shipped with this patch manually to your local copy of .htaccess and to the .htaccess file in the Web host's root directory. If you can't see the (new) .htaccess file in your FTP client, add "-a" to the external file mask. If your FTP client transfers .htaccess in binary mode, add ".htaccess" to the list of ASCII files in the settings. If you upload .htaccess in binary mode, it may not exactly do what you expect it to accomplish.
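If you script your uploads, the same pitfall applies. A minimal Python sketch (host and credentials are placeholders) that forces an ASCII-mode transfer via ftplib:

    # Sketch: upload .htaccess in ASCII mode. storlines() issues a
    # TYPE A (ASCII) transfer, so line endings get converted for the
    # server - exactly what a .htaccess file needs.
    from ftplib import FTP

    ftp = FTP("ftp.example.com")       # placeholder host
    ftp.login("user", "password")      # placeholder credentials
    with open(".htaccess", "rb") as fp:
        ftp.storlines("STOR .htaccess", fp)
    ftp.quit()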

I don't know whether or when Erol will ship a patch for IIS. (As a side note, I can't imagine a single reason why hosting an online store under Windows could make sense. OTOH there are many reasons to avoid hosting anything keen on search engine traffic on a Windows box.)




The Vanessa Fox Memorial

I was quite shocked when Vanessa told me that she's leaving Google to join Zillow. That's a big loss for Google, and a big loss for the Webmaster/SEO community relying on Google. And it's a great enrichment for Zillow; I'm dead sure they can't really imagine how lucky they are. And they'd better treat her very well, or Vanessa's admirers will launch a firestorm which Rommel, Guderian, et al. couldn't have dreamed of when they invented the blitz. Yep, at first sight that was sad news.

But it's good news for Vanessa: she's excited about "an all-new opportunity to work on the unique challenges of the vertical and local search space at Zillow". I wish her all the best at Zillow and I hope that this challenge will not morph her into an always-too-tired caffeine junkie (again) ;)

Back in 2005/2006, when I interviewed Vanessa about her pet project Sitemaps, her Blogger profile said "technical writer in Kirkland" (from my POV an understatement); now she leaves Google as a prominent product manager, well known and loved by colleagues, SEOs and Webmasters around the globe. She created the Vanessa Fox Memorial aka "Google Webmaster Central" and handed her baby over to a great team she gathered and trained, to make sure that Google's opening to Webmasters evolves further. Regardless of her unclimbable Mount Email, Vanessa was always there to help, fix and clarify things, and open to suggestions even on minor details. She's a gem, an admirable geek, a tough and lovable ideal of a Googler, and now a Zillower. Again, all the best, keep in touch, and

Thank You Vanessa!




Which Sebastian Foss is a spammer?

Obviously pissed by my post Fraud from the desk of Sebastian Foss, Sebastian Foss sent this email to Smart-IT-Consulting.com:

Remove your insults from your blog about my products and sites… as you may know promote-biz.net is not registered to my name or my company.. just look it up in some whois service. This is some spammer who took my software and is now selling it on his spammer websites. Im only selling my programs under their original .com domains and you did not receive any email from me since im only using doube-optin lists.

You may not know it - but insulting persons and spreading lies is under penalty.

Sebastian Foss
Sebastian Foss e-trinity Marketing Inc.
sebastian@etrinity-mail.com

Well, that's my personal blog, and I have a professional opinion about the software Sebastian Foss sells; more on that later. It's public knowledge that spammers register domains under several entities to obfuscate their activities. I'm not a fed, and I'm not willing to track down each and every multiple or virtual personality of a spammer, so I admit that there's at least a slight possibility that the Sebastian Foss spamming my inbox from promote-biz.net is not the Sebastian Foss who wrote and sells the software promoted by the email spammer Sebastian Foss. Since I still receive email spam from the desk of Sebastian Foss at promote-biz.net, I think there's no doubt that this Sebastian Foss is a spammer. Well, Sebastian Foss himself calls him a spammer, and so do I. Confused? So am I. I'll update my other post to reflect that.

Now that we've covered the legal stuff, let's look at the software from the desk of Sebastian Foss.

  • Blog Blaster claims to submit “ads” to 2,000,000 sites. Translation: Blog Blaster automatically submits promotional comments to 2 million blogs. The common description of this kind of “advertising” is comment spam.
    Sebastian Foss tells us that “Blog Blaster will automatically create thousands of links to your website - which will rank your website in a top 10 position!”. The common description of this link building technique is link spam.
    The sales pitch signed by Sebastian Foss explains: "I used it [Blog Blaster] to promote my other website called ezinebroadcast.com and Blog Blaster produced thousands of links to ezinebroadcast.com - resulting in a #1 position in Google for the term 'ezine advertising service'". So I understand that Sebastian Foss admits that he is a comment spammer and a link spammer.
    I’d like to see the written permissions of 2,000,000 bloggers allowing Sebastian Foss and his customers to spam their blogs: “Advertising using Blog Blaster is 100% SPAM FREE advertising! You will never be accused of spamming. Your ads are submitted to blogs whose owners have agreed to receive your ads.” Laughable, and obviously a lie. Did Sebastian Foss remember that “spreading lies is under penalty”? Take care, Sebastian Foss!
  • Feed Blaster, with a very similar sales pitch, aims to establish the term feed spam. Also, it seems that FeedBlaster™ is a registered trademark of DigitalGrit Inc. And I don't think that Microsoft, Sun and IBM are happy to spot their logos on Sebastian Foss' site e-trinity Internetmarketing GmbH.
  • The Money License System aka Google Cash Machine seems to slip through a legal loophole. Maybe it's not explicitly illegal to sell software built to trick Google AdWords respectively AdSense or ClickBank, but using it will result in account terminations and AFAIK legal actions too.
  • Instant Booster claims to spam search engines, and it does, according to many reports. The common term applied to those techniques is web spam.

All these domains (and there are countless more sites selling similar scams from the desk of Sebastian Foss) are registered to Sebastian Foss or his companies e-trinity Internetmarketing GmbH and e-trinity Marketing Inc.

He’s in the business of newsgroup spam, search engine spam, comment spam … probably there’s no target left out. Searching for Sebastian Foss scam and similar search terms leads to tons of rip-off reports.

He's even too lazy to rephrase his sales pitches: click a few of the links provided above, then search for the quoted phrases you saw in every sales pitch to get the big picture. All that may be legal in Germany - I couldn't care less - but it's not legit. Creating and selling software for the sole purpose of spamming makes the software vendor a spammer. And he's proud of it: he openly admits that he uses his software to spam blogs, search engines, newsgroups and whatever. He may make use of affiliates and virtual entities who send out the email spam, and perhaps he got screwed by a Chinese copycat selling his software via email spam, but is that relevant when the product itself is spammy?

What do you think, is every instance of Sebastian Foss a spammer? Feel free to vote in the comments.

Update 08/01/2007 Here is the next email from the desk of Sebastian Foss:

Hi,
thanks for the changes on your blog entry - however like i mentioned if you look up the domains which were advertised in the spam mails you will notice that they are not registered to me or my company. You can also see that visiting the sites you will see some guy took my products and is selling them for a lower price on his own websites where he is also copying all of my graphic files. The german police told me that they are receiving spam from your forms and that it goes directly to their trash… however please remove your entries about me from your blog - There is no sense in me selling my own products for a lower price on some cheap, stolen websites - if that would make sense then why do i have my own .com domains for my products ? I just want to make clear that im not sending out any spam mails - please get back to me.

Thanks,
Sebastian

Sebastian Foss
e-trinity Internetmarketing GmbH
sebastian@etrinity-mail.com

It deserves just a short reply:

It makes perfect sense to have an offshore clone in China selling the same outdated and pretty questionable stuff a little cheaper. This clone can do that because, first, there are next to no costs like taxes and so on, and second, he does it by spamming my inbox on a daily basis, hence he probably sells a lot of the 'borrowed' stuff. Whether or not the multiple Sebastian Fosses are the same natural person is not my problem. I claim nothing, but leave it up to you, dear readers: speculation, common sense, and probability calculation.




Another way to implement a site search facility

Providing kick-ass navigation and product search is the key to success for e-commerce sites. Conversion rates highly depend on user friendly UIs which enable the shopper to find the desired product with a context sensitive search combined with a few drill-down clicks on navigational links. Unfortunately, the built-in search as well as the navigation and site structure of most shopping carts simply suck. Every online store is different, hence findability must be customizable and very flexible.

I've seen online shops crawling their own product pages with a 3rd party search engine script because the shopping cart's search functionality was totally and utterly useless. Others put fantastic effort into self-made search facilities which perfectly implement real-life relations beyond the limitations of the e-commerce software's data model, but need code tweaks for each and every featured product, special, or virtual shop assembling a particular niche from several product lines. Bugger.

Today I stumbled upon a very interesting approach which could become the holy grail for store owners suffering from crappy software. Progress invited me to discuss a product they've recently acquired - EasyAsk - from a search geek's perspective. Long story short, I was impressed. Without digging deep into the technology or reviewing implementations for weaknesses, I think the idea behind that tool is promising.

Unfortunately, the EasyAsk Web site doesn't provide solid technical and architectural information (I admit that I may have missed the tidbits within the promotional chatter), hence I'll try to explain it from what I've gathered today. Progress EasyAsk is a natural language interface connecting users to data sources. Users are shoppers, and staff. Data sources are (relational) databases, or data access layers (that is, a logical tier providing a standardized interface to different data pools like all sorts of databases, (Web) services, an enterprise service bus, flat files, XML documents and whatever).

The shopper can submit natural language queries like "yellow XS tops under 30 bucks". The SRP is a page listing tops and similar garments under 30.00$, size XS, illustrated with thumbnails of yellow tops and bustiers, linked to the product pages. If yellow tops in XS are sold out, EasyAsk recommends beige tops instead of delivering a sorry-page. When a search query is submitted from a page listing suits, a search for "black leather belts" lists black leather belts for men. If the result set is too large to fit one page, EasyAsk delivers drill-down lists of tags, categories and synonyms until the result set is viewable on one page. The context (category/tag tree) changes with each click and can be visualized, for example, as a bread crumb navigation link.

Technically speaking, EasyAsk does not deal with the content presentation layer itself. It returns XML which can be used to create a completely new page with a POST/GET request, or it gets invoked as an AJAX request whose response just alters DOM objects to visualize the search results (way faster, but not exactly search engine friendly - not a big deal, because SERPs shouldn't be crawlable at all). Performance is not an issue from what I've seen; EasyAsk caches everything so that the server doesn't need to bother the hard disk. All points of failure (WRT performance issues) belong to the implementation, thus developing a well-thought-out software architecture is a must-have.
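Since the EasyAsk site doesn't document the response schema, here's a purely hypothetical sketch of the integration pattern described above. The endpoint, the parameter name, and the XML element names are all invented for illustration; this is not EasyAsk's real API:

    # Purely hypothetical sketch of the pattern, NOT EasyAsk's real
    # API: send the shopper's natural language query to an
    # XML-returning search tier, parse the hits, and hand them to
    # whatever renders the page.
    import xml.etree.ElementTree as ET
    from urllib.parse import urlencode
    from urllib.request import urlopen

    SEARCH_ENDPOINT = "http://search.internal.example/query"  # invented

    def search(question):
        xml = urlopen(SEARCH_ENDPOINT + "?" + urlencode({"q": question})).read()
        root = ET.fromstring(xml)
        # assumed schema: <results><product url=".." name=".." price=".."/></results>
        return [(p.get("name"), p.get("price"), p.get("url"))
                for p in root.iter("product")]

    for name, price, url in search("yellow XS tops under 30 bucks"):
        print(f"{name} ({price}) -> {url}")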

Well, that's neat, but where's the USP? EasyAsk comes with a natural language (search) driven admin interface too. That means product managers can define and retrieve everything (attributes, synonyms, relations, specials, price ranges, groupings …) using natural language. "Gimme gross sales of leather belts for men II/2007 compared to 2006" delivers a statistic, and "top is a synonym for bustier and the other way round" creates a relation. The admin interface runs in the Web browser, definitions can be submitted via forms, and all admin functions come with previews. Really neat. That reduces the IT department's workload WRT ad-hoc queries as well as lots of structural change requests, and saves maintenance costs (Web design / Web development).

I've spotted a few weak points, though. For example, in the current version the user has to type in SKUs because there's no selection box. Meta data are stored in flat files, but that's going to change too. There's no real word stemming; EasyAsk handles singular/plural correctly and interprets "bigger" as "big" or "xx-large" politically correctly as "plus", but typos must be collected from the "searches without results" report and defined as synonyms. The visualization of concurrent or sequentially applied business rules is just rudimentary on preview pages in the admin interface, so currently it's hard to track down why particular products get downranked or highlighted when more than one rule applies. Progress told me that they'll make use of 3rd party tools as well as in-house solutions to solve these issues in the near future - the integration of EasyAsk into the Progress landscape has just begun.

Defining the business language - the terms consumers actually use - as well as business rules is painless. EasyAsk has built-in mappings like color codes to common color names and vice versa, understands terms like "best selling" and "overstock", and these definitions are easy to extend to match actual data structures and niche specific everyday language.

Setting up the product needs consultancy (as a consultant I love that!). To get EasyAsk running, it must understand the structure of the customer's data sources, respectively the methods provided to fetch data from various structured as well as unstructured sources. Once that's configured, EasyAsk pulls (database) updates on schedule (daily, hourly, minutely or whatever). It caches all information needed to fulfill search requests, but goes back to the data source to fetch real-time data when the search query requires knowledge of not (yet) cached details. In the beginning such events must be dealt with, but after a (short) while EasyAsk should run smoothly without requiring much technical intervention (as a consultant I hate that, but the client's IT department will love it).

Full disclosure: Progress didn't pay me for this post. For attending the workshop I got two books ("Enterprise Service Bus" by David A. Chappell and "Getting Started with the SID" by John P. Reilly) and a free meal; travel expenses were not refunded. I have not (yet) tested the software discussed myself, so perhaps my statements (conclusions) are not accurate.




Blogger abuses rel-nofollow due to ignorance

I had planned a full upgrade of this blog to the newest Blogger version this weekend. The one and only reason for the upgrade was the idea that I could perhaps disable the auto-nofollow functionality in the comments. Well, what I found was a way to dofollow the comment author's link by editing the <dl id='comments-block'> block, but I couldn't figure out how to disable the auto-nofollow on embedded links.

Considering the hassle of converting all the template hacks into the new format, and the risk of most probably losing the ability to edit code my way, I decided to stick with the old template. It just makes no sense for me to dofollow the author's link when a comment author's links within the content get nofollow'ed automatically. Andy Beard and others will hate me now, so let me explain why I don't move this blog to my own domain using less insane software like WordPress.

  • I own, or author on, various WordPress blogs. Google's time-to-index for posts and updates from this blogspot thingy is 2-3 hours (Web search, not blog search). My WordPress blogs, even with higher PageRank, suffer from a way longer time-to-index.
  • I can’t afford the time to convert and redirect 150 posts to another blog.
  • I hope that Google/Blogger can implement reasonable change requests (most probably that’s just wishful thinking).

That said, WordPress is way better software than Blogger. I'll have to move this blog if Blogger is not able to fulfill at least my basic needs. I'll explain below why I think that Blogger lacks any understanding of the rel-nofollow semantics. In fact, they throw nofollow crap on everything they get a hand on. It seems to me that they won't stop jeopardizing the integrity of the Blogosphere (at least where they control the linkage) until they get bashed really hard by a Googler who understands what rel-nofollow is all about. I nominate Matt Cutts, who invented and evolved it, and who does not tolerate BS.

So here is my wishlist. I want (regardless of the template type!)

  • A checkbox “apply rel=nofollow to comment author links”
  • A checkbox “apply rel=nofollow to links within comment text”
  • To edit comments, for example to nofollow links myself, or to remove offensive language
  • A checkbox “apply rel=nofollow to links to label/search pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to label/search pages”
  • A checkbox “apply rel=nofollow to links to archive pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to archive pages”
  • A checkbox “apply rel=nofollow to backlink listings”

As for the comments functionality, I'd understand if these options got disabled when comment moderation is set to off.

And here are the nofollow-bullshit examples.

  • When comment moderation and captchas are activated, why are comment author links as well as links within the comments nofollow'ed? Does Blogger think their bloggers are minor retards? I mean, when I approve a comment, I do vouch for it. But wait! I can't edit the comment, so a low-life link might slip through. Ok, then let me edit the comments.
  • When I've submitted a comment, the link back to the post is nofollow'ed. This page belongs to the blog, so why the fudge does Blogger nofollow navigational links? And if it makes sense for some weird reason not understandable by a simple webmaster like me, why is the link to the blog's main page, as well as the link to the post one line below, not nofollow'ed? Linking to the same URL with and without rel-nofollow on the same page deserves a bullshit award.
  • On my dashboard, Blogger features a few blogs as "Blogs Of Note", all links nofollow'ed. These are blogs recommended by the Blogger crew. That means they have reviewed them, and the links are clearly editorial content. They're proud of it: "we've done a pretty good job of publishing a new one each day". Blogger's very own Blogs Of Note blog does not nofollow the links, and that's correct. So why the heck are these recommended blogs nofollow'ed on the dashboard?

  • Blogger inserted robots meta tags “nofollow,noindex” on each and every blog hosted outside the controlled blogspot.com domain earlier this year.
  • Blogger inserted robots meta tags “nofollow,noindex” on Google blogs a few days ago.
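By the way, auditing which links a template nofollow's behind your back is easy to script. A minimal sketch using only Python's standard library; the URL is a placeholder:

    # Sketch: list every link on a page that carries rel=nofollow -
    # handy for auditing what a blog template does behind your back.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class NofollowAudit(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            a = dict(attrs)
            if "nofollow" in (a.get("rel") or "").lower():
                print("nofollow'ed:", a.get("href") or "(no href)")

    page = urlopen("http://yourblog.blogspot.com/").read()  # placeholder URL
    NofollowAudit().feed(page.decode("utf-8", "replace"))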

If Blogger's recommendation "Check google.com. (Also good for searching.)" is an honest one, why don't they invest a few minutes to educate themselves on rel-nofollow? I mean, it's a Google-block/avoid-indexing/ranking-thingy they use to prevent Google.com users from finding valuable content hosted on their own domains. And they annoy me. And they insult their users. They shouldn't do that. That's not smart. That's not Google-ish.



