Sphinn rocks

Thanks to Danny’s crew we’ve got a promising search geek community site. Since I’ve recently started to deal with invites, here is the top secret link where you get your free Sphinn invite. Click it now and join today, as Gorbachev said ‘those who are late will be punished by life itself’ ;)

Previous experiments revealed that my pamphlets aren’t diggworthy, despite the presence of OL/UL lists. Because I mention search and stuff like that every once in a while, I decided to submit a horror story to Sphinn to test the waters over there.

Adding Sphinn-it! widgets to my posts will hopefully help promote Sphinn, but with Blogger that turned into kind of a nightmare. To save you from jumping through endless trial-and-error hoops, here is how it works:

Classic templates:

Search for $BlogItemBody$ and below the </div> put

<script type='text/javascript'>submit_url='<$BlogItemPermalinkUrl$>';</script>
<script src='http://sphinn.com/evb/button.php' type='text/javascript'/></script>

(Blogger freaks out when you omit the non-standard </script> after the self-closing second tag, so stick with the intentional syntax error.)

Newish templates:

Check “Expand Widget Templates”

Search for data:post.body/ and below the </p> put

<b:if cond='data:post.url'>
<p><script type='text/javascript'>submit_url='<data:post.url/>';</script>
<script src='http://sphinn.com/evb/button.php' type='text/javascript'/></p>
</b:if>

(After saving the changes Blogger replaces some single quotes with HTML entities, but it works anyway. Most probably one could do this in a more elegant way, but once I saw the badges pointing to the correct URL, both in the posts and on the main page, I gave up.)

Have fun sphinning my posts!




Google helps those who help themselves

And if that’s not enough to survive on Google’s SERPs, try Google’s Webmaster Forum. There you can study Adam Lasnik’s FAQ, which covers even questions the Webmaster Help Center doesn’t answer comprehensively (yet), and Googlers from Google’s Search Quality, Webspam, and Webmaster Central teams hang out there too. Google dumps all sorts of questioners into the forum, where a crowd of hardcore volunteers (aka regulars, as Google calls them) invests a lot of time helping Webmasters and site owners facing problems with the almighty Google.

Despite the sporadic posts by Googlers, the backbone of Google’s Webmaster support channel is this crew of regulars from all around the globe. Google monitors the forum for input and trends, and intervenes when the periodic scandal escalates. Apropos scandals: although the list of top posters mentions a few of the regulars, bear in mind that trolls come with a disgustingly high posting cadence. Fortunately, the signal currently drowns out the noise (again), and I very much appreciate that the Googlers participate more and more.

Some of the regulars like seo101 don’t reveal their URLs and stay anonymous. So here is an incomplete list of folks giving good advice:

If I’ve missed anyone, please drop me a line (I stole the list above from JLH and Red Cardinal, so it’s all their fault!).

So if you’re a Webmaster or site owner, don’t hesitate to post your Google related question (but read the FAQ before posting, and search for your topic first); chances are one of these regulars or even a Googler will offer assistance. If, on the other hand, you’re questionless but carry a swag of valuable answers, join the group and share your knowledge. Finally, if you’re a Googler, give the sites linked above a boost on the SERPs ;)

Micro-meme started by John Honeck, supported by Richard Hearne, Bert Vierstra




Now Powncing

John, thanks for the invite! Inspired by all the twits about Pownce I submitted my email addy too. What a useless procedure: from inside there’s no list of submitted email addresses to pick friends from. Or I’m too blind to find that page.

Probably the best way to get rid of the 6 invites is to sell them on eBay. Perhaps Pownce releases 6 new invites then and I get rich quick. Wait … I’ve a better idea. Submit your honest review of this blog in the comments and send me the email addy for your invite. If your piece is funny or honest or vilifying enough to make me laugh, I might invite you ;)

Ok, so what separates Pownce from Twitter and WS_FTP? Here are my first impressions.

Unfortunately, I will never see the ads. Hectic clicking on all the links signed me up as a pro member by accident, and now Pownce blemishes my cute red crab with a “pro” label. I guess I got what I paid for. Paid? Yep, that’s the first difference: Pownce is not completely free. Spamming friends in 100 MB portions costs an annual fee of 20 bucks.

Next difference: there is no 140-character limit per message. Nice. And the “Send to” combo box is way more comfortable than the corresponding functionality at Twitter. I do miss Twitter’s “command line options” like “d username” and “@username” though. Sounds schizophrenic perhaps, but I’m just greedy.

I figured out how to follow someone without friending: just add somebody as a friend; you don’t even need to wait for the decline, because the pending request already makes you a fan of that user. You get their messages, but not the other way round. Twitter’s separate “add as friend” and “follow user” is clearer, I think.

Searching for the IM setup I learned there’s none. Pownce expert John said I’d have to try the desktop thingy, but it looks like AIM 1999, so I refuse the download and stick with the Web interface until Pownce interacts with GTalk. At least the personal Pownce page has a refresh link, but there’s no auto-refresh like Twitter’s.

There’s no way to bookmark messages or threads yet, and the link to a particular message is somewhat obfuscated. The “email a bug report” link is a good replacement for a “beta” label; I guess I’ll use it to tell Pownce that I hate their link manipulation applying the rel-nofollow crap. I’ll play with the other stuff later on, the daddy-cab is due at the kindergarten. Hopefully, when I return, there will be a Pownce badge available for this blog. I’ve plenty of white space left on my sidebar.


Back, still no badge, but I realized that I forgot to mention the FTP similarities. There’s no need to complete this post, though, since I found Tamar’s brilliant Twitter vs. Pownce article.

Update: How to post to Twitter and Pownce at the same time (a Twitterfeed workaround; I didn’t test this configuration)




LZZR Linking™

In why it is a good thing to link out loud LZZR explains a nicely designed method to accelerate the power of inbound links. Unfortunately this technique involves Yahoo! Pipes, which is evil. Certainly that’s a nice tool to compose feeds, but Yahoo! Pipes automatically inserts the evil nofollow crap, so using Pipes’ feed output to amplify links fails thanks to the auto-nofollow. I’m sure LZZR can replace this component with ease, if that’s not done already.




Why eBay and Wikipedia rule Google’s SERPs

It’s hard to find an obscure search query like [artificial link] which doesn’t deliver eBay spam or a Wikipedia stub within the first few results at Google. Although both Wikipedia and eBay are large sites, the Web is huge, and two such different sites shouldn’t dominate the SERPs for that many topics. Hence it’s safe to say that many nicely ranked search results at Googledia, pulled from eBaydia, are plain artificially positioned non-results.

Curious why my beloved search engine fails so badly, I borrowed a Google-savvy spy from GHN and sent him to Mountain View to uncover the eBaydia ranking secrets. He came back with lots of pay dirt scraped from DVDs in the safe of building 43. Before I sold Google’s ranking algo to Ask (the price Yahoo! and MSN offered was laughable), I figured out from comments in the source code why Googledia prefers eBaydia. Here is the unbelievable story of a miserable failure:

When Yahoo! launched Mindset, Larry Page and Sergey Brin threw chairs out of anger because Google wasn’t able to accomplish such a simple task. The engineers, eager to fulfill their founders’ wishes asap, tried to integrate Mindset functionality without changing Google’s fascinatingly simple search interface (that means without a shopping/research slider). Personalized search still lived in the labs, but provided a somewhat suitable API (mega beta): scanSearchersBrainForContext([search query]). Not knowing that this function of personalized search polls a nano-bugging device (pre alpha) which Google had neither released nor implanted into any searcher’s brain at this time, they made use of that piece of experimental code to evaluate the search query’s context. Since the method always returned “false”, and they had to deliver results quickly, they made up some return values to test their algo tweaks:

/* debug - praying S&L don't throw more chairs */
if (scanSearchersBrainForContext($searchQuery) === false) {
    $contextShopping = "%ebay%";
    $contextResearch = "%wikipedia%";
    $context = both($contextShopping, $contextResearch);
}
else {
    /* [pretty complex algo] */
}

This worked fine and found its way into the ranking algo under time pressure. The result is that for each and every search query where a page from eBay and/or Wikipedia is in the raw result set, those pages get a ranking boost. Sergey was happy because eBay is generally listed on page #1, and Larry likes the Wikipedia results on the first SERP. And tell me: why the heck should the engineers comment out these made-up return values? No engineer on this planet likes flying chairs, especially not in his office.

PS: Some SEOs push Wikipedia stubs too.




Who is responsible for the paid link mess?

Look at this graph showing the number of [buy link] searches since 2004:

Interestingly this search term starts out in September or October 2004, and shows a quite stable trend until the recent paid links debate started.

Who or what caused SEOs to massively buy links since 2004?

  • The Playboy interview with Google cofounders Larry Page and Sergey Brin just before Google was about to go public?
  • Google’s IPO?
  • Rumors that Google ran out of index space and therefore might restrict the number of doorway pages in the search index?
  • Nick Wilson preparing the launch of Threadwatch?
  • AdWords and Overture no longer running gambling ads?
  • The Internet Advancement scandal?
  • Google’s shortage of beer at the SES Google dance?
  • A couple of UK-based SEOs inventing bought organic rankings?

Seriously, buying links for rankings was an established practice way before 2004. If you know the answer, or if you’ve a somewhat plausible theory, leave it in the comments. I’m really curious. Thanks.




Letting friends know you read their stuff

With various social tools and gadgets there are tons of opportunities to publicly or privately show that you follow your friends. I can digg my friends’ articles or bookmark them at del.icio.us, I can link to their posts via sharing in Google Reader, or, after reading their posts in my preferred feed reader, I can click through to the post itself just to push my red crab image to the top of their MBL and BUMPzee widgets.

All that comes with common hassles. I want to use these social gadgets and services without jumping through unintended hoops; that is, I consider all the methods mentioned above for telling friends that I still love them a diversion of those services from their intended use. Also, not every friend of mine uses all these geeky tools, so I’d need to digg posts of A., to del.icio.us articles by B., to share posts of C., and to visit the blogs of D., E. and F., just to show that I’ve read their stuff in my feed reader.

I can’t do that, at least not in a reliable manner, especially not when I’m swamped and just trying to catch up after 12 or more hours of dealing with legacy applications or other painful tasks like meetings with wannabe-geeks (inexperienced controllers or chiefs of whichever-useless-service-center) or anti-geeks (know-it-all but utterly-clueless and dangerous-to-the-company’s-safety IT managers). Doh!

So when I’m not able to send my friends a twitter-great-job message or IM, and don’t have the time to link to their stuff, should I feel bad? Probably. Penalties are well deserved. Actually, the consequence is that nice guys like Nick Wilson @Metaversed unfriend me (among other well-meaning followers) at Twitter because “I didn’t provide useful input for a while”, not knowing that I follow them with interest and read their posts and all that, but just can’t contribute at the moment, because their current field of interest doesn’t match my schedule, my today’s-hot-topic list, or my current centre of gravity, so to say. That does not mean I’m not interested in whatever they do and output; I just can’t process it ATM, but I know that’ll change at some point in the future. Hey, geeks usually hop from today’s hot thing to tomorrow’s hot thing, and flashbacks are rather natural, so why expect continuity?

Bugger, I wrote four paragraphs and didn’t get to the point the post’s title promises. And I’ve bored you dear readers with lots of title bait recently. Sorry, but I did enjoy it. Ok, here’s the message:

Everybody monitors referrer stats. Don’t say you don’t do it, because that’s first a lie and second a natural thing to do. The same applies to ego searches, by the way. So why don’t we make use of referrer spoofing to send a signal to our friends? It’s easy: just add the referrer-spoofing widget to your PrefBar, enter your URL, and surf on. Well, technically that’s referrer spamming, so if you wear a tinfoil hat use a non-indexable server like example.com. I’m currently surfing with the HTTP_REFERER “http://www.example.com/gofuckyourself” but I’m going to change that to this blog’s URL. Funny folks visiting my blog provide bogus referrers like “http://spamteam.google.com/” and “http://corp.google.com:8080/webspam/watchlist.py”, so why the fuck shouldn’t I use my actual address? This will tell my friends that I still love them. And real geeks shouldn’t expect unforged referrer stats anyway, since many nice guys surf without spamming the server logs with a referrer at all.
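
For the curious, this is all the widget does on the wire: every request carries a forged Referer header. A minimal sketch (Node.js, not the PrefBar widget itself; both URLs are placeholders):

// Sketch: request a friend's post with a forged Referer header, so your
// URL shows up in their referrer stats. URLs are placeholders.
const https = require('https');

https.get('https://example.com/a-friends-post', {
  headers: { 'Referer': 'http://www.example.com/my-blog' }
}, (res) => {
  console.log('status:', res.statusCode);
  res.resume(); // drain the response; we only care about the log entry we left
});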

What do you think?




Google assists SERP Click-Through Optimization

Big Mama Google, in her ongoing campaign to keep her search index clean, assists Webmasters with reports allowing click-through optimization of a dozen or so pages per Web site. Google launched these reports a while ago, but most Webmasters didn’t make the best use of them. Now that Vanessa has revealed her SEO secrets, let’s discuss why and how Google helps increase, improve, and target search engine traffic.

Google is not interested in gazillions of pages which rank high for (obscure) search terms but don’t get clicked from the SERPs. This clutter tortures the crawler and indexer, and it wastes expensive resources the query engine could use to deliver better results to the searchers.

Unfortunately, legions of clueless SEOs work hard to grow mount clutter by providing their clients with weekly ranking reports, which leads to even more pages that rank for (potentially money making) search phrases but appear on the SERPs with such crappy titles and snippets that not even a searcher with an IQ slightly below that of a slice of bread clicks them.

High rankings don’t pay the bills; converting traffic from SERPs, on the other hand, does. A nicely ranking page is an asset which in most cases just needs a few minor tweaks to attract search engine users (mount clutter contains machine generated cookie-cutter pages too, but that’s a completely different story).

For example, unattended pages gaining their SERP position from the anchor text of links pointing to them often have a crappy click-through rate (CTR). Say you’ve got a page about a particular aspect of green widgets which applies to widgets of all colors. For some reason folks preferring red widgets like your piece and link to it with “red widgets” as anchor text. The page will rank fine for [red widgets], but since “red widgets” is not mentioned on the page, this keyword phrase doesn’t appear in the SERP’s snippet, let alone in the linked title. Search engine users seeking information on red widgets won’t click the link about green widgets, although it might be the best matching search result.

So here is the click-through optimization process based on Google’s query stats (it doesn’t work with brand new or more or less unindexed sites, because the data provided in Google’s Webmaster Tools is available, reliable and reasonably accurate for somewhat established sites only):

Log in, choose a site and go to query stats. In an ideal world you’ll see two tables of rather identical keyword lists (all examples made up).

Top search queries        Avg. Pos. | Top SERP clicks         Avg. Pos.
1. web site design            5     | 1. web site design          4
2. google consulting          4     | 2. seo consulting           5
3. seo consulting             3     | 3. google consulting        2
4. web site structures        2     | 4. internal links           3
5. internal linkage           1     | 5. web site structure       3
6. crawlability               3     | 6. crawlability             5

The “Top search queries” table on the left shows positions of your pages for search phrases on the SERPs, regardless of whether those pages got clicked or not. The “Top SERP clicks” table on the right shows which search terms got clicked most, and where the landing pages were positioned on their SERPs. If good keywords appear in the left table but not in the right one, you’ve got CTR optimization potential.
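
Spotting those candidates is a simple set difference. A toy sketch in JavaScript, using the made-up keywords from the tables above (note how near-duplicates like “web site structures” vs. “web site structure” surface too, so a real script would want some fuzzy matching):

// Queries that rank (left table) but don't show up among the clicked
// queries (right table) are CTR optimization candidates. Data is made up.
const topSearchQueries = ['web site design', 'google consulting', 'seo consulting',
                          'web site structures', 'internal linkage', 'crawlability'];
const topSerpClicks    = ['web site design', 'seo consulting', 'google consulting',
                          'internal links', 'web site structure', 'crawlability'];

const clicked = new Set(topSerpClicks);
const candidates = topSearchQueries.filter(q => !clicked.has(q));
console.log(candidates); // -> [ 'web site structures', 'internal linkage' ]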

The “average top position” might differ from today’s SERPs, and it might differ for particular keywords even if those appear in the same line of both tables. Positioning fluctuation depends on a couple of factors. First, the position is recorded at the run time of each search query during the last 7 days, and within seven days a page can jump up and down on the SERPs. Second, positioning on, for example, UK SERPs can differ from US SERPs, so an average 3rd position may be an utterly useless value when a page ranks #1 in the UK and gets a fair amount of traffic from UK SERPs, but ranks #8 on US SERPs where searchers don’t click it because the page is about a local event near Loch Nowhere in the Highlands. Hence refine the reports by selecting your target markets under “location”, and if necessary “search type” too. Third, if these stats are generated from very few searches and even fewer click-throughs, they are totally and utterly useless for optimization purposes.

Let’s say you’ve got a site with a fair amount of Google search engine traffic. The next step is identifying the landing pages involved (you get only 20 search queries, so the report covers only a fraction of your site’s pages). Pull this data from your referrer stats, or extract SERP referrers from your logs to create a crosstab of search terms from Google’s reports per landing page. Although the click data come from Google’s SERPs, it might make sense to do this job with a broader scope, that is, including referrers from all major search engines.
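
Here’s a rough sketch of that extraction step (Node.js), assuming a combined-format access log named access.log; the regular expression and the “q” parameter handling are simplifications, not a spec:

// Build a crosstab { landingPage: { searchTerm: count } } from Google
// SERP referrers found in a combined-format access log.
const fs = require('fs');
const crosstab = {};

for (const line of fs.readFileSync('access.log', 'utf8').split('\n')) {
  // combined format: ... "GET /path HTTP/1.1" 200 1234 "referrer" "agent"
  const m = line.match(/"GET (\S+) [^"]*" \d+ \S+ "([^"]*)"/);
  if (!m) continue;
  const [, landingPage, referrer] = m;
  if (!/^https?:\/\/(www\.)?google\./.test(referrer)) continue; // Google SERPs only
  let q = null;
  try { q = new URL(referrer).searchParams.get('q'); } catch (e) { continue; }
  if (!q) continue;
  crosstab[landingPage] = crosstab[landingPage] || {};
  crosstab[landingPage][q] = (crosstab[landingPage][q] || 0) + 1;
}

console.log(crosstab);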

Now perform the searches for your 20 keyword phrases (just click on the keywords in the report) to check how your pages look on the SERPs. If particular landing pages show up as search results for more than one search term, extract them all. Then load each landing page and view its source. Read the page first as rendered in your browser, then check out semantic hints in the source code, for example ALT or TITLE text and stuff like that. Look at the anchor text of incoming links (you can use link stats and anchor text stats from Google, We Build Pages Tools, …) and other ranking factors to understand why Google thinks this page is a good match for the search term. For each page, let the information sink in before you change anything.

If the page is not exactly a traffic generator for other targeted keywords, you can optimize it with regard to a better CTR for the keyword(s) it ranks for. Basically that means using the keyword(s) naturally in all page areas where it makes sense, and providing each occurrence with a context which hopefully makes it into the SERP’s snippet.

Make up a few natural sentences a searcher might have in mind when searching for your keyword(s). Write them down. Order them by their ability to fit the current page text in a natural way. Bear in mind that with personalized search Google could have scanned the searcher’s brain to add different contexts to the search query, so don’t concentrate too much on the keyword phrase alone, but on short sentences containing both the keyword(s), or their synonyms, and a sensible context as well.

There is no magic number like “use the keywords 5 times to get a #3 spot” or “7 occurrences of a keyword gain you a #1 ranking”. Optimal keyword density is a myth, so just apply common sense and don’t annoy human readers. One readable sentence containing the keyword(s) might suffice. Also, emphasizing keywords (EM/I, STRONG/B, eye catching colors …) makes sense because it helps catch the attention of scanning visitors, but don’t over-emphasize, because that looks crappy. The same goes for H2/H3/… headings: structure your copy, but don’t write in headlines. When you emphasize a word or phrase in (bold) red, don’t do it consistently but only in the most important sentence(s) of your page, and preferably only on the first visible screen of a longer page.

Work in your keyword+context laden sentences, but (again!) do it in a natural way. You’re writing for humans, not for algos which at this point already know what your page is all about and rank it properly. If your fine-tuning gains you a better ranking, that’s fine, but the goal is catching the attention of searchers reading (in most cases just skimming) your page title and a machine generated snippet on a search result page. Convince the algo to use your inserted sentence(s) in the snippet, not keyword lists from navigation elements or so.

Write a sensible summary of the page’s content, not more than 200-250 characters, and put it into the description meta tag. Do not copy the first paragraph or other text from the page; write the summary from scratch instead, and mention the targeted keyword(s). The first paragraph on the page can exceed the length of the meta description to deliver an overview of the page’s message, and it should provide the same information, preferably in the first sentence, but don’t make it longish.

Check the TITLE tag in HEAD: if it is truncated on the SERP, shorten it so that the keyword becomes visible; perhaps move the keyword(s) to the beginning, or create a neat page title around the keyword(s). Make title changes very carefully, because the title is an important ranking factor and your changes could result in a ranking drop. Some CMSs change the URL without notice when the title text changes, and you certainly don’t want to touch the URL at this point.

Make sure that the page title appears on the page too. Putting the TITLE tag’s content (or a slight variation of it) into an H1 element in BODY cannot hurt. If you for some weird reason don’t use H-elements, then at least format it prominently (bold, a different color but not red, bigger font size …).
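
Put together, the pieces line up like this (a made-up sketch around the “red widgets” page from above, not a template):

<!-- title, meta description and on-page heading all carry the keyword;
     all copy here is invented for illustration -->
<head>
  <title>Red Widgets: Choosing and Using Red Widgets</title>
  <meta name="description" content="How to choose red widgets that last: materials, sizing and maintenance tips for red widget buyers." />
</head>
<body>
  <h1>Red Widgets: Choosing and Using Red Widgets</h1>
  <p>Red widgets differ from green widgets in more than color. This guide covers materials, sizing and maintenance of red widgets.</p>
  <!-- ... -->
</body>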

If the page performs nicely with a couple of money terms and just has a crappy CTR for one particular keyword it ranks for, you can simply add a link pointing to a (new) page optimized for that keyword, with the keyword in the anchor text, preferably embedded in a readable sentence within the content (long enough to fill two lines under the linked title on the SERP), to improve the snippet. Adding a (prominent) link to a related topic should not impact rankings for other keywords too much, but the keywords submitted by searchers should appear in the snippet shortly after the next crawl. In such cases better don’t change the title, at least not now. If the page gained its ranking solely from the anchor text of inbound links, putting the search term on the page can give it a nice boost.

Make sure you get an alert when Ms. Googlebot fetches the changed pages, and check the SERPs and Google’s click stats a few days later. After a while you’ll get a pretty good idea of how Google creates snippets, and which snippets perform best on the SERPs. Repeat until success.

Related posts:
Google Quality Scores for Natural Search Optimization by Chris Silver Smith
Improve SERP-snippets by providing a good meta description tag by Raj Krishnan from Google’s Snippets Team




Killing Trolls in Google Groups

Are you tired of trolls and dickheads in the Google Groups? Then switch to Firefox, install Greasemonkey and Damian’s Google Groups KillFile. Go to your favorite groups and click “ignore user” to populate your ignore list. Without the trolling, your Usenet or Google Group will look way friendlier.
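
If you wonder what such a killfile does under the hood, here’s a toy sketch of the idea (not Damian’s actual script; the selectors and handles are assumptions, since Google Groups’ real markup differs and changes with every layout update):

// ==UserScript==
// @name     Killfile sketch
// @include  http://groups.google.com/*
// ==/UserScript==

// Hide every post whose author is on the ignore list. The real script
// also stores the list persistently and adds the "ignore user" links.
const ignoredUsers = ['troll1', 'troll2']; // hypothetical handles

for (const post of document.querySelectorAll('.post')) {
  const author = post.querySelector('.author');
  if (author && ignoredUsers.includes(author.textContent.trim())) {
    post.style.display = 'none'; // killfiled
  }
}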

You should read (and subscribe to) Damian’s troll-killer post and its comments though, just in case Google changes the layout again or there’s a bugfix. For example, with the troll filter activated, I don’t see threads where a troll posted the last reply.

Remember: the canonical treatment of trolls is ignoring their posts, regardless of a particular post’s insanity or the lack thereof. Not even insults, slander or name calling justify a reply to the troll. If necessary, forward the post to your lawyers, but don’t enter a discussion with a troll, because that feeds their ego and encourages them to produce even more crap.

Hat tip to ThoughtOn

Related post: What to do with [troll’s handle]? by JLH




Playing with Google Translate (still beta)

I use translation tools quite often, so after reading Google’s Udi Manber - Search is a Hard Problem I just had to look at Google Translate again.

Under Text and Web it offers the somewhat rough translations available from the toolbar and the links on SERPs. Usually I use that feature only with languages I don’t speak, to get an idea of the rough meaning, because the offered translation is, well, rough. Here’s an example: translating “Don’t make a fool of yourself” to German gives “einen Dummkopf nicht von selbst bilden”. That means “not forming a dullard of its own volition”, but Google’s reverse translation “a fool automatically do not educate” is even funnier.

Having at least rudimentary skills in a foreign language really helps with reading Google’s automated translations. Quite often the translation is just not understandable without knowledge of the other language’s grammar and distinctiveness. For example, my French is a bit rusty, so translating Le Monde to English leads to understandable text I can read way faster than the original. Italian to English is another story (my Italian skills should be considered “just enough for tourists”): the front page of la Repubblica is, partly due to the summarizing language, hard to read in Google’s English translation. Translated articles, on the other hand, are rather understandable.

By the way, the quality of translated news, technical writings or academic papers is much better than rough translations of everyday language, so don’t bother trying to get any sense out of translated forum posts and stuff like that. Probably that’s caused by the lack of trusted translations of these sources, which are necessary to train Google’s algos.

Google Translate fails miserably sometimes. Although Arabic-English is labelled “BETA”, it cannot translate even a single word from the most important source of news in Arabic, Al Jazeera: it just delivers a copy of the Arabic home page. Ok, that’s a joke; all the Arabic text is provided as images. Translations of Al Jazeera’s articles are terrific, way better than any automated translation from or to European languages I’ve ever seen. Comparing Google’s translation of the Beijing Review to the English edition makes no sense due to sync issues, but the automated translation looks great, and even the headlines make sense (semantically, not in their meanings - but what do I know, I’m not a Stalinist commie killing and jailing dissidents for practicing human rights like the freedom of speech).

On the second tab Google translates search results; that’s a neat way to research resources in other languages. You can submit a question in English, Google translates it on the fly into the other language, queries the search index with the translated search term, and delivers a bilingual search result page: English in the left column and the foreign language on the right side. I don’t like that the page titles are truncated, and the snippets are way too short to make sense in most cases. However, it is darn useful. Let’s test how Google translates her own pamphlets:

A search in English for [Google Webmaster guidelines] on German pages delivers understandable results. The second search result, “Der Ankauf von Links mit der Absicht, die Rangfolge einer Website zu verbessern, ist ein Verstoß gegen die Richtlinien für Webmaster von Google”, gets translated to “The purchase from left with the intention of improving the order of rank of a Website is an offence against the guidelines for Web master of Google”. Here it comes, straight from the horse’s mouth: Google’s very own Webmasters must not sell links in the left sidebar of pages on Google.com. I’m not a Webmaster at Google, so in my book that means I can remove the crappy nofollow from tons of links as long as I move them to the left sidebar. (Seriously: the German noun for “link” is “Verbindung” or “Verweis”, which both have tons of other meanings besides “hyperlink”, so everybody in Germany uses “Link” and the plural “Links”. But “links” also means “left” in German, and Google’s translator ignores capitalization as well as anglicisms. The German translation of “Google’s guidelines for Webmasters” as “Richtlinien für Webmaster von Google” is quite clumsy, by the way. It should read “Googles Richtlinien für Webmaster”, because “Webmaster von Google” really means “Webmasters of Google”, which in German is a synonym for “Google’s [own] Webmasters”.)

An extended search like [Google quality guidelines hidden links] for all sorts of terms from the guidelines, like “hidden text”, “cloaking”, “doorway page” and “sneaky redirects”, did not deliver a single page from google.de on the first SERP. (BTW, why is the page type described as “doorway page” in reality a “hallway page”, why doesn’t Google explain the characteristics of deceitful doorway pages, and why doesn’t Google explain that most (not machine generated) doorway pages are perfectly legit landing pages?) No wonder German Internet marketers are the worst spammers on earth when Google doesn’t tell them which particular techniques they should avoid. Hint for Riona: to improve findability, consider adding these terms untranslated to all foreign language versions of the help system. Hint for Matt: please admit that not each and every doorway page violates Google’s guidelines. A well done and compelling doorway page just highlights a particular topic, hence from a Webmaster’s as well as from a search engine’s perspective it’s perfectly legit “relevance bait” (I’ll resist calling it spider fodder, because it really isn’t that in particular).

Ok, back to the topic.

I really fell in love with the recently added third tab, Dictionary. This tool beats the pants off Babylon and other word translators when it comes to lookups of single words, but it lacks the reverse functionality provided by those tools, that is, the translation of phrases. And it’s Web based, so (for example) a middle mouse click on a word or phrase in any application except my Web browser with Google’s toolbar enabled doesn’t show the translation. Still, the quality of one-word lookups is terrific, and when you know how to search you get phrases too. Just play with it and get familiar with it; once you’ve got at least a rudimentary understanding of the other language you’ll often get the desired results.

Well, not always. Submitting “schlagen” (“beat”) in German-English mode when I’m searching for a phrase like “beats the pants off something” leads to “outmatch” (“übertreffen, (aus dem Felde) schlagen”) as the best match. In reverse (English-German), “outmatch” is translated to “übertreffen, (aus dem Felde) schlagen” without alternative or supplemental results, while “beat” has tons of German results, unfortunately without “beats the pants off something”.

I admit that’s unfair; according to the specs the dictionary thingy is not able to translate phrases (yet). The one-word translations are awesome, I just couldn’t resist maxing it out with my attempts to translate phrases. Hopefully Google renames “Dictionary” to “Words” and adds a “Phrases” tab soon.



