Unavailable_After is totally and utterly useless

I’ve a lot of respect for Dan Crow, but I’m struggling with my understanding, or possible support, of the unavailable_after tag. I don’t want to put my reputation for bashing such initiatives from search engines at risk, so sit back and grab your popcorn, here comes the roasting:

As a Webmaster, I did not find a single scenario where I could or even would use it. That’s because I’m a greedy traffic whore. A bazillion other Webmasters are greedy too. So how the heck is Google going to sell the newish tag to the greedy masses?

Ok, from a search engine’s perspective unavailable_after makes sound sense. Outdated pages bind resources, annoy searchers, and in a row of useless crap the next bad thing after an outdated page is intentional Webspam.

So convincing the great unwashed to put that thingy on their pages inviting friends and family to granny’s birthday party on 25-Aug-2007 15:00:00 EST would improve search quality. Not that family blog owners care about new meta tags, RFC 850-ish date formats, or search engine algos that rarely understand the announced party is history on Aug/26/2007. Besides, there may be painful aftermaths worth a desperate call for aspirin in the comments the day after, which would be news of the day after expiration. Kinda dilemma, isn’t it?

Seriously, unless CMS vendors support the new tag, tiny sites and clique blogs aren’t Google’s target audience. This initiative addresses large sites which are responsible for a huge amount of outdated contents in Google’s search index.

So what does the large site Webmaster gain from using the unavailable_after tag? A loss of search engine traffic. A loss of the link juice gained by the expired page. And so on. Losses of any kind are not that helpful when it comes to an overdue raise, nor in salary negotiations. Hence the Webmaster is asking for the sack when s/he implements Google’s traffic terminator.

Who cares about Google’s search quality problems when caring leads to traffic losses? Nobody. Caring Webmasters do the right thing anyway. And they don’t need no more useless meta tags like unavailable_after. “We don’t need no stinking metas” from “Another Brick in the Wall Part Web 2.0” expresses my thoughts perfectly.

So what separates the caring Webmaster from the ‘ruthless traffic junkie’ whom Google wants to adopt the unavailable_after tag? The traffic junkie lets his stuff expire without telling Google about its state, is happy that frustrated searchers click the URL from the SERPs even years after the event, and enjoys the earnings from tons of ads placed above the content minutes after the party was over. Dear Google, you can’t convince this guy.

[It seems this is a post about repetitive “so whats”. And I came to the point before the 4th paragraph … wow, that’s new … and I’ve put a message in the title which is not even meant as link bait. Keep on reading.]

So what does the caring Webmaster do without the newish unavailable_after tag? Business as usual. Examples:

Say I run a news site where the free contents go to the subscription area after a while. I’d closely watch which search terms generate traffic, write a search engine optimized summary containing those keywords, put that on the sales pitch, and move the original article to the archives accessible to subscribers only. It’s not my fault that the engines think they point to the original article after the move. When they recrawl and reindex the page my traffic will increase because my summary fits their needs even better.

Say I run an auction site. Unfortunately particular auctions expire, but I’m sure that the offered products will return to my site. Hence I don’t close the page, but I search my database for similar offerings and promote them under an H3 heading like <h3>[product] (stuffed keywords) is hot</h3> <p>buy [product] here:</p> followed by a list of identical products for sale or similar auctions.

Say I run a poll expiring in two weeks. With Google’s newish near real time indexing that’s enough time to collect keywords from my stats, so the textual summary under the poll’s results will attract the engines as well as visitors once the poll is closed. Also, many visitors will follow the links to related or new polls.

From Google’s POV there’s nothing wrong with my examples, because the visitor gets what s/he was searching for, and I didn’t cheat. Now tell me, why should I give up these valuable sources of nicely targeted search engine traffic just to make Google happy? Rather I’d make my employer happy. Dear Google, you didn’t convince me.

Update: Tanner Christensen posted a remarkable comment at Sphinn:

I’m sure there is some really great potential for the tag. It’s just none of us have a need for it right now.

Take, for example, when you buy your car without a cup holder. You didn’t think you would use it. But then, one day, you find yourself driving home with three cups of fruit punch and no cup holders. Doh!

I say we wait it out for a while before we really jump on any conclusions about the tag.

John Andrews was the first to report an evil use of unavailable_after.

Also, Dan Crow from Google announced a pretty neat thing in the same post: With the X-Robots-Tag you can now apply crawler directives valid in robots meta tags to non-HTML documents like PDF files or images.
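For illustration, here’s a minimal sketch of how such a header could be sent from a PHP script serving a PDF. The file name is made up, and the date string follows the unavailable_after examples floating around; double check Google’s documentation for the exact syntax:

<?php
// Hypothetical example: a PDF can't carry a robots meta tag, so the
// crawler directives go into the HTTP response header instead.
header('Content-Type: application/pdf');
// Drop this document from the index after the given date
// (date format as used in the unavailable_after announcement; verify before use):
header('X-Robots-Tag: unavailable_after: 25-Aug-2007 15:00:00 EST');
// Other robots meta tag values work the same way, e.g.:
// header('X-Robots-Tag: noindex, noarchive');
readfile('/path/to/party-invitation.pdf');
?>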




Analyzing search engine rankings by human traffic

Recently I’ve discussed ranking checkers at several places, and I’m quite astonished that folks still see some value in ranking reports. Frankly, ranking reports are –in most cases– a useless waste of paper and/or disk space. That does not mean that SERP positions per keyword phrase aren’t interesting. They’re just useless without context, that is, traffic data. Converting traffic pays the bills, not rankings alone. The truth is in your traffic data.

That said, I’d like to outline a method to get a particularly useful piece of information out of raw traffic data: underestimated search terms. That’s not a new idea, and perhaps you have the reports already, but maybe you don’t look at the information that is somewhat hidden in stats ordered by success, not failure. And you should be (or employ) a programmer to implement it.

The first step is gathering data. Create a database table to record all hits, then, in a footer include or similar, after the complete page has been output, write all the data you have into that table. All data means URL, timestamp, and variables like referrer, user agent, IP, language and so on. Be a data rat, log everything you can get hold of. With dynamic sites it’s easy to add page title, (product) IDs etcetera; with static sites write a tool to capture these attributes separately.
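A minimal sketch of such a footer include, assuming PHP with PDO and a raw log table named hits_raw (all table, column and credential names are invented):

<?php
// footer-logger.php -- include this at the very end of each page,
// after the complete page has been sent to the browser.
$pdo = new PDO('mysql:host=localhost;dbname=traffic', 'user', 'password');
$stmt = $pdo->prepare(
    'INSERT INTO hits_raw (url, ts, referrer, user_agent, ip, lang, page_title)
     VALUES (?, NOW(), ?, ?, ?, ?, ?)'
);
$stmt->execute(array(
    $_SERVER['REQUEST_URI'],
    isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '',
    isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '',
    $_SERVER['REMOTE_ADDR'],
    isset($_SERVER['HTTP_ACCEPT_LANGUAGE']) ? $_SERVER['HTTP_ACCEPT_LANGUAGE'] : '',
    isset($pageTitle) ? $pageTitle : ''  // set by the page template on dynamic sites
));
?>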

For performance reasons it makes sense to work with a raw data table, which has just a primary key, to log the requests, and normalized working tables which have lots of indexes to allow aggregations, ad hoc queries, and fast reports from different perspectives. Also think of regularly purging the raw log table, and of historization. While transferring raw log data to the working tables in low traffic hours, or on another machine, you can calculate interesting attributes and add data from other sources which were not available to the logging process.
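The transfer and purge job, run from cron in low traffic hours, could look roughly like this (again, all names invented):

<?php
// transfer.php -- run from cron during low traffic hours.
$pdo = new PDO('mysql:host=localhost;dbname=traffic', 'user', 'password');

// Remember how far we got, so nothing logged during the transfer is lost.
$maxId = (int) $pdo->query('SELECT MAX(id) FROM hits_raw')->fetchColumn();

// Copy raw hits into the indexed working table. Extra attributes
// (search engine, search term, SERP number, ...) get calculated here,
// or in a separate step against hits_work.
$ins = $pdo->prepare('INSERT INTO hits_work (url, ts, referrer, user_agent, ip)
                      SELECT url, ts, referrer, user_agent, ip
                      FROM hits_raw WHERE id <= ?');
$ins->execute(array($maxId));

// Purge the transferred rows so the raw table stays small and write-friendly.
$del = $pdo->prepare('DELETE FROM hits_raw WHERE id <= ?');
$del->execute(array($maxId));
?>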

You’ll need that traffic data collector anyway for a gazillion purposes where your analytics software fails, is not precise enough, or just can’t deliver a particular evaluation perspective. It’s a prerequisite for the method discussed here, but don’t build a monster-sized cannon to chase a fly. You can gather search engine referrer data from logfiles too.

For example, one interesting piece of information is on which SERP a user clicked a link pointing to your site. Simplified, you need three attributes in your working tables to store this info: search engine, search term, and SERP number. You can extract these values from the HTTP_REFERER.

http://www.google.com/search?q=keyword1+keyword2&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

1. “google” in the server name tells you the search engine.
2. The “q” variable’s value tells you the search term “keyword1 keyword2”.
3. The lack of a “start” variable tells you that the result was placed on the first SERP. The lack of a “num” variable lets you assume that the user got 10 results per SERP, so it’s quite safe to say that you rank in the top 10 for this term. Actually, the number of results per page is not always extractable from the URL because it’s usually pulled from a cookie, but not many surfers change their preferences (e.g. less than 0.5% surf with 100 results, according to JohnMu and my data as well). If you’ve got a “num” value, convert the position range (start+1 to start+num) into 10-results-per-page SERP numbers to make the data comparable. If that’s not precise enough you’ll spot it afterwards, and you can always recalculate SERP numbers from the canned referrer.

http://www.google.co.uk/search?q=keyword1+keyword2&hl=en&start=10&sa=N

1. and 2. as above.
3. The “start” variable’s value 10 tells you that you got a hit from the second SERP. When start=10 and there is no “num” variable, most probably the searcher got 10 results per page.

http://www.google.es/search?q=keyword1+keyword2&rls=com.microsoft:*&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1

1. and 2. as above.
3. The empty “startIndex” variable and startPage=1 are useless, but the lack of “start” and “num” tells you that you’ve got a hit from the first Spanish SERP.

http://www.google.ca/search?q=keyword1+keyword2&hl=en&rls=GGGL,GGGL:2006-30,GGGL:en&start=20&num=20&sa=N

1. and 2. as above.
3. num=20 tells you that the searcher views 20 results per page, and start=20 indicates the second SERP, so you rank between #21 and #40, thus the (averaged) SERP# is 3.5 (provided SERP# is not an integer in your database).

You got the idea; here is a cheat sheet and official documentation on Google’s URL parameters. Analyze the URLs in your referrer logs and call them with cookies off, which disables your personal search preferences, then play with the values. Do that with other search engines too.
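Here’s a rough referrer parsing sketch along the lines above. It’s a simplification under assumptions of my own (function name, field names and the host check are invented), but it reproduces the SERP numbers from the examples:

<?php
// Extract search engine, search term, and (approximate) SERP number
// from an HTTP referrer. Returns null for non-search referrers.
function parseSearchReferrer($referrer) {
    $parts = parse_url($referrer);
    if (!isset($parts['host']) || !isset($parts['query'])) return null;
    if (strpos($parts['host'], 'google.') === false) return null;   // 1. search engine

    parse_str($parts['query'], $q);
    if (!isset($q['q']) || $q['q'] === '') return null;             // 2. search term

    $num   = isset($q['num']) && $q['num'] > 0 ? (int) $q['num'] : 10;  // results per page
    $start = isset($q['start']) ? (int) $q['start'] : 0;                // offset of first result

    // 3. SERP number, normalized to 10 results per page: the hit sits somewhere
    // between position $start+1 and $start+$num, so average the page numbers.
    $firstPage = ceil(($start + 1) / 10);
    $lastPage  = ceil(($start + $num) / 10);
    $serp = ($firstPage + $lastPage) / 2;

    return array(
        'engine' => 'google',
        'term'   => $q['q'],
        'serp'   => $serp,
    );
}

// Example: start=20&num=20 yields SERP 3.5, as in the last referrer above.
print_r(parseSearchReferrer(
    'http://www.google.ca/search?q=keyword1+keyword2&hl=en&start=20&num=20&sa=N'
));
?>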

Now a subset of your traffic data has a value in “search engine”. Aggregate the tuples where search engine is not NULL, then select the results, for example where the SERP number is lower than or equal to 3.99 (respectively 4), ordered by SERP number ascending, hits descending and keyword phrase, with a break by search engine. (Why not order by traffic descending in the first place? You already have a report of your best performing keywords.)
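In SQL terms, assuming the working table meanwhile carries the three attributes plus one row per hit (column names invented), the report query could look like this:

<?php
// Hypothetical report: keywords you rank for on the first four SERPs,
// broken down by search engine.
$sql = "
    SELECT search_engine, search_term, serp_number, COUNT(*) AS hits
    FROM   hits_work
    WHERE  search_engine IS NOT NULL
      AND  serp_number <= 3.99
    GROUP  BY search_engine, search_term, serp_number
    ORDER  BY search_engine, serp_number ASC, hits DESC, search_term
";
$pdo = new PDO('mysql:host=localhost;dbname=traffic', 'user', 'password');
foreach ($pdo->query($sql) as $row) {
    echo $row['search_engine'], "\t", $row['serp_number'], "\t",
         $row['hits'], "\t", $row['search_term'], "\n";
}
?>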

The result is a list of search terms you rank for on the first 4 SERPs, beginning with keywords you’ve probably not optimized for. At least you didn’t optimize the snippet to improve CTR, so your ranking doesn’t generate a reasonable amount of traffic. Before you study the report, throw away your site owner hat and try to think like a consumer. Sometimes those make use of a vocabulary you didn’t think of before.

Research promising keywords, and decide whether you want to push, bury or ignore them. Why bury? Well, in some cases you just don’t want to rank for a particular search term, [your product sucks] being just one example. If the ranking is fine, the search term smells somewhat lucrative, and just the snippet sucks in a particular search query’s context, enhance your SERP listing.

Every once in a while you’ll discover a search term making a killing for your competitors whilst you never spotted it because your stats package reports only the best 500 monthly referrers or so. Also, you’ll get the most out of your rankings by optimizing their SERP CTRs.

Be creative; over time your traffic database becomes more and more valuable, allowing other unconventional and/or site specific reports which off-the-shelf analytics software usually does not deliver. Most probably your competitors use standard analytics software, so individually developed algos and reports can make a difference. That does not mean you should throw away your analytics software to reinvent the wheel. However, once you’re used to self-developed analytic tools, you’ll think of more interesting methods (not only to analyze and monitor rankings by human traffic) than you can implement in this century ;)

Bear in mind that the method outlined above does not and cannot replace serious keyword research.

Another –very popular– approach to get this info would be automated ranking checks mashed up with hits by keyword phrase. Unfortunately, Google and other engines do not permit automated queries for the purpose of ranking checks, and this method works with preselected keywords, which means you don’t find (all) search terms created by users. Even when you compile your ranking checker’s keyword lists via various keyword research tools, you’ll still miss interesting keywords that never made it into your seed list.

Related thoughts: Why regular and automated ranking checks are necessary when you operate seasonal sites by Donna




Rediscover Google’s free ranking checker!

Nowadays we’re searching via toolbar, personalized homepage, or in the browser address bar: by typing in “google” to get the search box, typing in a search query using the “I’m Feeling Lucky” functionality, or -my favorite- typing in google.com/search?q=free+pizza+service+nearby.

Old fashioned, uncluttered and nevertheless sexy user interfaces are forgotten, and pretty much disliked due to the lack of nifty rounded corners. Luckily Google still maintains them. Look at this beautiful SERP:
[Screenshot: Google’s free ranking checker]
It’s free of personalized search, wonderfully uncluttered because the snippets appear as tooltips only, results are nicely numbered from 1 to 1,000 on just 10 awesomely fast loading pages, and when I’ve visited my URLs before I spot my purple rankings quickly.

http://google.com/ie?num=100&q=keyword1+keyword2 is an ideal free ranking checker. It supports &filter=0 and other URL parameters, so it’s a perfect tool when I need to look up particular search terms.

Mass ranking checks are totally and utterly useless, at least for the average site, and penalized by Google. Well, I can think of ways to semi-automate a couple queries, but honestly, I almost never need that. Providing fully automated ranking reports to clients gave SEO services a more or less well deserved snake oil reputation, because nice rankings for preselected keywords may be great ego food, but they don’t pay the bills. I admit that with some setups automated mass ranking checks make sense, but those are off-topic here.

By the way, Google’s query stats are a pretty useful resource too.




Blogger to rule search engine visibility?

Via Google’s Webmaster Forum I found this curiosity:
http://www.stockweb.blogspot.com/robots.txt

User-agent: *
Disallow: /search
Disallow: /

A standard robots.txt at *.blogspot.com looks different:

User-agent: *
Disallow: /search
Sitemap: http://*.blogspot.com/feeds/posts/default?orderby=updated

According to the blogger, the blog is not private (which would have explained the crawler blocking):

It is a public blog. In the past it had a standard robots.txt, but 10 days ago it changed to “Disallow: /”

Copyscape thinks that the blog in question shares a fair amount of content with other Web pages. So does blog search:
http://stockweb.blogspot.com/2007/07/ukraine-stock-index-pfts-gained-97-ytd.html
has a duplicate, posted by the same author, at
http://business-house.net/nokia-nok-gains-from-n-series-smart-phones/,
http://stockweb.blogspot.com/2007/07/prague-energy-exchange-starts-trading.html
is reprinted at
http://business-house.net/prague-energy-exchange-starts-trading-tomorrow/
and so on. Probably a further investigation would reveal more duplicated contents.

It’s understandable that Blogger is not interested in wasting Google’s resources by letting Ms. Googlebot crawl the same contents from different sources. But why do they block other search engines too? And why do they block the source (the posts reprinted at business-house.net state “Originally posted at [blogspot URL]”)?

Is this really censorship, or just a software glitch, or is it all the blogger’s fault?

Update 07/26/2007: The robots.txt reverted to standard contents for unknown reasons. However, with a shabby link neighborhood as expressed in the blog’s footer I doubt the crawlers will enjoy their visits. At least the indexers will consider this sort of spider fodder nauseating.




Hey, there is content in the widgets!

Yeah, I do know the layout of this blog is somewhat cluttered. Especially the sidebar with all the JS calls slowing down page loads. Not that Blogger page load times are exciting at all, especially not with the classic template. Forgive me, I just can’t stay away from fancy stuff.

Perhaps you’re not exactly interested in my twits telling you that my monsters are asleep and I can code untroubled, or that I’ve dugg or sphunn my friends’ posts. Perfectly legit votes of course, since we share so many interests that I often like what my buddies write and submit to whatever social bookmarking services or communities.

Of course you couldn’t care less about stats like how many blogs in the Technorati universe (which is a tiny subset of the GoogleBlogSearch universe, which is a tiny subset of the blogosphere, which is a tiny subset of the Web … Ok, you don’t give a f***) link to my pamphlets. Actually, here you could help me out: just put me on your blogroll. Honestly, the lack of backlinks is scandalous. Everybody reads my stuff but very few of you dear readers link to me. I don’t consider scrapers readers, so their links don’t count. Since my audience consists of 99% Webmasters, I hope all of you understand the syntax of my beloved A element. I promote lots of nice folks in my diverse blogroll sections, but very few return the honor. Not even the Google blog lists me under “What We’re Reading” (please notice the capital “W” indicating a pluralis majestatis), although I spam FeedFetcher with Google bashing quite frequently. Weird …

And no, the MBL users list doesn’t count as content (but it’s nice to see who visited), and the AdSense stuff is just informational (and remains unclicked by the way, you guys and gals are way too savvy). Oops, I did it again: four inexpressive paragraphs before I come to the point. A vice of mine.

Since I add widgets when I discover them, you have to scroll down for the GoogleReader thingy. It’s titled “Sebastian’s picked gems”, and I mean that.

When I stumble upon a great post, I share it. That does not mean that I agree 100%, perhaps I even disagree 100%, but when I share a post I believe it’s worth reading. Honestly, you wouldn’t read my pamphlets if you didn’t share (a few of) my pet peeves, would you?

I guess it’s safe to assume that you’ll enjoy reading my shared articles. Good news is, you can subscribe to the feed of my selected readings. I don’t recycle news, so I don’t blog every tidbit I find on the ‘Net. Hence you should subscribe to the feed and read the content I’d like to have on my blog although I’m too busy (Ok Ok, that’s just a lame excuse for laziness) to publish it myself.

If you read my blog in your preferred feed reader, you’ll miss out on some exciting stuff!




Buying cheap viagra algorithmically

Since Google can’t manage to clean up [Buy cheap viagra] let’s do it ourselves. Go seek a somewhat trusted search blog mentioning “buy cheap viagra” somewhere in the archives and link to the post with a slightly diversified anchor text like “how to buy cheap viagra online”. Matt deserves a #1 spot by the way so spread many links …

Then when Matt is annoyed enough and Google has kicked out the unrelated stuff from this search hopefully my viagra spam will rank as deserved again ;)

Update a few hours later: Matt ranks #1 for [buy cheap viagra algorithmically]:
[Screenshot: Matt Cutts’s first spot for [buy cheap viagra algorithmically]]
His ranking for [buy cheap viagra] fell about 10 positions to #17, but for [buy cheap viagra online] he’s still on the first SERP, now at position #10 (#3 yesterday). Interesting. It seems that Google’s newish turbo blog indexing influences the rankings of pages linked from blog posts rather quickly, but the effect isn’t exactly long lasting.

Related posts:
Negative SEO At Work: Buying Cheap Viagra From Google’s Very Own Matt Cutts - Unless You Prefer Reddit? Or Topix? by Fantomaster
Trust + keywords + link = Good ranking (or: How Matt Cutts got ranked for “Buy Cheap Viagra”) by Wiep




Getting the most out of Google’s 404 stats

The 404 reports in Google’s Webmaster Central panel are great to debug your site, but they also contain URLs generated by invalid –respectively truncated– URL drops or typos of other Webmasters. Are you sick of wasting the link love from invalid inbound links, just because you lack a suitable procedure to 301-redirect all these 404 errors to canonical URLs?

Your pain ends here. At least when you’re on a *nix server running Apache with PHP 4+ or 5+ and .htaccess enabled. (If you suffer from IIS, go search for another hobby.)

I’ve developed a tool which grabs all 404 requests, letting you map a canonical URL to each 404 error. The tool captures and records 404s, and you can add invalid URLs from Google’s 404 reports if these haven’t been recorded (yet) from requests by Ms. Googlebot.

It’s kinda layer between your standard 404 handling and your error page. If a request results in a 404 error, your .htaccess calls the tool instead of the error page. If you’ve assigned a canonical URL to an invalid URL, the tool 301-redirects the request to the canonical URL. Otherwise it sends a 404 header and outputs your standard 404 error page. Google’s 404-probe requests during the Webmaster Tools verification procedure are unredirectable (is this a word?).

Besides 1:1 mappings of invalid URLs to canonical URLs you can assign keywords to canonical URLs. For example you can define that all invalid requests go to /fruit when the requested URI or the HTTP referrer (usually a SERP) contains the strings “apple”, “orange”, “banana” or “strawberry”. If there’s no persistent mapping, these requests get 302-redirected to the guessed canonical URL, thus you should view the redirect log frequently to find invalid URLs which deserve a persistent 301-redirect.

Next there are tons of bogus requests from spambots searching for exploits or whatever, or hotlinkers, resulting in 404 errors, where it makes no sense to maintain URL mappings. Just update an ignore list to make sure those get 301-redirected to example.com/goFuckYourself or a cruel and scary image hosted on your domain or a free host of your choice.

Everything not matching a persistent redirect rule or an expression ends up in a 404 response, as before, but logged so that you can define a mapping to a canonical URL. Also, you can use this tool when you plan to change (a lot of) URLs, it can 301-redirect the old URL to the new one without adding those to your .htaccess file.
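The script itself isn’t published here, but a stripped-down sketch of the general idea might look like the following; all file names, URLs and patterns are invented, and the .htaccess part is just the ErrorDocument directive shown in the first comment:

<?php
// 404-handler.php -- wired up in .htaccess with:
//   ErrorDocument 404 /404-handler.php
// so Apache calls this script instead of the static error page.

$requestUri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '';
$referrer   = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

// 1. Bogus requests (exploit scans, hotlinkers, ...): permanent redirect to a dead end.
$ignorePatterns = array('/_vti_bin/', 'xmlrpc.php', '.dll');
foreach ($ignorePatterns as $pattern) {
    if (stripos($requestUri, $pattern) !== false) {
        header('Location: http://example.com/goFuckYourself', true, 301);
        exit;
    }
}

// 2. Persistent 1:1 mappings of invalid URLs to canonical URLs: 301.
$canonicalMap = array('/old-page.html' => '/new-page/');
if (isset($canonicalMap[$requestUri])) {
    header('Location: http://example.com' . $canonicalMap[$requestUri], true, 301);
    exit;
}

// 3. Keyword guesses from the requested URI or the referring SERP: temporary 302,
//    to be reviewed in the redirect log and turned into persistent 301s later.
$keywordMap = array('apple' => '/fruit', 'orange' => '/fruit', 'banana' => '/fruit');
foreach ($keywordMap as $keyword => $target) {
    if (stripos($requestUri, $keyword) !== false || stripos($referrer, $keyword) !== false) {
        header('Location: http://example.com' . $target, true, 302);
        exit;
    }
}

// 4. Everything else: log the URL for later mapping, then serve a plain 404.
error_log(date('c') . "\t404\t" . $requestUri . "\t" . $referrer . "\n", 3, '/var/log/404-map.log');
header('HTTP/1.1 404 Not Found');
include '404-error-page.html';
?>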

I’ve tested this tool for a while on a couple of smaller sites and I think it can get trained to run smoothly without too many edits once the ignore lists etcetera are up to date, that is, matching the site’s requisites. A couple of friends got the script and they will provide useful input. Thanks! If you’d like to join the BETA test, drop me a message.

Disclaimer: All data get stored in flat files. With large sites we’d need to change that to a database. The UI sucks, I mean it’s usable but it comes with the browser’s default fonts and all that. IOW the current version is still in the stage of “proof of concept”. But it works just fine ;)




Sphinn rocks

Thanks to Danny’s crew we’ve got a promising search geek community site. Since I’ve recently started to deal with invites, here is the top secret link where you get your free Sphinn invite. Click it now and join today, as Gorbachev said ‘those who are late will be punished by life itself’ ;)

Previous experiments revealed that my pamphlets aren’t diggworthy, despite the presence of OL/UL lists. Because I mention search and stuff like that every once in a while, I decided to submit a horror story to Sphinn to test the waters over there.

Adding Sphinn-it! widgets to my posts hopefully helps promoting Sphinn, but with Blogger that turned into kinda nightmare. To prevent you from jumping through infinite try-and-error hoops, here is how it works:

Classic templates:

Search for $BlogItemBody$ and below the </div> put

<script type='text/javascript'>submit_url='<$BlogItemPermalinkUrl$>';</script>
<script src='http://sphinn.com/evb/button.php' type='text/javascript'/></script>

(Blogger freaks out when you omit the non-standard ;</script> after the self-closing second tag, hence stick with the intentional syntax error.)

Newish templates:

Check “Expand Widget Templates”

Search for data:post.body/ and below the </p> put

<b:if cond='data:post.url'>
<p><script type='text/javascript'>submit_url='<data:post.url/>';</script>
<script src='http://sphinn.com/evb/button.php' type='text/javascript'/></p>
</b:if>

(After saving the changes Blogger replaces some single quotes with HTML entities, but it works though. Most probably one could do that in a more elegant way, but once I saw the badges pointing to the correct URL –both in the posts and on the main page– I gave up.)

Have fun sphinning my posts!




Google helps those who help themselves

And if that’s not enough to survive on Google’s SERPs, try Google’s Webmaster Forum, where you can study Adam Lasnik’s FAQ, which covers even questions the Webmaster Help Center provides no comprehensive answer for (yet), and where Googlers working in Google’s Search Quality, Webspam, and Webmaster Central teams hang out. Google dumps all sorts of questioners to the forum, where a crowd of hardcore volunteers (aka regulars, as Google calls them) invests a lot of time to help out Webmasters and site owners facing problems with the almighty Google.

Despite the sporadic posts by Googlers, the backbone of Google’s Webmaster support channel is this crew of regulars from all around the globe. Google monitors the forum for input and trends, and intervenes when the periodic scandal escalates every once in a while. Apropos scandal … although the list of top posters mentions a few of the regulars, bear in mind that trolls come with a disgustingly high posting cadence. Fortunately, currently the signal drowns the noise (again), and I very much appreciate that the Googlers participate more and more.

Some of the regulars like seo101 don’t reveal their URLs and stay anonymous. So here is an incomplete list of folks giving good advice:

If I’ve missed anyone, please drop me a line (I stole the list above from JLH and Red Cardinal, so it’s all their fault!).

So when you’re a Webmaster or site owner, don’t hesitate to post your Google related question (but read the FAQ before posting, and search for your topics); chances are one of these regulars or even a Googler offers assistance. Otherwise, if you’re questionless but carry a swag of valuable answers, join the group and share your knowledge. Finally, when you’re a Googler, give the sites linked above a boost on the SERPs ;)

Micro-meme started by John Honeck, supported by Richard Hearne, Bert Vierstra




Now Powncing

John, thanks for the invite! Inspired by all the twits about Pownce I submitted my email addy too. What a useless procedure. From inside there’s no list of submitted email addresses to pick friends from. Or I’m too blind to find that page.

Probably the best procedure to get rid of the 6 invites is to sell them at eBay. Perhaps Pownce releases 6 new invites then and I get rich quick. Wait … I’ve a better idea. Submit your honest review of this blog in the comments and send me the email addy for your invite. If your piece is funny or honest or vilifying enough to make me laugh I might invite you ;)

Ok, so what separates Pownce from Twitter and WS_FTP? Here are my first impressions.

Unfortunately, I will never see the ads. Hectic clicking on all links signed me up as a pro member by accident. Now Pownce blemishes my cute red crab with a “pro” label. I guess I got what I paid for. Paid? Yep, that’s the first difference, Pownce is not completely free. Spamming friends in 100 meg portions costs an annual fee of 20 bucks.

Next difference. There is no 140 bytes per message limit. Nice. And the “Send to” combo box is way more comfortable than the corresponding functionality at Twitter. I miss Twitter’s “command line options” like “d username” and “@username”. Sounds schizophrenic perhaps, but I’m just greedy.

I figured out how to follow someone without friending. Just add somebody as a friend and (you don’t need to) wait for the decline; this makes you a fan of the other user. You get their messages but not the other way round. Twitter’s “add as friend” and “follow user” are clearer, I think.

Searching for the IM setup I learned there’s none. Pownce expert John said I have to try the desktop thingy, but it looks like AIM 1999, so I refuse the download and stick with the Web interface until Pownce interacts with GTalk. The personal Pownce page has a refresh link at least, but no auto-refresh like Twitter.

There’s no way to bookmark messages or threads yet, and the link to a particular message is somewhat obfuscated. The “email a bug report” link is a good replacement for a “beta” label. I guess I’ll use it to tell Pownce that I hate their link manipulation applying rel-nofollow crap. I’ll play with the other stuff later on, the daddy-cab is due at the kindergarten. Hopefully, when I return, there will be a Pownce badge available for this blog; I’ve plenty of white space left on my sidebar.


Back, still no badge, but I realized that I forgot to mention the FTP similarities. And there is no need to complete this post, since I found Tamar’s brilliant Twitter vs. Pownce article.

Update: How to post to Twitter and Pownce at the same time (a Twitterfeed work around, I didn’t test this configuration)



