I want more Jaggers!

Jagger-1 was good to me, and what I’ve seen from Jagger-2 pleases me even more. I can’t wait for Jagger-3! Dear folks at Google, please continue the Jagger series and roll out a new Jagger weekly; I’d love to see my Google traffic double every week! In return I’ll double the time I spend on Google user support in your groups. Thanks in advance!


Smart Web Site Architects Provide Meaningful URLs

From a typical forum thread on user/search engine friendly Web site design:

Question: Should I provide meaningful URLs carrying keywords and navigational information?

Answer 1: Absolutely, if your information architecture and its technical implementation allow the use of keyword-rich, hyphenated URLs.

Answer 2: Bear in mind that URLs are unchangeable, so first develop a suitable information architecture and a flexible Web site structure. You’ll find that folders and URLs are the last thing to think about.

Question: WTF do you mean?

Answer: Put it this way: it makes no sense to paint a house before the architect has finished the blueprints.
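
To illustrate what Answer 1’s “keyword-rich, hyphenated URLs” look like in practice, here’s a minimal sketch (Python; the function names, the category path and the URL scheme are all hypothetical, for illustration only) that derives a hyphenated slug from a page title and prefixes it with the navigational path:

```python
import re
import unicodedata

def slugify(text):
    """Turn a page title into a lowercase, hyphen-separated keyword slug."""
    # Drop accents, keep ASCII letters and digits, collapse everything else to hyphens.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    return re.sub(r"[^a-zA-Z0-9]+", "-", text).strip("-").lower()

def build_url(category_path, page_title):
    """Compose a meaningful URL from the navigational path plus the page's keywords."""
    parts = [slugify(part) for part in category_path] + [slugify(page_title)]
    return "/" + "/".join(parts) + "/"

# Example: a product page filed under Garden > Tools
print(build_url(["Garden", "Tools"], "Cordless Hedge Trimmer 18V"))
# -> /garden/tools/cordless-hedge-trimmer-18v/
```

Answer 2’s point still stands: the category path only exists once the information architecture is settled, which is exactly why folders and URLs come last.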


MySQL’s ODBC Driver 3.51 drives me nuts

ODBC drivers can drive me crazy, especially when the driver is the last place I look because I assumed this darn thing to be a well-developed and well-tested piece of open source software.

A site I’m working on collects log data in a MySQL table, counting page views per landing page, referrer page and SE search terms. The stats are nice, but pretty much useless, because with such a structure it’s hard to create summaries and keyword analyses.

Luckily Progress OpenEdge was available, so I thought it should be possible to read the MySQL table via ODBC from the Web server and create all reports with Progress, which has great temp-table support, amazingly fast word indexing, and can handle billions of large records with ease.

Well, I downloaded, installed and configured the MySQL ODBC Driver 3.51 and ran a successful connection test. So far so good, but then the nightmare began. With Progress I couldn’t create the ODBC dataserver instance, and as always the error messages were misleading.

To make a long story short, the current MySQL ODBC driver lacks so much functionality that it cannot work. The answer is buried in the PEG mailing list archive, which is not fully indexed by Google. Gus Bjorklund from Progress Software states: “The ODBC dataserver will not work due to a variety of functions not implemented in the MySQL ODBC driver … As people who have tried it have discovered, MySQL does not yet have a complete enough implementation of the SQL DML”.

Frustrating. Back to the stone age. Oops. Transferring full table dumps failed due to the sheer amount of data. Aaahhhrrrggg. Developing a Web service in PHP that sends selected data in handy batches makes my day.
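
For what it’s worth, the batching idea looks roughly like this. This is a minimal Python sketch of the concept, not the actual PHP service; the pageview_log table, its columns, the driver choice (pymysql) and the consumer are all assumptions:

```python
import json
import pymysql  # assumption: any MySQL driver with a DB-API cursor would do

BATCH_SIZE = 1000  # "handy batches" instead of one huge table dump

def export_batches(conn):
    """Stream the log table in id-keyed batches, one serialized chunk per batch."""
    last_id = 0
    with conn.cursor() as cur:
        while True:
            cur.execute(
                "SELECT id, landing_page, referrer, search_terms, hits "
                "FROM pageview_log WHERE id > %s ORDER BY id LIMIT %s",
                (last_id, BATCH_SIZE),
            )
            rows = cur.fetchall()
            if not rows:
                break
            yield json.dumps(rows, default=str)  # one handy chunk per batch
            last_id = rows[-1][0]                # resume after the last exported id

# conn = pymysql.connect(host="localhost", user="stats", password="...", database="weblog")
# for chunk in export_batches(conn):
#     send_to_reporting_box(chunk)  # hypothetical: whatever consumes the data on the other end
```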

However, does anybody have (or know of) a fully working ODBC driver for MySQL?


Duplicate Content Filters are Sensitive Plants

In their everlasting war on link and index spam, search engines produce way too much collateral damage. Hierarchically structured content in particular suffers from over-sensitive spam filters. The crux is that user-friendly pages need to duplicate information from upper levels. The old rule “what’s good for users will be honored by the engines” no longer applies.

In fact the problem is not the legitimate duplication of key information from other pages; the problem is that duplicate content filters are sensitive plants, unable to distinguish useful repetition from automatically generated artificial spider fodder. The engines won’t lower their spam thresholds, which means they will not fix this persistent bug in the near future, so Web site owners either have to live with decreasing search engine traffic or react. The question is: what can a Webmaster do to escape the dilemma without turning the site into a useless nightmare for visitors by eliminating every textual redundancy?

The major fault of Google’s newer dupe filters is that their block-level analysis often fails at categorizing page areas. Page elements in and near the body area which contain duplicated key information from upper levels are treated as content blocks, not as part of the page template where they logically belong. As long as those text blocks reside in separate HTML block-level elements, it should be quite easy to rearrange them so that the duplicated text becomes part of the page template, which should be safe at least with somewhat intelligent dupe filters.

Unfortunately, very often the raw data aren’t normalized; for example, the duplicated text lives inside a description field in a database’s products table. That’s a major design flaw, and it must be corrected before the block-level elements can be handled properly, that is, declared as part of the template versus part of the page body.
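
As an illustration of that last point, assume the data has been normalized so the category intro is stored once on the category record instead of being copied into every product description. A hypothetical rendering sketch (Python; the markup, field names and data structures are invented for illustration) then puts the repeated text into the template block and keeps only unique copy in the page body:

```python
PAGE_TEMPLATE = """
<div id="site-template">
  <!-- duplicated key information from the upper level lives in the template block -->
  <div class="category-intro">{category_intro}</div>
</div>
<div id="page-body">
  <!-- only content unique to this page goes into the body block -->
  {product_copy}
</div>
"""

def render_product_page(category, product):
    """Compose the page so the repeated category text sits in the template block,
    not inside the product's own content block."""
    return PAGE_TEMPLATE.format(
        category_intro=category["intro_text"],   # stored once, on the category row
        product_copy=product["description"],     # product-specific text only
    )
```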

My article Feed Duplicate Content Filters Properly explains a method to revamp page templates of eCommerce sites on the block level. The principle outlined there can be applied to other hierarchical content structures too.


New Google Dupe Filters?

Folks at WebmasterWorld, ThreadWatch and other hang-outs discuss a new duplicate content filter from Google. This odd thing seems to wipe out the SERPs, producing way more collateral damage than any other filter known to SEOs.

From what I’ve read, all threads concentrate on on-page and on-site factors in trying to find a way out of Google’s trash can. I admit that on-page/on-site factors like near-duplicates produced by copy, paste and modify operations or by excessive quoting can trigger duplicate content filters. But I don’t buy that this is the whole story.

If a fair amount of the vanished sites mentioned in the discussions are rather large, those sites are probably dedicated to popular themes. Popular themes are the subject of many Web sites, and the amount of unique information on popular topics isn’t infinite. That is, many Web sites provide the same piece of information. The wording may differ, but there are only so many ways to rewrite a press release. The core information is identical, which makes many pages look like near-duplicates to an engine, and inserting longer quotes duplicates whole text snippets or blocks.
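
This isn’t Google’s actual filter, of course, but the textbook approach to near-duplicate detection shows why rewritten press releases still collide: compare the sets of overlapping word shingles. A minimal Python sketch:

```python
def shingles(text, size=4):
    """Return the set of overlapping word n-grams ('shingles') of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def resemblance(text_a, text_b, size=4):
    """Jaccard similarity of the two shingle sets; values near 1.0 mean near-duplicate."""
    a, b = shingles(text_a, size), shingles(text_b, size)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two "rewrites" of the same press release still share most of their shingles,
# so their resemblance stays high even though some of the wording differs.
```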

Semantic block analysis of Web pages is not a new thing. What if Google just bought a few clusters of new machines and is now applying well-known filters to a broader set of data? This would perfectly explain why a year ago four very similar pages all ranked fine, then three of the four disappeared, and since yesterday all four are gone, because the page holding the source bonus resides on a foreign Web site. To come to this conclusion, just expand the scope of the problem analysis to the whole Web. This makes sense, since Google says “Google’s mission is to organize the world’s information”.

Read more here: Thoughts on new Duplicate Content Issues with Google.


Yahoo! Site Explorer Finally Launched

Finally the Yahoo! Site Explorer (BETA) has launched. It’s a nice tool that shows a site owner (and the competitors) all indexed pages per domain, and it offers subdomain filters. Inbound links are counted per page and per site. The tool provides links to the standard submit forms; Yahoo! accepts mass submissions of plain URL lists here.

The number of inbound links seems to be way more accurate than the guesses available from linkdomain: and link: searches. Unfortunately there is no simple way to exclude internal links, so if one wants to check only 3rd party inbounds, a painful procedure begins (a short script automating steps 2 to 4 follows below the list):
1. Export each result page to a TSV file, a tab-delimited format readable by Excel and other applications.
2. The export goes per SERP with a maximum of 50 URLs, so one must delete the two header lines per file and append file to file to produce one sheet.
3. Sorting the worksheet by the second column gives a list ordered by URL.
4. Deleting all URLs from one’s own site leaves the list of 3rd party inbounds.
5. Wait for the fix of the “exported data of all result pages are equal” bug (each exported data set contains the first 50 results, regardless of which result page one clicks the export link on).
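
Here is that sketch (Python; the file name pattern, the URL-in-column-two layout and the hostname list are assumptions based on the export format described above):

```python
import csv
import glob

OWN_HOSTS = ("www.example.com", "example.com")  # hypothetical: your own hostnames

def merge_exports(pattern="siteexplorer-*.tsv"):
    """Glue the per-page TSV exports together, drop the two header lines of each
    file, sort by the URL column, and keep 3rd party inbounds only."""
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="", encoding="utf-8") as handle:
            reader = csv.reader(handle, delimiter="\t")
            rows.extend(row for row in list(reader)[2:] if len(row) > 1)  # step 2
    rows.sort(key=lambda row: row[1])                                     # step 3: URL sits in column 2
    return [row for row in rows
            if not any(host in row[1] for host in OWN_HOSTS)]             # step 4

if __name__ == "__main__":
    for row in merge_exports():
        print("\t".join(row))
```

Of course this only pays off once the export bug from step 5 is fixed.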

The result pages provide assorted lists of all URLs known to Yahoo. The ordering does not represent the site’s logical structure (defined by linkage), and not even the physical structure seems to be part of the sort order (that’s not exactly what I would call a “comprehensive site map”). It looks like the first results are ordered by popularity, followed by a more or less unordered list. The URL listings contain fully indexed pages with known but not (yet) indexed URLs mixed in (e.g. pages carrying a robots “noindex” meta tag); the latter can be identified by their missing cached link.

Desired improvements:
1. A filter “with/without internal links”.
2. An export function outputting the data of all result pages to one single file.
3. A filter “with/without” known but not indexed URLs.
4. Optional structural ordering on the result pages.
5. Operators like filetype: and -site:domain.com.
6. Removal of the 1,000 results limit.
7. Revisiting of submitted URL lists a la Google sitemaps.

Overall, the site explorer is a great tool and an appreciated improvement, despite the wish list above. The most interesting part of the new toy is its API, which allows querying for up to 1,000 results (page data or link data) in batches of 50 to 100 results, returned in a simple XML format (max. 5,000 queries per IP address per day).


Google’s Master Plan

Alerted by Nick, I had the chance to take a look at Google’s current master plan. Niall Kennedy “took some shots of the Google master plan. There is a long set of whiteboards next to the entrance to one of the Google buildings. The master plan is like a wiki: there is an eraser and a set of pens at the end of the board for people to edit and contribute to the writing on the wall.”

Interesting to see that “Directory” is not yet checked. Does this indicate that Google has plans to build its own? Unchecked items like “Diet”, “Mortgages” and “Real Estate” make me wonder what kind of services for those traditionally spammy areas Google hides in its pipeline. The red dots or quotes crowding “Dating” may indicate that maps, talk, mail and search will get a new consolidated user interface soon. The master plan also reveals that they’ve hired Vint Cerf to develop a worldwide dark fiber/WiFi next generation web based on redesigned TCP/IP and HTTP protocols.

Is all that beyond belief? Perhaps, perhaps not, but it’s food for thought at any rate, if the shots are for real and not part of a funny disinformation campaign. Go study the plan and speculate for yourself.


Search Engine Friendly Cloaking

Yesterday I had a discussion with a potential client who wanted me to optimize the search engine crawler support on a fairly large dynamic Web site. A moment before he hit submit on my order form, I stressed the point that his goals aren’t achievable without white hat cloaking. He is pretty concerned about cloaking, and that’s understandable with regard to the engines’ webmaster guidelines and the cloaking hysteria across the white hat message boards.

To make a long story short, I’m a couple of hours ahead of his local time, and at 2:00am I wasn’t able to bring my point home. I’ve probably lost the contract, which is not entirely a bad thing, because obviously I created a communication problem that resulted in lost confidence. To make the best of it, after a short sleep I wrote down what I should have told him.

Here is my tiny guide to search engine friendly cloaking. The article explains a search engine’s view on cloaking, provides evidence of tolerated cloaking, and gives some examples of white hat cloaking that the engines actually appreciate:

  • Truncating session IDs and similar variable/value pairs in query strings
  • Reducing the number of query string arguments
  • Stripping affiliate IDs and referrer identifiers
  • Preventing search engines from indexing duplicated content

I hope it’s a good read, and perhaps it helps me out the next time I have to explain good cloaking.
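
For illustration only, here is a minimal sketch of the first and third bullets (Python; the crawler tokens and parameter names are assumptions, not a definitive implementation): when a known crawler requests a URL carrying session or affiliate noise, answer with a 301 to the clean canonical URL instead of feeding it an endless supply of duplicates, while human visitors keep their session.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

CRAWLER_TOKENS = ("googlebot", "slurp", "msnbot")                  # assumption: UA substrings
NOISE_PARAMS = {"sid", "sessionid", "phpsessid", "affid", "ref"}   # assumption: your noise params

def is_crawler(user_agent):
    return any(token in user_agent.lower() for token in CRAWLER_TOKENS)

def canonical_url(url):
    """Strip session/affiliate/referrer parameters from the query string."""
    scheme, host, path, query, _ = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
            if k.lower() not in NOISE_PARAMS]
    return urlunsplit((scheme, host, path, urlencode(kept), ""))

def handle_request(url, user_agent):
    """Crawlers get a 301 to the canonical URL; browsers keep their session."""
    clean = canonical_url(url)
    if is_crawler(user_agent) and clean != url:
        return 301, clean
    return 200, url

# handle_request("http://www.example.com/p?item=42&sid=abc123", "Googlebot/2.1")
# -> (301, "http://www.example.com/p?item=42")
```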




Google’s Blog Search Released

As spotted by SEW and TW, Google is the first major search engine to provide a real feed and blog search service.

Google’s new feed search service covers all kinds of XML feeds, not only blogs, but usually no news feeds. So what can you do to get your non-blog and non-news feeds included? As discussed here, you need to ping services like pingomatic, since Google doesn’t offer a ping service.
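
Such a ping is a one-liner. Here is a minimal sketch assuming Ping-O-Matic’s standard weblogUpdates.ping XML-RPC interface at rpc.pingomatic.com (Python purely for illustration; the blog name and URL are placeholders):

```python
import xmlrpc.client

def ping_blog_search(blog_name, blog_url):
    """Send a standard weblogUpdates ping via Ping-O-Matic, which in turn
    notifies the blog/feed search services it supports."""
    server = xmlrpc.client.ServerProxy("http://rpc.pingomatic.com/")
    return server.weblogUpdates.ping(blog_name, blog_url)
    # typically returns something like {'flerror': False, 'message': '...'}

# ping_blog_search("My Blog", "http://www.example.com/blog/")
```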

‘Nuff said, I’m off to play with the new toy. Let’s see whether I can feed it a nice amount of the neat stuff I’ve got in the works waiting for the launch. :)

[Update: This post appeared in Google’s blog search results 14 minutes after uploading - awesome!]


About Repetition in Web Site Navigation

Rustybrick runs a post on Secondary Navigation Links are Recommended, commenting on a WMW thread titled Duplicate Navigation Links Downsides. While the main concern in the WMW thread is content duplication (not penalized in navigation elements, as Rustybrick and several contributors point out), the nuggets are provided by Search Engine Roundtable, stating that “having two of the same link, pointing to the same page, and if it is of use to the end user, will not hurt your rankings. In fact, they may help with getting your site indexed and ranking you higher (due to the anchor text)”. I think this statement is worth a few thoughts, because its underlying truth is more complex than it sounds at first sight.

Thesis 1: Repeating the code of the topmost navigation at the page’s bottom is counterproductive
Why? Every repetition of a link block devalues the weight search engines assign to it. That goes for on-the-page duplication as well as for section-wide and especially site-wide repetition. One (or at most two) links to upper levels are enough, because providing too many off-topic-while-on-theme links dilutes the topical authority of the node and devalues its linking power with regard to topic authority.
Solution: Make use of user-friendly but search engine unfriendly menus at the top of the page, then put the vertical links leading to the main sections and the root at the very bottom (a naturally cold zone with next to zero linking power). In the left- or right-hand navigation, link to the next upper level; link the path to the root in breadcrumbs only.

Thesis 2: Passing PageRank™ works differently from passing topical authority via anchor text
While every link (internal or external) passes PageRank™ (duplicated links probably less than unique links, due to a dampening factor), topical authority passed via anchor text is subject to block-specific weighting. The more a navigation element gets duplicated, the less topical reputation its links will pass. That means anchor text in site-wide navigation elements and templated page areas is totally and utterly useless.
Solution: Use different anchor text in breadcrumbs and menu items, and don’t repeat menus.
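
Purely as an illustration of that solution, a tiny sketch (Python; the trail data and markup are invented) that renders the breadcrumb from level-specific titles, so its anchor text never just repeats the short menu labels:

```python
def breadcrumb(trail):
    """Render a breadcrumb whose anchor text differs from the menu labels:
    each level uses its descriptive title, not the short navigation label."""
    links = ['<a href="{url}">{title}</a>'.format(**level) for level in trail]
    return ' &raquo; '.join(links)

# The menu label might just say "Tools"; the breadcrumb uses richer, level-specific text.
trail = [
    {"url": "/",              "title": "Gardening Guide Home"},
    {"url": "/garden/tools/", "title": "Cordless Garden Tools"},
]
print(breadcrumb(trail))
# <a href="/">Gardening Guide Home</a> &raquo; <a href="/garden/tools/">Cordless Garden Tools</a>
```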

Summary:
1. All navigational links help with indexing, at least with crawling, but not all links help with ranking.
2. (Not too often) repeated links in navigation elements with different anchor text help with rankings.
3. Links in hot zones like breadcrumbs at the top of a page, as well as links within the body text, perfectly boost SERP placements because they pass topical reputation. Links in cold zones like bottom lines or duplicated navigation elements are user friendly, but don’t boost SERP positioning that much, because their one and only effect is a pretty low degree of PageRank™ distribution.

Read more on this topic here.

