Archived posts from the 'Web development' Category

Duplicate Content Filters are Sensitive Plants

In their everlasting war on link and index spam, search engines produce way too much collateral damage. Hierarchically structured content especially suffers from over-sensitive spam filters. The crux is that user-friendly pages need to duplicate key information from upper levels. The old rule “what’s good for users will be honored by the engines” no longer applies.

In fact, the problem is not the legitimate duplication of key information from other pages; the problem is that duplicate content filters are sensitive plants, unable to distinguish useful repetition from the automated generation of artificial spider fodder. The engines won’t lower their spam thresholds, which means they will not fix this persistent bug in the near future, so Web site owners have to live with decreasing search engine traffic, or react. The question is: what can a Webmaster do to escape this dilemma without turning the site into a useless nightmare for visitors by eliminating all textual redundancy?

The major fault of Google’s newer dupe filters is that their block-level analysis often fails at categorizing page areas. Page elements in and near the body area which contain key information duplicated from upper levels are treated as content blocks, not as part of the page template where they logically belong. As long as those text blocks reside in separate HTML block-level elements, it should be quite easy to rearrange them so that the duplicated text becomes part of the page template, which should be safe at least with somewhat intelligent dupe filters.
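As a rough illustration of such a rearrangement (a sketch with invented class names; no guarantee any particular filter honors it), the idea is to move the repeated category blurb out of the content container into a template-level element:

```html
<body>
  <!-- Template zone: header, menus, and the category blurb
       repeated from the upper level live here -->
  <div class="template-header">
    <ul class="main-menu"> ... </ul>
    <p class="category-blurb">Short key information repeated
       from the category’s landing page.</p>
  </div>

  <!-- Content zone: only text unique to this page -->
  <div class="content">
    <h1>Product XYZ</h1>
    <p>Unique product description ...</p>
  </div>

  <!-- Template zone: footer -->
  <div class="template-footer"> ... </div>
</body>
```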

Unfortunately, very often the raw data aren’t normalized; for example, the duplicated text is stored within a description field in a database’s products table. That’s a major design flaw, and it must be corrected in order to manipulate block-level elements properly, that is, to declare them as part of the template versus part of the page body.

My article Feed Duplicate Content Filters Properly explains a method to revamp page templates of eCommerce sites on the block level. The principle outlined there can be applied to other hierarchical content structures too.


About Repetition in Web Site Navigation

Rustybrick runs a post, Secondary Navigation Links are Recommended, commenting on a WMW thread titled Duplicate Navigation Links Downsides. While the main concern in the WMW thread is content duplication (not penalized in navigation elements, as Rustybrick and several contributors point out), the nugget comes from Search Engine Roundtable: “having two of the same link, pointing to the same page, and if it is of use to the end user, will not hurt your rankings. In fact, they may help with getting your site indexed and ranking you higher (due to the anchor text)”. I think this statement is worth a few thoughts, because its underlying truth is more complex than it sounds at first sight.

Thesis 1: Repeating the code of the topmost navigation at the page’s bottom is counterproductive
Why? Every repetition of a link block devalues the weight search engines assign to it. That goes for on-the-page duplication as well as for section-wide and especially site-wide repetition. One (or at most two) links to upper levels are enough, because providing too many off-topic-while-on-theme links dilutes the topical authority of the node and devalues its linking power.
Solution: Make use of user-friendly but search engine unfriendly menus at the top of the page, then put the vertical links leading to the main sections and the root at the very bottom (a naturally cold zone with next to zero linking power). In the left- or right-hand navigation link to the next upper level only, and link the full path to the root in breadcrumbs only.
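A minimal markup sketch of that layout (my illustration with invented structure; the “search engine unfriendly” top menu is assumed here to be script-driven, so engines don’t parse it as a link block):

```html
<!-- Top: user-friendly, script-driven menu (assumption: engines
     don't weight it as a crawlable link block) -->
<div id="topmenu"><script type="text/javascript" src="/js/menu.js"></script></div>

<!-- Hot zone: breadcrumbs carry the full path to the root -->
<p class="breadcrumbs">
  <a href="/">Home</a> &raquo; <a href="/widgets/">Widgets</a> &raquo; Blue Widgets
</p>

<div id="content"> ... unique page content ... </div>

<!-- Cold zone: plain vertical links to the main sections and root -->
<div id="bottomnav">
  <a href="/">Home</a> | <a href="/widgets/">Widgets</a> | <a href="/gadgets/">Gadgets</a>
</div>
```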

Thesis 2: Passing PageRank™ works differently from passing topical authority via anchor text
While every link (internal or external) passes PageRank™ (duplicated links probably less than unique links, due to a dampening factor), topical authority passed via anchor text is subject to block-specific weighting. The more a navigation element gets duplicated, the less topical reputation its links pass. That means anchor text in site-wide navigation elements and templated page areas is totally and utterly useless.
Solution: Use different anchor text in breadcrumbs and menu items, and don’t repeat menus.
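For instance (an invented example), the same page can be linked from the breadcrumb trail and from the menu with different anchor text, so each link passes its own topical signal:

```html
<!-- Breadcrumb anchor: descriptive, topical -->
<a href="/widgets/blue/">Widgets in blue for small rooms</a>

<!-- Menu anchor: short label, same target, different text -->
<li><a href="/widgets/blue/">Blue Widgets</a></li>
```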

Summary:
1. All navigational links help with indexing, or at least with crawling, but not all links help with ranking.
2. Links repeated (not too often) in navigation elements with different anchor text help with rankings.
3. Links in hot zones, like breadcrumbs at the top of a page, as well as links within the body text, perfectly boost SERP placements, because they pass topical reputation. Links in cold zones, like bottom lines or duplicated navigation elements, are user friendly, but don’t boost SERP positioning that much, because their one and only effect is a pretty low degree of PageRank™ distribution.

Read more on this topic here.


Link Tutorial for Web Developers

I’ve just finished an article on hyperlinks; here is the first draft:
Anatomy and Deployment of Links

The target audience is developers and software architects, folks who usually aren’t that familiar with search engine optimization and the usability aspects of linkage. Overview:

Defining Links, Natural Linking and Artificial Linkage
I’m starting with a definition of Link and its most important implementations, the Natural Link and the Artificial Link.

Components of a Link I. [HTML Element: A]
That’s the first anatomic chapter, a commented text- and image-link compendium explaining proper linking with syntax examples. Each attribute of the Anchor element is described along with usage tips and lists of valid values.
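For flavor, here’s the kind of markup that chapter dissects (my own illustration, not an excerpt from the article):

```html
<!-- A text link using the attributes covered in the chapter -->
<a href="http://www.example.com/tutorial/links.html"
   title="Anatomy and deployment of links"
   rel="bookmark"
   hreflang="en">Link tutorial for Web developers</a>
```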

Components of a Link II. [HTML Element: LINK]
Based on the first anatomic part, here comes a syntax compendium of the LINK element, used in the HEAD section to define relationships, assign stylesheets, enhance navigation, etc.
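Again as an illustration (not from the article), typical LINK usage in the HEAD section looks like this:

```html
<head>
  <link rel="stylesheet" type="text/css" href="/styles/screen.css" media="screen">
  <link rel="alternate" type="application/rss+xml" title="RSS feed" href="/feed.xml">
  <link rel="start" href="/tutorial/">
  <link rel="next" href="/tutorial/link-element.html">
</head>
```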

Web Site Structuring
Since links connect the structural elements of a Web site, it makes sense to have a well-thought-out structure. I’m discussing poor and geeky structures which confuse the user, followed by an introduction of universal nodes and topical connectors, which fix a lot of weaknesses when it comes to topical interlinking of related pages. I’ve tried to popularize the parts on object modeling, so OOAD purists will probably hit me hard on this piece, while (hopefully) Webmasters can follow my thoughts with ease. This chapter closes the structural part with a description of internal authority hubs.

A Universal Node’s Anchors and their Link Attributes
Based on the structural part, I’m discussing the universal node’s attributes like its primary URI, anchor text and tooltip. The definition of topical anchors is followed by tips on identifying and using alternate anchors, titles, descriptions etc. in various inbound and outbound links.

Linking is All About Popularity and Authority
Well, it should read ‘linking is all about traffic’, but learning more about the background of natural linkage helps one understand the power and underlying messages of links, which produce indirect traffic. Well-linked, outstanding authority sites become popular by word of mouth, and the search engines follow their users’ votes intuitively, generating loads of targeted traffic.

Optimizing Web Site Navigation
This chapter is not so much focused on usability; instead, I discuss a search engine’s view of site-wide navigation elements and explain how to optimize those for the engines. To avoid repetition I refer to my guide on crawler support and other related articles, so this chapter is by no means a complete guide on Web site navigation.

Search Engine Friendly Click Tracking
Traffic monitoring and traffic management influence a site’s linkage, often for the worse. Counting outgoing traffic per link works quite fine without redirecting scripts, which cause all kinds of trouble with search engines and some user agents. I’m outlining an alternative method to track clicks, ready-to-use source code included.
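The article ships its own code; as a rough idea of the redirect-free principle only (my sketch, with a hypothetical count.php logging endpoint, not the article’s implementation): keep the real URL in the href and fire a logging request when the link is clicked.

```html
<script type="text/javascript">
// Log the click via an image beacon, then let the browser follow
// the real href. count.php is a hypothetical script that records
// the target URL and returns a 1x1 pixel.
function trackClick(link) {
  var beacon = new Image();
  beacon.src = '/count.php?url=' + encodeURIComponent(link.href);
  return true; // proceed with normal navigation
}
</script>
<a href="http://www.example.com/" onclick="return trackClick(this);">Example</a>
```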

I’ve got a few notes on the topic left over, so most probably I’ll add more stuff soon. I hope it’s a good read, and helpful. Your feedback is very much appreciated. :)


Just Another Free Sitemap Tool Launched

FREE (Google) Sitemap Tool for Smaller Web Sites

I get a lot of feedback on my Google Sitemaps Tutorial and related publications, and I read the message boards and newsgroups. I’ve learned that there are lots of smaller Web sites out there whose owners want to provide both a Google XML Sitemap and an HTML site map, but close to zero tools are available to support those Web publishers. At least, the suitable tools are not free of charge, and most low-cost content management systems don’t create both sitemap variants.

To help out those Web site owners, I’ve written a pretty simple PHP script generating dynamic Google XML Sitemaps as well as pseudo-static HTML site maps from one set of page data. Both the XML sitemap and the viewable version pull their data from a plain text file, where the site owner or Web designer adds a new line per page after updates.

The Google XML Sitemap is a PHP script reflecting the current text file’s content on request. It also writes a static HTML site map page to disk. Since Googlebot downloads XML sitemaps every 12 hours like clockwork, the renderable sitemap gets refreshed at least twice per day.
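Not the actual Simple Sitemaps source, just a minimal sketch of that approach, assuming pages.txt holds one URL per line:

```php
<?php
// sitemap.php - serves the XML sitemap and refreshes the static
// HTML site map on every request (sketch, not the real script).
$urls = file('pages.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

// 1. Emit the Google XML Sitemap
header('Content-Type: text/xml');
echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
foreach ($urls as $url) {
    echo '  <url><loc>' . htmlspecialchars(trim($url)) . '</loc></url>' . "\n";
}
echo '</urlset>';

// 2. Rewrite the static, renderable HTML site map
$html = "<ul>\n";
foreach ($urls as $url) {
    $loc = htmlspecialchars(trim($url));
    $html .= "  <li><a href=\"$loc\">$loc</a></li>\n";
}
$html .= "</ul>\n";
file_put_contents('sitemap.html', $html);
?>
```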

The site owner or Web designer just needs to change a simple text file on updates; after the upload, Googlebot’s next sitemap request recreates both versions. Ain’t that cute?

Curious? Here is the link: Simple Sitemaps 1.0 BETA

Although this free script provides a pretty simple sitemap solution, I wouldn’t use it with Web sites containing more than 100 pages. Why not? Site map pages carrying more than 100 links may devalue those links. On the average Web server my script will work with hundreds of pages, but from an SEO’s point of view that’s counterproductive.

Please download the script and tell me what you think. Thanks!


Revamping Framed Web Sites

Phoenix posted a great tip at SEO Roundtable:
Creating a Framed Site Without The Drawbacks to SEO

With regard to search engine crawling and indexing, frames are an SEO’s nightmare. Some brilliant people have taken the time to develop a CSS solution for fixed site navigation; examples:
http://www.stunicholls.myby.co.uk/layouts/frame.html
http://jfy.homeunix.net/misc/example.html

Here is the tutorial:
http://www.webreference.com/html/tutorial24/
Summary: “HTML frames have been used so far on the Web to provide sections of a Web page that scroll independently of each other, but they cause a lot of hassle, making linking difficult and breaking the consistency of our documents. CSS fixed positioning helps us work around this by positioning parts of one document relative to the viewport. The overflow property can be used to control their scrolling appropriately. By being careful about how you position these elements, you can have your layout fall back to the default rendering on Navigator 4 and Internet Explorer, making this technique usable in a production environment.”
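In a nutshell, the CSS technique looks like this (a minimal sketch, not code from the tutorial):

```html
<style type="text/css">
/* Fixed "frame": the navigation stays put while the content
   scrolls - one document, no frameset */
#nav {
  position: fixed;
  top: 0; left: 0;
  width: 180px; height: 100%;
  overflow: auto;      /* scrolls on its own if it gets long */
}
#content {
  margin-left: 200px;  /* clears the fixed navigation */
}
</style>

<div id="nav"> ... site navigation ... </div>
<div id="content"> ... page content ... </div>
```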
