Archived posts from the 'SEO' Category

WMW Gem - Don’t optimize for keywords

Desperately optimizing for particular keyword phrases can kill far better converting search engine traffic from natural search terms.

In an otherwise pretty useless Google update thread, MHes provides a real gem; start reading at message 183 of “Dealing With Consequences of Jagger Update” in the Google forum. If you want to hear it from the horse’s mouth, listen to this Matt Cutts interview at Webmaster Radio, spotted via Threadwatch.

Actually, that’s not a new thing. Longer, naturally written copy matches more search queries than a short page heavily targeting the apparent money term harvested from various keyword research tools. It’s a good idea to support longer pages with short pieces highlighting particular terms, e.g. footnote pages, glossary pages and so on, but the long page usually generates the most sales.


How to get trusted inbound links

Post-Jagger, the vital question is how a Web site can acquire trusted authority links. I can’t provide the definitive answer, but I can offer a theory and perhaps a suitable methodology.

Mutate from a link monkey to a link ninja. Follow Google’s approach to identifying trustworthy resources. Learn to spot sources of TrustRank, then work hard to attract their attention (by providing outstanding content, for example). Don’t bother with link requests; be creative instead. Investing a few days or even weeks to gain a trusted inbound link is worth the effort. Link quality counts; quantity may even be harmful.
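For the record, TrustRank as published by Gyöngyi, Garcia-Molina and Pedersen is basically PageRank biased towards a hand-picked seed set of trusted pages; trust flows along outgoing links and fades with distance. Nobody outside Google knows how (or whether) it’s implemented there, so take this toy sketch purely as an illustration of the published idea (the graph, seed set and damping factor are made up):

```python
# Minimal TrustRank sketch: biased PageRank over a toy link graph.
# Graph, seed set and damping factor are illustrative assumptions,
# not anything Google has published about its own systems.

def trustrank(graph, seeds, damping=0.85, iterations=50):
    """graph: {page: [pages it links to]}, seeds: set of trusted pages."""
    pages = list(graph)
    # Trust is injected only at the trusted seeds, then propagated.
    seed_weight = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(seed_weight)
    for _ in range(iterations):
        incoming = {p: 0.0 for p in pages}
        for page, links in graph.items():
            if not links:
                continue
            share = trust[page] / len(links)   # trust splits across outlinks
            for target in links:
                incoming[target] += share
        trust = {p: (1 - damping) * seed_weight[p] + damping * incoming[p]
                 for p in pages}
    return trust

toy_graph = {
    "dmoz.org": ["yoursite.example", "othersite.example"],
    "yoursite.example": ["othersite.example"],
    "othersite.example": [],
}
print(trustrank(toy_graph, seeds={"dmoz.org"}))
```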

Something to start with: DMOZ has, in parts, a high TrustRank, but a DMOZ link alone may hurt, because Google knows that a handful of editors aren’t that honest. A Yahoo listing can support an established site that already has trusted inbound links, but on its own, or together with only an ODP link, it may hurt too, because it’s that easy to get.

Other sites with a high TrustRank are Google.com and other domains owned by Google like their blogs (tough, but not impossible to get a link from Google), W3C.org, most pages on .edu and .gov domains, your local chamber of commerce, most newspapers … just to give a few examples.

I bet Matt Cutts’ blog, OTOH, has a pretty low TrustRank, because he is obviously part of a ‘very bad neighborhood’, despite his very honorable intentions. The SEO community, including various stealthy outlets, is likewise a place to avoid if you’re hunting trusted links.

More information: How to Gain Trusted Connectivity


How to escape Google’s ‘Sandbox’

Matt Cutts’ recent PubCon talk on the infamous ‘Sandbox’ did more than clear up the myth. The following discussion at Threadwatch, WebmasterWorld (paid) and many other hangouts revealed some gems, summed up by Andy Hagans: it’s all about trust.

The ‘Sandbox’ is not an automated aging delay, not a penalty for optimized sites, and not an inbound link counter over the time axis, just to name a few of the theories floating around. It is simply the probation period Google needs to gather TrustRank™ and to evaluate a site against its quality guidelines.

To escape the ‘Sandbox’, a new Web site needs trusted authority links, amplified and distributed by clever internal linkage, and a critical mass of trustworthy, unique, and original content. Enhancing usability and crawler friendliness helps too. IOW, back to the roots.


An Unofficial FAQ on Google Sitemaps

Yesterday I launched the unofficial Google Sitemaps FAQ. It’s not yet complete, but it may be helpful if you have open questions.

I’ve tried to answer questions which Google cannot or will not answer, or where Google would get murdered for any answer other than “42”. Even if the answer is a plain “no”, I’ve provided background and solutions. I’m not a content thief, and I hate useless redundancies, so don’t miss the official FAQ and the Sitemaps blog.

Enjoy, and submit interesting questions here. Your feedback is very much appreciated!


Reciprocal links are not penalized by Google

Recently, reciprocal linking in general has been accused of tanking a Web site’s placement in Google’s search results. Although it’s way too early for a serious post-Jagger analysis, the current hype about oh-so-bad reciprocal links is IMHO a myth.

What Google is after are artificial link schemes, including massive reciprocal linkage appearing simultaneously. That’s not a new thing. What Google still honors is content-driven, natural, on-topic reciprocal linkage.

Simplified: Google has a huge database of the Web’s linkage data, where each and every link carries a timestamp plus IDs for the source and destination page and site. A pretty simple query reveals a reciprocal link campaign, and other systematic link patterns as well. Again, that’s not new. The Jagger update may have tanked more sites involved in artificial linkage because Google has assigned more resources to link analysis, but that does not mean Google dislikes reciprocal linking per se.
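Just to illustrate what kind of “pretty simple query” I mean (the table layout, time window and data below are my own assumptions, not Google’s schema): reciprocal pairs that appear nearly simultaneously stand out immediately.

```python
# Sketch: spotting reciprocal link pairs in a simple link table.
# Schema, window size and data are illustrative assumptions only.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE links (src TEXT, dst TEXT, first_seen INTEGER)")
con.executemany("INSERT INTO links VALUES (?, ?, ?)", [
    ("site-a.example", "site-b.example", 1000),
    ("site-b.example", "site-a.example", 1002),   # reciprocal, near-simultaneous
    ("site-a.example", "site-c.example", 1500),
])

# A reciprocal pair is a link whose reverse link exists; flag pairs that
# appeared within 30 (arbitrary) time units of each other.
rows = con.execute("""
    SELECT a.src, a.dst, ABS(a.first_seen - b.first_seen) AS gap
    FROM links a
    JOIN links b ON a.src = b.dst AND a.dst = b.src
    WHERE a.src < a.dst AND ABS(a.first_seen - b.first_seen) <= 30
""").fetchall()
print(rows)   # [('site-a.example', 'site-b.example', 2)]
```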

Outgoing links to related pages do attract natural reciprocal links over time, even without an agreement. Those links still count as legitimate votes. Don’t push the panic button, think!


Smart Web Site Architects Provide Meaningful URLs

From a typical forum thread on user/search engine friendly Web site design:

Question: Should I provide meaningful URLs carrying keywords and navigational information?

Answer 1: Absolutely, if your information architecture and its technical implementation allow the use of keyword-rich, hyphenated URLs.

Answer 2: Bear in mind that URLs should never change, so first develop a suitable information architecture and a flexible Web site structure. You’ll learn that folders and URLs are the last thing to think about.

Question: WTF do you mean?

Answer: Here you go: it makes no sense to paint a house before the architect has finished the blueprints.
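Coming back to the first answer above: if the architecture does allow it, keyword-rich hyphenated URLs are trivial to derive from page titles. A tiny sketch (the helper name and rules are mine, nothing more than an example):

```python
# Minimal sketch of a slug helper for keyword-rich, hyphenated URLs.
import re

def slugify(title: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

print(slugify("Smart Web Site Architects Provide Meaningful URLs"))
# -> smart-web-site-architects-provide-meaningful-urls
```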


Duplicate Content Filters are Sensitive Plants

In their everlasting war on link and index spam, search engines produce way too much collateral damage. Hierarchically structured content especially suffers from over-sensitive spam filters. The crux is that user-friendly pages need to duplicate information from upper levels. The old rule “what’s good for users will be honored by the engines” no longer applies.

In fact, the problem is not the legitimate duplication of key information from other pages; the problem is that duplicate content filters are sensitive plants, unable to distinguish useful repetition from automatically generated artificial spider fodder. The engines won’t lower their spam thresholds, which means they will not fix this persistent bug in the near future, so Web site owners have to live with decreasing search engine traffic, or react. The question is: what can a Webmaster do to escape the dilemma without turning the site into a useless nightmare for visitors by eliminating all textual redundancy?

The major fault of Google’s newer dupe filters is that their block-level analysis often fails to categorize page areas correctly. Web page elements in and near the body area which contain key information duplicated from upper levels are treated as content blocks, not as part of the page template where they logically belong. As long as those text blocks reside in separate HTML block-level elements, it should be quite easy to rearrange them so that the duplicated text becomes part of the page template, which should be safe at least with somewhat intelligent dupe filters.

Unfortunately, very often the raw data aren’t normalized; for example, the text duplication happens within a description field in a database’s products table. That’s a major design flaw, and it must be corrected in order to manipulate block-level elements properly, that is, to declare them as part of the template versus part of the page body.
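To make the normalization point concrete (table and column names below are invented for the example): keep the shared category description in one place, so the template layer decides where and how often that text is rendered, instead of copying it into every product row.

```python
# Sketch: normalized schema so shared descriptive text lives in one place.
# Table and column names are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Denormalized (problematic): description repeated per product row.
    -- CREATE TABLE products (id, name, category_description, ...);

    -- Normalized: the shared text exists once and is joined in at render time.
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT, description TEXT);
    CREATE TABLE products   (id INTEGER PRIMARY KEY, name TEXT,
                             category_id INTEGER REFERENCES categories(id));
""")
con.execute("INSERT INTO categories VALUES (1, 'Widgets', 'Our widgets are ...')")
con.execute("INSERT INTO products VALUES (10, 'Blue Widget', 1)")

# The template layer fetches the category description separately and can
# place it in a template block instead of repeating it in the page body.
row = con.execute("""
    SELECT p.name, c.description FROM products p
    JOIN categories c ON c.id = p.category_id WHERE p.id = 10
""").fetchone()
print(row)
```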

My article Feed Duplicate Content Filters Properly explains a method to revamp page templates of eCommerce sites on the block level. The principle outlined there can be applied to other hierarchical content structures too.


New Google Dupe Filters?

Folks at WebmasterWorld, Threadwatch and other hangouts are discussing a new duplicate content filter from Google. This odd thing seems to be wiping out the SERPs, producing way more collateral damage than any filter previously known to SEOs.

From what I’ve read, all threads concentrate on on-page and on-site factors in trying to find a way out of Google’s trash can. I admit that on-page/on-site factors like near-duplicates produced by copy, paste and modify operations, or excessive quoting, can trigger duplicate content filters. But I don’t buy that that’s the whole story.

If a fair number of the vanished sites mentioned in the discussions are rather large, those sites are probably dedicated to popular themes. Popular themes are the subject of many Web sites, and the amount of unique information on popular topics isn’t infinite. That is, many Web sites provide the same piece of information. The wording may differ, but there are only so many ways to rewrite a press release. The core information is identical, so many pages end up being treated as near-duplicates, and inserting longer quotes duplicates text snippets or blocks outright.
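To illustrate why lightly rewritten copies still collide, here is a generic word-shingle comparison; it is not Google’s actual filter, just the textbook near-duplicate measure (shingle size and sample texts are arbitrary):

```python
# Sketch: word-shingle overlap as a generic near-duplicate measure.
# Shingle size and threshold are arbitrary illustrative choices.
def shingles(text: str, size: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

press_release = "Acme launches the new FooWidget with twice the speed of its predecessor"
rewrite       = "Acme launches the new FooWidget with twice the speed and a lower price"
print(round(similarity(press_release, rewrite), 2))  # substantial overlap despite the edits
```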

Semantic block analysis of Web pages is not a new thing. What if Google just bought a few clusters of new machines and is now applying well-known filters to a broader set of data? This would perfectly explain why a year ago four very similar pages all ranked fine, then three of the four disappeared, and since yesterday all four are gone, because the page holding the source bonus resides on a foreign Web site. To reach this conclusion, just expand the scope of the problem analysis to the whole Web. That makes sense, since Google says “Google’s mission is to organize the world’s information”.

Read more here: Thoughts on new Duplicate Content Issues with Google.


Search Engine Friendly Cloaking

Yesterday I had a discussion with a potential client who wanted me to optimize the search engine crawler support on a fairly large dynamic Web site. A moment before he hit submit on my order form, I stressed the point that his goals aren’t achievable without white hat cloaking. He is quite concerned about cloaking, and that’s understandable with regard to the engines’ webmaster guidelines and the cloaking hysteria across the white hat message boards.

To make a long story short, I’m a couple of hours ahead of his local time, and at 2:00am I wasn’t able to bring my point home. I’ve probably lost the contract, which is not the worst thing, because I obviously created a communication problem that resulted in lost confidence. To make the best of it, after a short sleep I wrote down what I should have told him.

Here is my tiny guide to search engine friendly cloaking. The article explains a search engine’s view of cloaking, provides evidence of tolerated cloaking, and gives some examples of white hat cloaking that the engines actually appreciate (a small sketch of the first items follows the list):

  • Truncating session IDs and similar variable/value pairs in query strings
  • Reducing the number of query string arguments
  • Stripping affiliate IDs and referrer identifiers
  • Preventing search engines from indexing duplicated content
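
A minimal sketch of the first three items, assuming a made-up list of parameter names and a naive user-agent check (a real implementation would verify crawler IPs rather than trusting the UA string):

```python
# Sketch: serve crawler-clean URLs by stripping tracking/session parameters.
# Parameter names and the user-agent check are illustrative assumptions.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

NOISE_PARAMS = {"sessionid", "sid", "phpsessid", "affid", "ref"}   # assumed names

def clean_url_for_crawlers(url: str, user_agent: str) -> str:
    if "googlebot" not in user_agent.lower():
        return url                      # humans keep the full query string
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in NOISE_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(clean_url_for_crawlers(
    "http://www.example.com/shop.php?cat=5&sessionid=abc123&affid=42",
    "Mozilla/5.0 (compatible; Googlebot/2.1)"))
# -> http://www.example.com/shop.php?cat=5
```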

I hope it’s a good read, and perhaps it will help me out the next time I have to explain good cloaking.




About Repetition in Web Site Navigation

Rustybrick runs a post, Secondary Navigation Links are Recommended, commenting on a WMW thread titled Duplicate Navigation Links Downsides. While the main concern in the WMW thread is content duplication (not penalized in navigation elements, as Rustybrick and several contributors point out), the nuggets are provided by Search Engine Roundtable, stating “having two of the same link, pointing to the same page, and if it is of use to the end user, will not hurt your rankings. In fact, they may help with getting your site indexed and ranking you higher (due to the anchor text)”. I think this statement is worth a few thoughts, because its underlying truth is more complex than it sounds at first sight.

Thesis 1: Repeating the code of the topmost navigation at the page’s bottom is counterproductive
Why? Every repetition of a link block devalues the weight search engines assign to it. That goes for on-page duplication as well as for section-wide and especially site-wide repetition. One (or at most two) links to upper levels are enough, because providing too many off-topic-while-on-theme links dilutes the topical authority of the node and devalues its linking power with regard to topic authority.
Solution: Make use of user-friendly but search-engine-unfriendly menus at the top of the page, then put the vertical links leading to the main sections and the root at the very bottom (a naturally cold zone with next to zero linking power). In the left- or right-hand navigation, link to the next upper level; link the path to the root in breadcrumbs only.

Thesis 2: Passing PageRank™ works differently from passing topical authority via anchor text
While every link (internal or external) passes PageRank™ (duplicated links probably less than unique links, due to a dampening factor), topical authority passed via anchor text is subject to block-specific weighting. The more a navigation element gets duplicated, the less topical reputation it will pass with its links. That means anchor text in site-wide navigation elements and templated page areas is totally and utterly useless.
Solution: Use different anchor text in breadcrumbs and menu items, and don’t repeat menus.

Summary:
1. All navigational links help with indexing, at least with crawling, but not all links help with ranking.
2. Links repeated (not too often) in navigation elements with different anchor text help with rankings.
3. Links in hot zones, like breadcrumbs at the top of a page, as well as links within the body text, boost SERP placements nicely, because they pass topical reputation. Links in cold zones, like bottom lines or duplicated navigation elements, are user-friendly but don’t boost SERP positioning that much, because their one and only effect is a modest amount of PageRank™ distribution.

Read more on this topic here.
