Archived posts from the 'Link Building' Category

Link building tips for small business sites

Everybody is talking about link baiting, but it’s hard to create good link bait when you don’t yet have a voice that gets noticed. So I thought it might be useful to repeat some well-established link acquisition techniques, that is, link building tips for small sites with a tiny marketing budget:

The value of links from a search engine’s perspective

Question: What is a link worth with regard to search engine rankings, what kind of links should I hunt for if I’m not the WSJ, and where do I have a realistic chance to get linked?

Answer (summary): Valuable links generate human traffic, and trafficked sites do get a ranking boost. The article provides examples and explains why some inbound links are utterly useless.


How to get trusted inbound links

Post-Jagger, the vital question is how a Web site can acquire trusted authority links. Well, I can’t provide the definitive answer, but I can offer a theory and perhaps a suitable methodology.

Mutate from a link monkey into a link ninja. Follow Google’s approach to identifying trustworthy resources. Learn to spot sources of TrustRank, then work hard to attract their attention (by providing outstanding content, for example). Don’t bother with link requests; be creative instead. Investing a few days or even weeks to gain a trusted inbound link is worth the effort. Link quality counts; quantity may even be harmful.

Something to start with: DMOZ (in parts) has a high TrustRank, but a DMOZ link alone may hurt, because Google knows that a handful of editors aren’t that honest. A Yahoo listing can support an established site that already has trusted inbound links, but alone, or together with an ODP link, it may hurt too, because it’s that easy to get.

Other sites with a high TrustRank are Google.com and other domains owned by Google like their blogs (tough, but not impossible to get a link from Google), W3C.org, most pages on .edu and .gov domains, your local chamber of commerce, most newspapers … just to give a few examples.

I bet Matt Cutts’ blog, OTOH, has a pretty low TrustRank, because he is obviously part of a ‘very bad neighborhood’, despite his very honorable intentions. The SEO community, including various stealthy outlets, is also a place to avoid if you’re hunting trusted links.

More information: How to Gain Trusted Connectivity


How to escape Google’s ‘Sandbox’

Matt Cutts’ recent Pubcon talk on the infamous ‘Sandbox’ did more than clear up the myth. The discussion that followed at Threadwatch, Webmasterworld (paid) and many other hangouts revealed some gems, summed up by Andy Hagans: it’s all about trust.

The ‘Sandbox’ is not an automated aging delay, it’s not a penalty for optimized sites, and it’s not an inbound link counter over the time axis, just to name a few of the theories floating around. The ‘Sandbox’ is simply the probation period Google needs to gather TrustRank™ and to evaluate a site against its quality guidelines.

To escape the ‘Sandbox’, a new Web site needs trusted authority links, amplified and distributed by clever internal linkage, and a critical mass of trustworthy, unique, and original content. Enhancing usability and crawler friendliness helps too. IOW, back to the roots.


Reciprocal links are not penalized by Google

Recently, reciprocal linking in general has been accused of tanking a Web site’s placement in Google’s search results. Although it’s way too early for a serious post-Jagger analysis, the current hype about oh-sooo-bad reciprocal links is a myth IMHO.

What Google is after are artificial link schemes, and that includes massive reciprocal linkage appearing simultaneously. That’s not a new thing. What Google still honors is content-driven, natural, on-topic reciprocal linkage.

Simplified: Google has a huge database of the Web’s linkage data, where each and every link carries a timestamp plus IDs of the source and destination page and site. A pretty simple query reveals a reciprocal link campaign, and other systematic link patterns as well. Again, that’s not new. The Jagger update may have tanked more sites involved in artificial linkage because Google has assigned more resources to link analysis, but that does not mean Google dislikes reciprocal linking per se.

Outgoing links to related pages do attract natural reciprocal links over time, even without an agreement. Those links still count as legitimate votes. Don’t push the panic button, think!


Smart Web Site Architects Provide Meaningful URLs

From a typical forum thread on user/search engine friendly Web site design:

Question: Should I provide meaningful URLs carrying keywords and navigational information?

Answer 1: Absolutely, if your information architecture and its technical implementation allow the use of keyword-rich, hyphenated URLs (e.g. /widgets/blue-widgets.htm rather than /index.php?id=4711&cat=9).

Answer 2: Bear in mind that URLs are unchangeable, so first develop a suitable information architecture and a flexible Web site structure. You’ll learn that folders and URLs are the last things to think about.

Question: WTF do you mean?

Answer: Here you go: it makes no sense to paint a house before the architect has finished the blueprints.


Link Tutorial for Web Developers

I’ve just finished an article on hyperlinks; here is the first draft:
Anatomy and Deployment of Links

The target audience is developers and software architects, folks who usually aren’t that familiar with search engine optimization and the usability aspects of linkage. Overview:

Defining Links, Natural Linking and Artificial Linkage
I’m starting with a definition of the term Link and its most important incarnations, the Natural Link and the Artificial Link.

Components of a Link I. [HTML Element: A]
That’s the first anatomy chapter, a commented compendium of text and image links explaining proper linking with syntax examples. Each attribute of the Anchor element is described along with usage tips and lists of valid values.
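
For illustration only (these are not the article’s own examples, and all URLs and values are hypothetical), here’s a minimal sketch of a text link and an image link using common Anchor attributes:

  <!-- Hypothetical text link: keyword-bearing anchor text plus a descriptive title (tooltip) -->
  <a href="http://www.example.com/widgets/blue-widgets.htm"
     title="Blue widgets: specs, photos, and pricing"
     hreflang="en">blue widgets</a>

  <!-- Hypothetical image link: the ALT text acts as anchor text for the engines -->
  <a href="http://www.example.com/widgets/" title="Widget overview">
    <img src="/img/widgets-overview.gif" alt="Widget overview" width="120" height="60" />
  </a>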

Components of a Link II. [HTML Element: LINK]
Building on the first anatomy part, here comes a syntax compendium of the LINK element, used in the HEAD section to define relationships, assign stylesheets, enhance navigation etc.
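
Again purely illustrative (hypothetical URLs and titles, not taken from the article), a few typical LINK elements in the HEAD section:

  <head>
    <title>Components of a Link II.</title>
    <!-- Assign a stylesheet -->
    <link rel="stylesheet" type="text/css" href="/css/screen.css" media="screen" />
    <!-- Announce an RSS feed -->
    <link rel="alternate" type="application/rss+xml" title="RSS feed" href="/feed/rss.xml" />
    <!-- Define navigational relationships within a series of pages -->
    <link rel="start" href="/articles/links/" title="Anatomy and Deployment of Links" />
    <link rel="prev" href="/articles/links/part1.htm" title="Components of a Link I." />
    <link rel="next" href="/articles/links/part3.htm" title="Web Site Structuring" />
  </head>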

Web Site Structuring
Since links connect the structural elements of a Web site, it makes sense to have a well-thought-out structure. I’m discussing poor and geeky structures which confuse the user, followed by the introduction of universal nodes and topical connectors, which fix a lot of weaknesses when it comes to topical interlinking of related pages. I’ve tried to keep the object modeling parts accessible, so OOAD purists will probably hit me hard on this piece, while (hopefully) Webmasters can follow my thoughts with ease. This chapter closes the structural part with a description of internal authority hubs.

A Universal Node’s Anchors and their Link Attributes
Based on the structural part, I’m discussing the universal node’s attributes like its primary URI, anchor text and tooltip. The definition of topical anchors is followed by tips on identifying and using alternate anchors, titles, descriptions etc. in various inbound and outbound links.

Linking is All About Popularity and Authority
Well, it should read ‘linking is all about traffic’, but learning more about the background of natural linkage helps one understand the power and underlying messages of links, which produce indirect traffic. Well-linked, outstanding authority sites become popular by word of mouth. The search engines follow their users’ votes intuitively, generating loads of targeted traffic.

Optimizing Web Site Navigation
This chapter is not so much focused on usability; instead I discuss a search engine’s view of site-wide navigation elements and explain how to optimize those for the engines. To avoid repetition, I’m referring to my guide on crawler support and other related articles, so this chapter is not a guide to Web site navigation as such.

Search Engine Friendly Click Tracking
Traffic monitoring and traffic management influence a site’s linkage, often for the worse. Counting outgoing traffic per link works just fine without redirect scripts, which cause all kinds of trouble with search engines and some user agents. I’m outlining an alternative method to track clicks, ready-to-use source code included.
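
The article ships its own ready-to-use code; as a rough idea of the general technique (not necessarily the article’s exact method), here is a minimal sketch of redirect-free click tracking. The /lt.gif logging endpoint and the URLs are hypothetical:

  <script type="text/javascript">
  function trackClick(url) {
    // Hypothetical logging endpoint; the server records the outgoing URL from the query string.
    // A real implementation would make sure the request gets enough time to fire before navigation.
    var beacon = new Image();
    beacon.src = '/lt.gif?out=' + encodeURIComponent(url) + '&r=' + Math.random();
    return true; // let the browser follow the link as usual
  }
  </script>

  <!-- The href stays the real destination, so spiders and users follow a clean link;
       the onclick handler just fires the logging request. -->
  <a href="http://www.example.com/widgets/" onclick="return trackClick(this.href);">Widgets</a>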

I’ve got a few notes on the topic left over, so most probably I’ll add more stuff soon. I hope it’s a good read, and helpful. Your feedback is very much appreciated :)


Serious Disadvantages of Selling Links

There is a pretty interesting discussion on search engine spam going on at O’Reilly Radar. The topic title is somewhat misleading; the actual subject is passing PageRank™ via paid ads on popular sites. Read the whole thread; lots of sound folks express their valuable and often fascinating opinions.

My personal statement is a plain “Don’t sell links for passing PageRank™. Never. Period.”, but the intention of ad space purchases isn’t always that clear. If an ad isn’t related to my content, I tend to put client-sided affiliate links on my sites, because for a long time search engine spiders didn’t follow them. Well, it’s not that easy any more.
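
For readers wondering what a ‘client-sided’ affiliate link looks like, here’s a rough sketch (the affiliate URL is a hypothetical example): the markup is written by JavaScript at runtime, so crawlers that ignore scripts never saw the link.

  <script type="text/javascript">
    // The link only exists after the script runs; script-blind spiders see nothing here.
    document.write('<a href="http://affiliate.example.com/?ref=12345">Partner offer<\/a>');
  </script>
  <noscript>
    Partner offer available at affiliate.example.com (hypothetical example)
  </noscript>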

However, Matt Cutts ‘revealed’ an interesting fact in the thread linked above. Google indeed applies no-follow logic to Web sites selling (at least unrelated) ads:

… [Since September 2003] …parts of perl.com, xml.com, etc. have not been trusted in terms of linkage … . Remember that just because a site shows up for a “link:” command on Google does not mean that it passes PageRank, reputation, or anchortext.

This policy wasn’t really a secret before Matt’s post, because a critical mass of high-PR links not passing PR draws a sharp picture. What many site owners selling links in ads have obviously never considered is the collateral damage with regard to on-site optimization. If Google distrusts a site’s linkage, its outbound and internal links have no power. That is, the optimization efforts on navigational links, article interlinking etc. are pretty much useless on a site selling links. Internal links not passing relevancy via anchor text are probably worse than the PR loss, because clever SEOs always acquire deep inbound links.

Rescue strategy:

1. Implement the change recommended by Matt Cutts (see the markup sketch after this list):

Google’s view on this is … selling links muddies the quality of the web and makes it harder for many search engines (not just Google) to return relevant results. The rel=nofollow attribute is the correct answer: any site can sell links, but a search engine will be able to tell that the source site is not vouching for the destination page.

2. Write to Google (possibly cc’ing a spam report and reinclusion request) that you’ve changed the linkage of your ads.

3. Hope and pray, on failure goto 2.
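
As a concrete illustration of step 1, a sold or unrelated ad link would be marked up along these lines (the advertiser URL is a hypothetical example), so the source page no longer vouches for the destination:

  <!-- Paid/unrelated ad link: rel="nofollow" tells the engines not to pass
       PageRank, reputation, or anchor text to the destination page. -->
  <a href="http://advertiser.example.com/" rel="nofollow">Sponsor’s product</a>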


Systematic Link Patterns Kill SE-Traffic

Years ago, Google launched a great search engine that ranked Web pages by PageRank within topical matches. Altavista was a big player, and a part of its algo ranked by weighted link popularity. Even Inktomi and a few others began to experiment with linkpop as a ranking criterion.

Search engine optimizers and webmasters launched huge link farms, where thousands of Web sites were linking to each other. From a site owner’s point of view, those link farms, aka spider traps, ‘helped search engine crawlers to index and rank the participating sites’. For a limited period of time, Web sites participating in spider traps were crawled more frequently and, thanks to their linkpop, gained better placements on the search engine result pages.

From a search engine’s point of view, artificial linking for the sole purpose of manipulating search engine rankings is a bad thing. Their clever engineers developed link spam filters, and the engines began to automatically penalize or even ban sites involved in systematic link patterns.

Back in 2000, removing the artificial links and asking for reinclusion worked for most of the banned sites. Nowadays it’s not that easy to get a banned domain back into the index. Savvy webmasters and serious search engine optimizers have found better, honest ways to increase search engine traffic.

However, there are still a lot of link farms out there. Newbies following bad advice still join them, and get caught eventually. Spider trap operators are smart enough to save their own asses, but thousands of participating newbies lose the majority of their traffic when a spider trap gets rolled up by the engines. Some spider traps even charge their participants. Google has just begun to act on a link spam network whose operator earns $46,000 monthly for putting his customers at risk.

Stay away from any automated link exchange ’service’; it’s not worth it. Don’t trust sneaky sales pitches trying to talk you into risky link swaps. Automation in honest link trades should be limited to administrative tasks. Hire an experienced SEO consultant for serious help with your link development.


The Top-5 Methods to Attract Search Engine Spiders

[Full-sized image © Leech Design 2000]

Folks on the boards and in newsgroups waste man-years speculating on the best bait to entrap search engine spiders.

Stop posting, listen to the ultimate advice and boost your search engine traffic to the sky within a few months. Here are the five best methods to get a Web site crawled and indexed quickly:

5 Laying out milk and cookies attracts the Googlebot sisters.
4 Creating a Google Sitemap supports the Googlebot sisters.
3 Providing RSS feeds and adding them to MyYahoo decoys Slurp.
2 Placing bold dollar signs ‘$$’ near the copyright or trademark notice drives the MSN bot crazy.
1 Spreading deep inbound links all over the Internet encourages all spiders to crawl deeply and frequently, and to index fast as well.

Listen, there is only one method that counts: #1. Forget everything you’ve heard about search engine indexing. Concentrate all your efforts on publishing fresh content and acquiring related inbound links to your content pages instead.

Link out to valuable pages within the body text and ask for a backlink. Keep your outbound links up even if you don’t get a link back. Add a links page to each content page and use it to trade links on that content page’s topic. Don’t bother with home page link exchanges.

Ignore tricky ‘backdoor’ advice. There is no such thing as a backdoor to a search engine’s index. Open your front door wide for the engines by actively developing deep inbound links. Once you’re indexed and ranked fairly, fine-tune your search engine spider support. Best of luck.

