Internet marketing is one big popularity contest, and that’s not a good thing

This is a guest post by Tanner Christensen.

What are you doing to make Internet marketing a better industry to be a part of? As it sits now: Internet marketing is one big popularity contest, and that’s not a good thing. Internet marketers are making it nearly impossible for the average person to find valuable content.

The real online content providers - the websites that deserve all of your attention - are becoming harder and harder to discover because of Internet marketers like us. Though Internet marketers - both you and I - can't really be blamed: our job is all about getting attention. The more attention we get for our website(s), the more popular our website(s) become, and the more money we can make.

But because of the recent surge of interest in Internet marketing and search engine optimization, websites that focus on providing content - rather than getting attention - are being ignored. And because these content-focused websites are being cast into the shadows of attention-focused websites, they too are jumping on the Internet marketing popularity contest bandwagon.

Even though every webmaster and his or her mother is jumping on the bandwagon, it's not accurate to say that Internet marketers are making all less-important, less-helpful, and less-useful websites more popular than the really helpful websites. But there is definitely the possibility of real news and information being masked by attention-seeking content.

So what do we do? What do Internet marketers and search engine optimizers do to make sure that the Internet popularity contest doesn't become a contest of lies and attention-seeking tactics, but rather a contest of quality, helpful, interesting, important, groundbreaking content?

The first step is to become a part of the online community. I'm not talking about the Internet marketing community - it's biased in a lot of ways. I'm talking about the real online communities. Doing so will help create a universal sense of online morals: of what is good information and what is bad information.

And discovering where the really helpful and important websites are online will help Internet marketers such as ourselves learn where the websites we work with really should be ranked.

Sure, there are still those people who don't care about quality of content and only care about the almighty dollar sign. But poor content will eventually catch up with them, when websites that really deserve attention in the online popularity contest are lost in the fold and the dollar sign loses its value.

Tanner is a Web specialist and designer who writes helpful, inspiring, and creative internet-related articles. A while ago I contributed an article to his blog Internet Hunger: The anatomy of a debunking post. I think "can aggressive SMO tactics push crap over the long haul" would be an interesting, and related, discussion. I mean, search engines evolve too, not only in Web search, so kinda fair rankings of well-linked crap as well as good stuff not on the SM radar might be possible to some extent.




Text link broker woes: Google’s smart paid link sniffers

After the recent toolbar PageRank massacre, link brokers are in the spotlight. One of them, TNX (beta), asked me to post a paid review of their service. It took a while to explain that nobody can buy a sales pitch here. I offered to write a pitiless honest review for a low hourly fee, provided a sample on their request, but got no order or payment yet. Never mind. Since the topic is hot, here's my review, paid or not.

So what does TNX offer? Basically it’s a semi-automated link exchange where everybody can sign up to sell and/or purchase text links. TNX takes 25% commission, 12.5% from the publisher, and 12.5% from the advertiser. They calculate the prices based on Google’s toolbar PageRank and link popularity pulled from Yahoo. For example a site putting five blocks of four links each on one page with toolbar PageRank 4/10 and four pages with a toolbar PR 3/10 will earn $46.80 monthly.

TNX provides a tool to vary the links, so that when an advertiser purchases for example 100 links, it's possible to output those in 100 variations of anchor text as well as surrounding text before and after the A element, on possibly 100 different sites. Also, TNX has a solution to increase the number of links slowly, so that search engines can't find a gazillion uniform links to a (new) site all of a sudden. Whether or not that's sufficient to simulate natural link growth remains an unanswered question, because I've no access to their algorithm.

Links as well as participating sites are reviewed by TNX staff, and frequently checked with bots. Links shouldn’t appear on pages which aren’t indexed by search engines or viewed by humans, or on 404 pages, pages with long and ugly URLs and such. They don’t accept PPC links or offensive ads.

All links are output server-sided, which requires PHP or Perl (ASP/ASPX coming soon). There is a cache option, so it's not necessary to download the links from the TNX servers for each page view. TNX recommends renaming the /cache/ directory to avoid an easily detectable sign of TNX paid links on a Web site. Links are stored as plain HTML; besides the target="_blank" attribute there is no obvious footprint or pattern at the link level. Example:
Have a website? See this <a href="http://www.example.com" target="_blank">free affiliate program</a>.
Have a blog? Check this <a href="http://www.example.com" target="_blank">affiliate program with high commissions</a> for publishers.

Webmasters can enter any string as delimiter, for example <br /> or “•”:

Have a website? See this free affiliate program. • Have a blog? Check this affiliate program with high commissions for publishers.
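Purely to illustrate the mechanism described above (this is a rough sketch, not TNX's actual code; the feed URL, cache path and refresh interval are invented), such a cached, server-sided link block with a custom delimiter could be produced along these lines:

<?php
// Rough sketch of server-sided link output with a local cache and a custom
// delimiter. The feed URL and cache path are invented examples.
function outputPaidLinks($feedUrl, $cacheFile, $maxAgeSeconds, $delimiter) {
    // Refresh the cache only when it's missing or stale, so that not every
    // page view hits the remote link server.
    if (!file_exists($cacheFile) || (time() - filemtime($cacheFile)) > $maxAgeSeconds) {
        $feed = @file_get_contents($feedUrl); // one link snippet per line
        if ($feed !== FALSE) {
            @file_put_contents($cacheFile, $feed);
        }
    }
    $cached = @file_get_contents($cacheFile);
    if ($cached === FALSE) return "";
    // Join the cached link snippets with the webmaster-defined delimiter.
    $links = array_filter(array_map('trim', explode("\n", $cached)));
    return implode(" " . $delimiter . " ", $links);
}
print outputPaidLinks("http://links.example.com/feed?site=123",
    $_SERVER["DOCUMENT_ROOT"] . "/linkcache/links.txt", 3600, "&bull;");
?>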

Publishers can choose from 17 niches, 7 languages, 5 linkpop levels, and 7 toolbar PageRank values to target their ads.

Judging from the system stats in the members area, the service is widely used:

  • As of today [2007-11-06] we have 31,802 users (daily growth: +0.62%)
  • Links in the system: 31,431,380
  • Links created in last hour: 1,616
  • Number of pages indexed by TNX: 37,221,398

Long story short, TNX jumped through many hoops to develop a system which is supposed to trade paid links that are undetectable by search engines. Is that so?

The major weak point is the system's growth, and the fact that its users are humans. Even if such a system were perfect, users will make mistakes and reveal the whole network to search engines. Here is how Google has identified most if not all of the TNX paid links:

Some Webmasters put their TNX links in sidebars under a label that identifies them as paid links. Google crawled those pages, and stored the link destinations in its paid links database. Also, Google devalued at least the labelled links; possibly the whole page, or even the complete site, lost its ability to pass link juice because those few paid links aren't condomized.

Many Webmasters implemented their TNX links in templates, so that they appear on a large number of pages. Actually, that’s recommended by TNX. Even if the advertisers have used the text variation tool, their URLs appeared multiple times on each site. Google can detect site wide links, even if not each and every link appears on all pages, and flags them accordingly.

Maybe even a few Googlers have signed up and served the TNX links on their personal sites to gather examples, although that wasn't necessary because so many Webmasters with URLs in their signatures have told Google in this DP thread that they've signed up and at least tested TNX links on their pages.

Next Google compared the anchor text as well as the surrounding text of all flagged links, and found some patterns. Of course, putting text before and after the linked anchor text seems to be a smart way to fake a natural link, but in fact Webmasters applied a bullet-proof procedure to outsmart themselves: with multiple occurrences of the same text constellations pointing to an URL, especially when found on unrelated sites (different owners, hosts etc.; topical relevancy plays no role in this context), paid link detection is a breeze. Linkage like that may be "natural" with regard to patterns like site wide advertising or navigation, but a lookup in Google's links database revealed that the same text constellations and URLs were found on n other sites too.

Now that Google had compiled the seed, each and every instance of Googlebot delivered more evidence. It took Google only one crawl cycle to identify most sites carrying TNX links, and all TNX advertisers. Paid link flags from pages on sites with a low crawling frequency were delivered in addition. Meanwhile Google has drawn a comprehensive picture of the whole TNX network.
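To make the idea more tangible, here is a toy sketch of that kind of pattern matching (my guess at the principle, not Google's actual algorithm; the data structure, function name and threshold are invented):

<?php
// Toy sketch: flag link constellations that repeat verbatim across many
// unrelated hosts. Not Google's algorithm; names and threshold are invented.
function flagRepeatedLinkPatterns(array $crawledLinks, $threshold = 5) {
    $patterns = array();
    foreach ($crawledLinks as $link) {
        // Fingerprint: text before + anchor text + text after + target URL.
        $key = md5($link["before"] . "|" . $link["anchor"] . "|"
            . $link["after"] . "|" . $link["target"]);
        $patterns[$key]["target"] = $link["target"];
        $patterns[$key]["hosts"][$link["sourceHost"]] = TRUE;
    }
    $flaggedTargets = array();
    foreach ($patterns as $pattern) {
        // The same constellation repeated on many different hosts smells like
        // a brokered link; site wide navigation on ONE host does not.
        if (count($pattern["hosts"]) >= $threshold) {
            $flaggedTargets[] = $pattern["target"];
        }
    }
    return array_unique($flaggedTargets);
}
?>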

I’ve developed such a link network many years ago (it’s defunct now). It was successful because only very experienced Webmasters controlling a fair amount of squeaky clean sites were invited. Allowing newbies to participate in such an organized link swindle is the kiss of death, because newbies do make newbie mistakes, and Google makes use of newbie mistakes to catch all participants. By the way, with the capabilities Google has today, my former approach to manipulate rankings with artificial linkage would be detectable with statistical methods similar to the algo outlined above, despite the closed circle of savvy participants.

From reading the various DP threads about TNX as well as their sales pitches, I've recognized a very popular misunderstanding of Google's mentality. Folks are worrying whether an algo can detect the intention of links or not, usually focusing on particular links or linking methods. Google on the other hand looks at the whole crawlable Web. When they develop a paid link detection algo, they have a copy of the known universe to play with, as well as a complete history of each and every hyperlink crawled by Ms. Googlebot since 1998 or so. Naturally, their statistical methods will catch massive artificial linkage first, but fine-tuning the sensitivity of paid link sniffers, or creating variants to cover different linking patterns, is no big deal. Of course there is always a way to hide a paid link, but nobody can hide millions of them.

Unfortunately, the unique selling point of the TNX service (that goes for all link brokers, by the way) is manipulation of search engine rankings, hence even if they offered nofollow'ed links to trade traffic instead of PageRank, most probably they would be forced to reduce the prices. Since TNX links are rather cheap, I'm not sure that would pay. It would be a shame if they changed the business model and it didn't pay off for TNX, because the underlying concept is great. It just shouldn't be used to exchange clean links. All the tricks developed to outsmart Google, like the text variation tool or keeping links off pages that get no traffic, are suitable to serve non-repetitive ads (coming with attractive CTRs) to humans.

I’ve asked TNX: I’ve decided to review your service on my blog, regardless whether you pay me or not. The result of my research is that I can’t recommend TNX in its current shape. If you still want a paid review, and/or a quote in the article, I’ve a question: Provided Google has drawn a detailed picture of your complete network, are you ready to switch to nofollow’ed links in order to trade traffic instead of PageRank, possibly with slightly reduced prices? Their answer:

We would be glad to accept your offer of a free review, because we don’t want to pay for a negative review.
Nobody can draw a detailed picture of our network - it’s impossible for one advertiser to buy links from all or a majority sites of our network. Many webmasters choose only relevant advertisers.
We will not switch to nofollow’ed links, but we are planning not to use Google PR for link pricing in the near future - we plan to use our own real-time page-value rank.

Well, it’s not necessary to find one or more links on all sites to identify a network.




If you suffer from a dead slow FireFox browser …

… and you've tried all the free-memory-on-minimize tricks out there, you could do a couple of things.

You could try to update to the newest version. If you're using old extensions like I do, FireFox has already sent a bazillion security alerts trying to talk you into the update process. Well, you'd get rid of the update alerts, but that doesn't solve all your problems, because your beloved extensions get deactivated automatically.

A reinstall is brutal, but helps. Unfortunately you’ll lose a lot of stuff.

Waiting until your history.dat file exceeds 12 gigs and the session saver forces the creation of prefs-9999.js in your profile is another way to handle the crisis. prefs-9999.js is the last possible prefs file (prefs files store all your settings, by the way). Once FireFox creates it, the browser stops working all of a sudden and cannot be restarted.

I figured out that such a prefs-n.js file is created up to a few times daily, and with every new file FireFox slows down a bit. It is absolutely brilliant that in times of 64-bit integers the file counter has such a low hard limit. I mean, if my browser kept getting that much slower, I wouldn't be able to surf at all in a few weeks.

So I appreciated the heart attack forcing me to look into the issue, deleted all prefs-*.js files but kept prefs.js itself, and now I've got my fast FireFox 1.5 back. I lost a few open tabs and such, because I was too lazy to rename prefs-9999.js to prefs-1.js. Memory allocation right after starting the browser went down to a laughable 300 megs and still counting, so I guess there's some work left before I can continue surfing with hundreds of tabs in several project-specific windows. Sigh.

I’m an old fart and tend to forget such things, hence I post it and next time my browser slows down I just ask you guys on Twitter what to do with it. Thanks in advance for your help.




Gaming Sphinn is not worth it

OMFG, yet another post on Sphinn? Yup. I tell you why gaming Sphinn is counterproductive, because I just don't want to read another whiny rant along the lines of "why do you ignore my stuff whilst A listers [whatever this undefined term means] get their crap sphunn hot in no time". Also, discussions assuming that success equals bad behavior, like this or this one, are neither funny nor useful. As for the whiners: Grow the fuck up and produce outstanding content, then network politely but not obtrusively to promote it. As for the gamers: Think before you ruin your reputation!

What motivates a wannabe Internet marketer to game Sphinn?

Traffic of course, but that’s a myth. Sphinn sends very targeted traffic but also very few visitors (see my stats below).

Free uncondomized links. Ok, that works, one can gain enough link love to get a page indexed by the search engines, but for this purpose it’s not necessary to push the submission to the home page.

Attention is up next. Yep, Sphinn is an eldorado for attention whores, but not everybody is an experienced high-class call girl. Most are amateurs giving it a (first) try, or wrecked hookers pushing too hard to attract positive attention.

The keyword is positive attention. Sphinners are smart, they know every trick in the book. Many of them make a living with gaming, er, creative use of social media. Cheating professional gamblers is a waste of time, and will not produce positive attention. Even worse, the shit sticks to the handle of the unsuccessful cheater (and in many cases to the real name). So if you want to burn your reputation, go found a voting club to feed your crap.

Fortunately, getting caught for artificial voting at Sphinn comes with devalued links too. The submitted stories are taken off the list, which means not a single link at Sphinn (besides profile pages) feeds them any more, hence search engines forget them. Instead of a good link from an unpopular submission you get zilch when you try to cheat your way to the popular links pages.

Although Sphinn doesn’t send shitloads of traffic, this traffic is extremely valuable. Many spinners operate or control blogs and tend to link to outstanding articles they found at Sphinn. Many sphinners have accounts on other SM sites too, and bookmark/cross-submit good content. It’s not unusual that 10 visits from Sphinn result in hundreds or even thousands of hits from StumbleUpon & Co. — but spinners don’t bookmark/blog/cross-submit/stumble crap.

So either write great content and play by the rules, or get nowhere with your crappy submission. The first “10 reasons why 10 tricks posts about 10 great tips to write 10 numbered lists” submission was fun. The 10,000 plagiarisms following were just boring noise. Nobody except your buddies or vote bots sphinn crap like that, so don’t bother to provide the community with footprints of your lousy gaming.

If you’re playing number games, here is why ruining a reputation by gaming Sphinn is not worth it. Look at my visitor stats from July to today. I got 3.6k referrers in 4 months from Sphinn because a few of my posts went hot. When a post sticks with 1-5 votes, you won’t attract much more click throughs than from those 1-5 folks who sphunn it (that would give 100-200 hits or so with the same amount of submissions). When you cheat, the story gets buried and you get nothing but flames. Think about that. Thanks.

Rank Last Date/Time Referral Site Count
1 Oct 09, 2007 @ 23:29 http://sphinn.com/story/1622 504
2 Oct 23, 2007 @ 14:53 http://sphinn.com/story/2764 419
3 Nov 01, 2007 @ 03:42 http://sphinn.com 293
4 Oct 08, 2007 @ 04:21 http://sphinn.com/story/5469 288
5 Nov 02, 2007 @ 13:35 http://sphinn.com/story/8883 192
6 Oct 09, 2007 @ 23:38 http://sphinn.com/story/4335 185
7 Oct 22, 2007 @ 23:55 http://sphinn.com/story/5362 139
8 Oct 29, 2007 @ 15:02 http://sphinn.com/upcoming 131
9 Nov 02, 2007 @ 13:34 http://sphinn.com/story/7170 131
10 Sep 10, 2007 @ 09:09 http://sphinn.com/story/1976 116
11 Oct 15, 2007 @ 22:40 http://sphinn.com/story/6122 113
12 Sep 22, 2007 @ 13:39 http://sphinn.com/story/3593 90
13 Oct 05, 2007 @ 21:56 http://sphinn.com/story/5648 87
14 Sep 22, 2007 @ 13:25 http://sphinn.com/story/4072 80
15 Oct 14, 2007 @ 17:24 http://sphinn.com/story/5973 77
16 Aug 30, 2007 @ 04:17 http://sphinn.com/story/1796 72
17 Oct 16, 2007 @ 05:46 http://sphinn.com/story/6761 61
18 Oct 11, 2007 @ 05:56 http://sphinn.com/story/1447 60
19 Sep 13, 2007 @ 12:27 http://sphinn.com/story/4548 54
20 Nov 02, 2007 @ 22:14 http://sphinn.com/story/11547 53
21 Sep 03, 2007 @ 09:34 http://sphinn.com/story/4068 44
22 Oct 09, 2007 @ 23:40 http://sphinn.com/story/5093 42
23 Nov 02, 2007 @ 01:46 http://sphinn.com/story/248 41
24 Sep 14, 2007 @ 05:58 http://sphinn.com/story/2287 36
25 Oct 31, 2007 @ 06:17 http://sphinn.com/story/11205 35
26 Oct 07, 2007 @ 12:07 http://sphinn.com/story/6124 25
27 Nov 01, 2007 @ 09:41 http://sphinn.com/user/view/profile/Sebastian 22
28 Aug 08, 2007 @ 10:52 http://sphinn.com/story/245 21
29 Sep 02, 2007 @ 19:17 http://sphinn.com/story/3877 17
30 Sep 22, 2007 @ 00:42 http://sphinn.com/story/4968 17
31 Oct 01, 2007 @ 12:49 http://sphinn.com/story/5310 17
32 Aug 30, 2007 @ 08:20 http://sphinn.com/story/4143 14
33 Sep 11, 2007 @ 21:38 http://sphinn.com/story/3783 13
34 Nov 01, 2007 @ 15:50 http://sphinn.com/published/page/2 11
35 Sep 01, 2007 @ 23:03 http://sphinn.com/story/597 10
36 Oct 24, 2007 @ 18:17 http://sphinn.com/story/1767 10
37 Sep 15, 2007 @ 08:26 http://sphinn.com/story.php?id=5469 8
38 Oct 30, 2007 @ 09:42 http://sphinn.com/upcoming/mostpopular 7
39 Oct 24, 2007 @ 18:38 http://sphinn.com/story/10881 7
40 Oct 30, 2007 @ 01:19 http://sphinn.com/upcoming/page/2 6
41 Sep 20, 2007 @ 07:09 http://sphinn.com/user/view/profile/login/Sebastian 5
42 Jul 22, 2007 @ 09:39 http://sphinn.com/story/1017 5
43 Oct 13, 2007 @ 08:34 http://sphinn.com/published/week 5
44 Sep 08, 2007 @ 04:17 http://sphinn.com/story/4653 5
45 Oct 31, 2007 @ 06:55 http://sphinn.com/story/11614 5
46 Aug 13, 2007 @ 03:06 http://sphinn.com/story/2764/editcomment/4018 4
47 Aug 23, 2007 @ 07:52 http://sphinn.com/story.php?id=3593 4
48 Sep 20, 2007 @ 06:21 http://sphinn.com/published/page/1 4
49 Oct 23, 2007 @ 15:01 http://sphinn.com/story/748 3
50 Jul 29, 2007 @ 10:47 http://sphinn.com/story/title/Google-launched-a-free-ranking-checker 3
51 Sep 30, 2007 @ 21:13 http://sphinn.com/category/Google/parent_name/Google 3
52 Aug 25, 2007 @ 04:47 http://sphinn.com/story.php?id=3735 3
53 Sep 15, 2007 @ 11:28 http://sphinn.com/story.php?id=5648 3
54 Sep 29, 2007 @ 01:35 http://sphinn.com/story/7058 3
55 Oct 28, 2007 @ 22:56 http://sphinn.com/greatesthits 3
56 Oct 23, 2007 @ 04:44 http://sphinn.com/story/10380 3
57 Oct 27, 2007 @ 04:10 http://sphinn.com/story/11233 3
58 Jul 13, 2007 @ 04:23 Google Search: http://sphinn.com 2
59 Jul 21, 2007 @ 03:19 http://sphinn.com/story.php?id=849 2
60 Jul 27, 2007 @ 10:06 http://sphinn.com/story.php?id=1447 2
61 Jul 30, 2007 @ 20:09 http://sphinn.com/story.php?id=1796 2
62 Aug 07, 2007 @ 10:01 http://sphinn.com/published/page/3 2
63 Aug 13, 2007 @ 11:20 http://sphinn.com/story.php?id=2764 2
64 Sep 05, 2007 @ 05:23 http://sphinn.com/story/3735 2
65 Aug 28, 2007 @ 01:56 http://sphinn.com/story.php?id=3877 2
66 Aug 27, 2007 @ 10:01 http://sphinn.com/submit.php?url=http://sebastians-pamphlets.com/links/categories 2
67 Aug 31, 2007 @ 14:13 http://sphinn.com/story.php?id=4335 2
68 Sep 02, 2007 @ 14:29 http://sphinn.com/story.php?id=1622 2
69 Sep 08, 2007 @ 19:48 http://sphinn.com/story.php?id=4548 2
70 Sep 05, 2007 @ 01:07 http://sphinn.com/submit.php?url=http://sebastians-pamphlets.com/why-ebay-and-wikipedia-rule-googles-serps 2
71 Sep 06, 2007 @ 13:22 http://sphinn.com/published/page/4 2
72 Sep 16, 2007 @ 13:30 http://sphinn.com/story.php?id=3783 2
73 Sep 18, 2007 @ 11:55 http://sphinn.com/story.php?id=5973 2
74 Sep 19, 2007 @ 08:15 http://sphinn.com/story.php?id=6122 2
75 Sep 19, 2007 @ 14:37 http://sphinn.com/story.php?id=6124 2
76 Oct 23, 2007 @ 00:07 http://sphinn.com/story/10387 2
77 Jul 16, 2007 @ 18:21 http://sphinn.com/upcoming/category/AllCategories/parent_name/All Categories 1
78 Jul 19, 2007 @ 20:19 http://sphinn.com/story/864 1
79 Jul 20, 2007 @ 15:57 http://sphinn.com/story/title/Buy-Viagra-from-Reddit 1
80 Jul 27, 2007 @ 10:48 http://sphinn.com/story/title/Blogger-to-rule-search-engine-visibility 1
81 Jul 31, 2007 @ 06:07 http://sphinn.com/story/title/The-Unavailable-After-tag-is-totally-and-utterly-useless 1
82 Aug 02, 2007 @ 14:45 http://sphinn.com/user/view/history/login/Sebastian 1
83 Aug 03, 2007 @ 10:59 http://sphinn.com/story.php?id=1976 1
84 Aug 06, 2007 @ 03:59 http://sphinn.com/user/view/commented/login/Sebastian 1
85 Aug 15, 2007 @ 08:27 http://sphinn.com/category/LinkBuilding 1
86 Aug 15, 2007 @ 14:17 http://sphinn.com/story/2764/editcomment/4362 1
87 Aug 28, 2007 @ 13:42 http://sphinn.com/story/849 1
88 Sep 09, 2007 @ 15:15 http://sphinn.com/user/view/commented/login/flyingrose 1
89 Sep 10, 2007 @ 05:15 http://sphinn.com/published/page/20 1
90 Sep 10, 2007 @ 05:55 http://sphinn.com/published/page/19 1
91 Sep 11, 2007 @ 12:22 http://sphinn.com/published/page/8 1
92 Sep 11, 2007 @ 23:13 http://sphinn.com/category/Blogging 1
93 Sep 12, 2007 @ 09:04 http://sphinn.com/story.php?id=5362 1
94 Sep 13, 2007 @ 06:36 http://sphinn.com/category/GoogleSEO/parent_name/Google 1
95 Sep 14, 2007 @ 08:21 http://hwww.sphinn.com 1
96 Sep 16, 2007 @ 14:52 http://sphinn.com/GoogleSEO/Did-Matt-Cutts-by-accident-reveal-a-sure-fire-procedure-to-identify-supplemental-results 1
97 Sep 18, 2007 @ 08:05 http://sphinn.com/story/5721 1
98 Sep 18, 2007 @ 09:08 http://sphinn.com/story/title/If-yoursquore-not-an-Amway-millionaire-avoid-BlogRush-like-the-plague 1
99 Sep 18, 2007 @ 10:02 http://sphinn.com/story/5973#wholecomment8559 1
100 Sep 19, 2007 @ 11:48 http://sphinn.com/user/view/voted/login/bhancock 1
101 Sep 19, 2007 @ 20:27 http://sphinn.com/published/page/5 1
102 Sep 20, 2007 @ 00:39 http://blogmarks.net/my/marks,new?title=How to get the perfect logo for your blog&url=http://sebastians-pamphlets.com/how-to-get-the-perfect-logo-for-your-blog/&summary=&via=http://sphinn.com/story/6122 1
103 Sep 20, 2007 @ 01:34 http://sphinn.com/user/page/3/voted/Wiep 1
104 Sep 24, 2007 @ 15:49 http://sphinn.com/greatesthits/page/3 1
105 Sep 24, 2007 @ 19:51 http://sphinn.com/story.php?id=6761 1
106 Sep 24, 2007 @ 22:32 http://sphinn.com/greatesthits/page/2 1
107 Sep 26, 2007 @ 15:13 http://sphinn.com/story.php?id=7170 1
108 Sep 29, 2007 @ 05:27 http://sphinn.com/category/SphinnZone 1
109 Oct 09, 2007 @ 11:44 http://sphinn.com/story.php?id=8883 1
110 Oct 10, 2007 @ 10:04 http://sphinn.com/published/month 1
111 Oct 24, 2007 @ 15:07 http://sphinn.com/story.php?id=10881 1
112 Oct 26, 2007 @ 09:53 http://sphinn.com/story.php?id=11205 1
113 Oct 30, 2007 @ 08:58 http://sphinn.com/upcoming/page/3 1
114 Oct 30, 2007 @ 12:31 http://sphinn.com/upcoming/most 1
Total 3,688



The day the routers died

Why the fuck do we dumb and clueless Internet marketers care about Google's Toolbar PageRank when the Internet faces real issues? Well, both the toolbar slider and IPv4 are somewhat finite.

I can hear the IM crowd singing "The day green pixels died" … whilst Matt's gang in building 43 intones "No mercy, smack paid links, no place to hide for TLA links" … Enjoy this video, it's friggin' hilarious:

[Embedded video: "The Day The Routers Died"]

Since Gary Feldman’s song “The Day The Routers Died” will become an evergreen soon, I thought you might be interested in a transcript:

A long long time ago
I can still remember
When my laptop could connect elsewhere.

And I tell you all there was a day
The network card I threw away
Had a purpose and it worked for you and me.

But 18 years completely wasted
With each address we’ve aggregated
The tables overflowing
The traffic just stopped flowing.

And now we’re bearing all the scars
And all my traceroutes showing stars
The packets would travel faster in cars
The day the routers died.

So bye bye, folks at RIPE:55
Be persuaded to upgrade it or your network will die
IPv6 makes me let out a sigh
But I spose we’d better give it a try
I suppose we’d better give it a try!

Now did you write an RFC
That dictated how we all should be
Did we listen like we should that day?

Now were you back at RIPE fifty-four
Where we heard the same things months before
And the people knew they’d have to change their ways.

And we knew that all the ISPs
Could be future proof for centuries.

But that was then not now
Spent too much time playing WoW.

Ooh there was time we sat on IRC
Making jokes on how this day would be
Now there’s no more use for TCP
The day the routers died.

So bye bye, folks at RIPE:55
Be persuaded to upgrade it or your network will die
IPv6 just makes me let out a sigh
But I spose we’d better give it a try
I suppose we’d better give it a try!

I remember those old days I mourn
Sitting in my room, downloading porn
Yeah that’s how it used to be.

When the packets flowed from A to B
Via routers that could talk IP
There was data [that] could be exchanged between you and me.

Oh but I could see you all ignore
The fact we’d fill up IPv4!

But we all lost the nerve
And we got what we deserved!

And while we threw our network kit away
And wished we’d heard the things they say
Put all our lives in disarray
The day the routers died.

So bye bye, folks at RIPE:55
Be persuaded to upgrade it or your network will die
IPv6 just makes me let out a sigh
But I spose we’d better give it a try
I suppose we’d better give it a try!

Saw a man with whom I used to peer
Asked him to rescue my career
He just sighed and turned away.

I went down to the ‘net cafe
That I used to visit everyday
But the man there said I might as well just leave.

[And] now we’ve all lost our purpose
My cisco shares completely worthless
No future meetings for me
At the Hotel Krasnapolsky.

And the men that make us push and push
Like Geoff Huston and Randy Bush
Should’ve listened to what they told us
The day the routers died.

So bye bye, folks at RIPE:55
Be persuaded to upgrade it or your network will die
IPv6 just makes me let out a sigh
But I spose we’d better give it a try
[I suppose we’d better give it a try!]

Recorded at the RIPE:55 meeting in Amsterdam (NL) at the Krasnapolsky Hotel between 22 and 26 October 2007.

Just in case the video doesn’t load, here is another recording.




A pragmatic defence against Google’s anti paid links campaign

Google’s recent shot across the bows of a gazillion sites handling paid links, advertising, or internal cross links not compliant to Google’s imagination of a natural link is a call for action. Google’s message is clear: “condomize your commercial links or suffer” (from deducted toolbar PageRank, links without the ability to pass real PageRank and relevancy signals, or perhaps even penalties).

Of course that's somewhat evil, because applying nofollow values to all sorts of links is not exactly a natural thing to do; visitors don't care about invisible link attributes, and sometimes they're even pissed when they get redirected to an URL not displayed in their status bar. Also, this requirement forces Webmasters to invest enormous efforts in code maintenance for the sole purpose of satisfying search engines. The argument "if Google doesn't like these links, then they can discount them in their system, without bothering us" has its merits, but unfortunately that's not the way Google's cookie crumbles, for various reasons. Hence let's develop a pragmatic procedure to handle those links.

The problem

Google thinks that uncondomized paid links as well as commercial links to sponsors or affiliated entities aren't natural, because the terms "sponsor|pay for review|advertising|my other site|sign-up|…" and "editorial vote" are not compatible in the sense of Google's guidelines. This view of the Web's linkage is pretty black vs. white.

Either you link out because a sponsor bought ads, or you don’t sell ads and link out for free because you honestly think your visitors will like a page. Links to sponsors without condom are black, links to sites you like and which you don’t label “sponsor” are white.

There’s nothing in between, respectively gray areas like links to hand picked sponsors on a page with a gazillion of links count as black. Google doesn’t care whether or not your clean links actually pass a reasonable amount of PageRank to link destinations which buy ad space too, the sole possibility that those links could  influence search results is enough to qualify you as sort of a link seller.

The same goes for paid reviews on blogs and whatnot, see for example Andy’s problem with his honest reviews which Google classifies as paid links, and of course all sorts of traffic deals, affiliate links, banner ads and stuff like that.

You don’t even need to label a clean link as advert or sponsored. If the link destination matches a domain in Google’s database of on-line advertisers, link buyers, e-commerce sites / merchants etcetera, or Google figures out that you link too much to affiliated sites or other sites you own or control, then your toolbar PageRank is toast and most probably your outgoing links will be penalized. Possibly these penalties have impact on your internal links too, what results in less PageRank landing on subsidiary pages. Less PageRank gathered by your landing pages means less crawling, less ranking, less SERP referrers, less revenue.

The solution

You’re absolutely right when you say that such search engine nitpicking should not force you to throw nofollow crap on your links like confetti. From your and my point of view condomizing links is wrong, but sometimes it’s better to pragmatically comply to such policies in order to stay in the game.

Although uncrawlable redirect scripts have advantages in some cases, the simplest procedure to condomize a link is the rel-nofollow microformat. Here is an example of a googlified affiliate link:
<a href="http://sponsor.com/?affID=1" rel="nofollow">Sponsor</a>

Why serve your visitors search engine crawler directives?

Complying to Google’s laws does not mean that you must deliver crawler directives like rel=”nofollow” to your visitors. Since Google is concerned about search engine rankings influenced by uncondomized links with commercial intent, serving crawler directives to crawlers and clean links to users is perfectly in line with Google’s goals. Actually, initiatives like the X-Robots-Tag make clear that hiding crawler directives from users is fine with Google. To underline that, here is a quote from Matt Cutts:

[…] If you want to sell a link, you should at least provide machine-readable disclosure for paid links by making your link in a way that doesn’t affect search engines. […]

The other best practice I’d advise is to provide human readable disclosure that a link/review/article is paid. You could put a badge on your site to disclose that some links, posts, or reviews are paid, but including the disclosure on a per-post level would better. Even something as simple as “This is a paid review” fulfills the human-readable aspect of disclosing a paid article. […]

Google’s quality guidelines are more concerned with the machine-readable aspect of disclosing paid links/posts […]

To make sure that you’re in good shape, go with both human-readable disclosure and machine-readable disclosure, using any of the methods [uncrawlable redirects, rel-nofollow] I mentioned above.
[emphasis mine]

Since Google devalues paid links anyway, search engine friendly cloaking of rel-nofollow for Googlebot is a non-issue with advertisers, as long as this fact is disclosed. I bet most link buyers look at the magic green pixels anyway, but that’s their problem.

How to cloak rel-nofollow for search engine crawlers

I’ll discuss a PHP/Apache example, but this method is adaptable to other server sided scripting languages like ASP or so with ease. If you’ve a static site and PHP is available on your (*ix) host, you need to tell Apache that you’re using PHP in .html (.htm) files. Put this statement in your root’s .htaccess file:
AddType application/x-httpd-php .html .htm

Next create a plain text file, insert the code below, and upload it as “funct_nofollow.php” or so to your server’s root directory (or a subdirectory, but then you need to change some code below).
<?php
function makeRelAttribute ($linkClass) {
    $relValue = "";
    // Optional 2nd input parameter: a REL value to prepend, e.g. "external"
    $numargs = func_num_args();
    if ($numargs >= 2) {
        $relValue = func_get_arg(1) . " ";
    }
    // Does the visitor come from a Google/Yahoo SERP?
    $referrer = isset($_SERVER["HTTP_REFERER"]) ? $_SERVER["HTTP_REFERER"] : "";
    $refUrl = ($referrer != "") ? parse_url($referrer) : array();
    $isSerpReferrer = FALSE;
    if (isset($refUrl["host"]) &&
        (stristr($refUrl["host"], "google.") ||
         stristr($refUrl["host"], "yahoo."))) {
        $isSerpReferrer = TRUE;
    }
    // Is the requestor a search engine crawler?
    $userAgent = isset($_SERVER["HTTP_USER_AGENT"]) ? $_SERVER["HTTP_USER_AGENT"] : "";
    $isCrawler = FALSE;
    if (stristr($userAgent, "Googlebot") ||
        stristr($userAgent, "Slurp")) {
        $isCrawler = TRUE;
    }
    // Serve the machine-readable link condoms to crawlers only
    if ($isCrawler /*|| $isSerpReferrer*/ ) {
        if ($linkClass == "ad")   $relValue .= "advertising nofollow";
        if ($linkClass == "paid") $relValue .= "sponsored nofollow";
        if ($linkClass == "own")  $relValue .= "affiliated nofollow";
        if ($linkClass == "vote") $relValue .= "editorial dofollow";
    }
    if (empty($relValue)) {
        return "";
    }
    return " rel=\"" . trim($relValue) . "\" ";
} // end function makeRelAttribute
?>

Next put the code below in a PHP file you’ve included in all scripts, for example header.php. If you’ve static pages, then insert the code at the very top.
<?php
@include($_SERVER["DOCUMENT_ROOT"] ."/funct_nofollow.php");
?>

Do not paste the function makeRelAttribute itself! If you spread code this way, you have to edit tons of files when you need to change the functionality later on.

Now you can use the function makeRelAttribute($linkClass,$relValue) within the scripts or HTML pages. The function has an input parameter $linkClass and knows the (self-explanatory) values "ad", "paid", "own" and "vote". The second (optional) input parameter is a value for the A element's REL attribute itself. If you provide it, the crawler-specific value gets appended to it; or, if makeRelAttribute doesn't detect a spider, the function creates a REL attribute with your value only. Examples below. You can add more user agents, or serve rel-nofollow to visitors coming from SERPs by enabling the || $isSerpReferrer condition (remove the /* and */ comment markers).

When you code a hyperlink, just add the function to the A tag. Here is a PHP example:
print "<a href=\"http://google.com/\"" .makeRelAttribute("ad") .">Google</a>";

will output
<a href="http://google.com/" rel="advertising nofollow" >Google</a>
when the user agent is Googlebot, and
<a href="http://google.com/">Google</a>
to a browser.

If you can’t write nice PHP code, for example because you’ve to follow crappy guidelines and worst practices with a WordPress blog, then you can mix HTML and PHP tags:
<a href="http://search.yahoo.com/"<?php print makeRelAttribute("paid"); ?>>Yahoo</a>

Please note that this method is not safe with search engines or unfriendly competitors when you want to cloak for other purposes. Also, the link condoms are served to crawlers only, which means search engine staff reviewing your site with a non-crawler user agent name won't spot the nofollow'ed links unless they check the engine's cached page copy. An HTML comment in HEAD like "This site serves machine-readable disclosures, e.g. crawler directives like rel-nofollow applied to links with commercial intent, to Web robots only." as well as a similar comment line in robots.txt would certainly help to pass reviews by humans.

A Google-friendly way to handle paid links, affiliate links, and cross linking

Load this page with different user agents and referrers. You can do this for example with a FireFox extension like PrefBar. For testing purposes you can use these user agent names:
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)

and these SERP referrer URLs:
http://google.com/search?q=viagra
http://search.yahoo.com/search?p=viagra&ei=utf-8&iscqry=&fr=sfp

Just enter these values in PrefBar's user agent and referrer spoofing options (click "Customize" on the toolbar, select "User Agent" / "Referrerspoof", click "Edit", add a new item, label it, then insert the strings above). Here is the code above in action:

Referrer URL: (your referrer)
User Agent Name: (your user agent)
Ad makeRelAttribute("ad"): Google
Paid makeRelAttribute("paid"): Yahoo
Own makeRelAttribute("own"): Sebastian's Pamphlets
Vote makeRelAttribute("vote"): The Link Condom
External makeRelAttribute("", "external"): W3C rel="external"
Without parameters makeRelAttribute(""): Sphinn

When you change your browser’s user agent to a crawler name, or fake a SERP referrer, the REL value will appear in the right column.

When you’ve developed a better solution, or when you’ve a nofollow-cloaking tutorial for other programming languages or platforms, please let me know in the comments. Thanks in advance!




Google Toolbar PageRank deductions make sense

Since toolbar PR has been stale since April, and now only a few sites were "updated", without any traffic losses, I can imagine that's just a "watch out" signal from Google, not yet a penalty. Of course it's not a conventional toolbar PageRank update, because new pages aren't affected. That means the deductions are not caused by a finite amount of PageRank spread over more pages discovered by Google since the last toolbar PR update.

Unfortunately, in the current toolbar PR hysteria next to nobody tries to figure out Google's message. Crying foul is not very helpful, since Google is not exactly known as a company that revises such decisions based on Webmaster rants lashing out at "unfair penalties".

By the way, I think Andy is spot on. Paid links are definitely a cause of toolbar PageRank downgrades. Artificial links of any kind are another issue. Google obviously has a different take on interlinking and cross-linking, for example. Site owners argue that it makes business sense, but Google might think most of these links come without value for its users. And there are tons more pretty common instances of "link monkey business".

Maybe Google alerts all sorts of sites violating the SEO bible’s twelve commandments with a few less green pixels, before they roll out new filters which would catch those sins and penalize the offending pages accordingly. Actually, this would make a lot of sense.

All site owners and Webmasters monitor their toolbar PR. Any significant changes are discussed in a huge community. If the crowd assumes that artificial links cause toolbar PR deductions, many sites will change their linkage. This happened already after the first shot across the bows two weeks ago. And it will work again. Google gets the desired results: less disliked linkage, and fewer sites selling uncondomized links.

That’s quite smart. Google has learned that they can’t ban or overpenalize popular sites, because that leads to fucked up search results for not only navigational search queries, in other words pissed searchers. Taking back a few green pixels from the toolbar on the other hand is not an effective penalty, because toolbar PR is unrelated to everything that matters. It is however a message with guaranteed delivery.

Running algos in the development stage on the whole index and using their findings to manipulate toolbar PageRank data hurts nobody, but might force many Webmasters to change their stuff in order to comply with Google's laws. As a side effect, this procedure even helps to avoid too much collateral damage when the actual filters become active later on.

There seems to exist another pattern. Most sites targeted by the recent toolbar PageRank deductions are SEO aware to some degree. They will spread the word. And complain loudly. Google has quite a few folks on the payroll who monitor the blogosphere, SEO forums, Webmaster hangouts and whatnot. Analyzing such reactions is a great way to gather input usable to validate and fine tune not yet launched algos.

Of course that’s sheer speculation. What do you think, does Google use toolbar PR as a “change your stuff or find yourself kicked out soon” message? Or ist it just a try to make link selling less attractive?

Update: Insightful posts on Google's toolbar PageRank manipulations:

And here is a pragmatic answer to Google’s paid links requirements: Cloak the hell out of your links with commercial intent!




BlogRush amoebas ban high quality blogs in favor of crap

Whilst blogs like The Tampon Blog are considered “high quality” by clueless amoebas hired by BlogRush, many great blogs like Tamar’s were banned by the Reeve gang.

In my book that qualifies BlogRush as a full-blown scam. If it's not a scam, it's at the very least an amateurish operation intended to hoodwink bloggers. Hiring low-life surfers for 12 bucks per hour to judge the quality of blogs talking about topics the average assclown on BlogRush's payroll cannot understand is ridiculous, if not a sign of criminal intent. Here is how they hire their amoebas:

We’re looking to hire a bunch of people that would like to earn some extra cash. If you or someone you know might be interested, please forward this message to them. This would be perfect for a stay-at-home mom, college student, or anyone else looking to make some extra money.

All that’s required is sitting in front of their computer and doing the following…

Login to our Review System with an account we will setup for them. There will be a top “frame” control strip that has a few buttons:

“Approve” “Reject” and “Not Sure.”

The bottom frame will automatically load a blog that needs to be reviewed. After reviewing the blog, just press the appropriate button. That’s it.

* We have created a little training video to teach reviewers what to look for and how to decide what gets approved or rejected. It’s very simple.

After pushing one of the buttons the next blog to be reviewed automatically loads in that bottom frame. It’s as simple as that.

Here’s The Deal…

We’re paying USD $12.00/hour for this review work. It’s not a fortune, but it’s a pretty simple task. Heck, just put on some music and sit back and review some blogs. Pretty easy work. :-)

I’m not pissed because they rejected me and lots of other great blogs. I’m not even pissed because they sent emails like

Congratulations! You are receiving this update because your blog has passed our strict Quality Guidelines and criteria — we believe you have a high-quality blog and we are happy you’re a member of our network!

to blogs which didn’t even bother to put up their crappy widget. I’m pissed because they constantly lie and cheat:

We’ve just completed a massive SWEEP of our entire network. We’ve removed over *10,000* blogs (Yes, ten thousand) that did not meet our new Quality Guidelines.

We have done a huge "quality control audit" of our network and have reviewed all the blogs one-at-a-time. We will continue to review each NEW blog that is ever submitted to our network.

You will notice the HUGE DIFFERENCE in the quality of blogs that now appear in your widget. This major *sweep* of our network will also increase the click-rates across the entire network and you will start to receive more traffic.

They still do not send much (if any) traffic to niche blogs, they still get cheated, and they still have tons of crap in their network. They still overpromise and underdeliver. There's no such thing as a "massive amount of targeted traffic" sent by BlogRush.

The whole BlogRush operation is a scam. Avoid BlogRush like the plague.

Update: Here is one of John Reeve's lame excuses, posted in reply to a "reviewed and dumped by BlogRush idiots" post on John Cow's blog. A laughable pile of bullcrap, politely put.

John Reese from BlogRush here.

I am not sure why your blog wasn’t approved by the reviewer that reviewed your blog. (We have a team of reviewers.) From what I can tell, your blog passes our guidelines. I’m not sure if the reviewer loaded your blog on a day where your primary post(s) were heavy on the promotional side or not — that’s just a guess of what might have influenced them.

You have my email address from this comment. Please contact me directly (if you wish) and I will investigate the issue for you and see about reactivating your account.

AND FOR THE RECORD…

No one is being BANNED from BlogRush. If any account doesn’t have any approved blogs, the account is moved to an “inactive” status until changes are made or until another blog that meets our guidelines gets approved. Nothing happens to referrals or an account’s referral network; they are left completely intact and as soon as the account is “active” again everything returns to the way it was.

* I just found out that your pingback message was deleted by one of our blog moderators because we don’t want any comments (or pingbacks) showing up for that main post. A few childish users started posting profanity and other garbage that was getting past our filters and we needed to shut it off for now.

There’s no “conspiracy theory” happening. In fact, we’ve been incredibly transparent and honest ever since we launched — openly admitting to mistakes that we’ve made and what we planned to do about them.

~John




The anatomy of a server sided redirect: 301, 302 and 307 illuminated SEO wise

We find redirects on every Web site out there. They're often performed unnoticed in the background, unintentionally messed up, implemented with a great deal of ignorance, and seldom perfect from an SEO perspective. Unfortunately, the Webmaster boards are flooded with contradictory, misleading and plain false advice on redirects. If you for example read "for SEO purposes you must make use of 301 redirects only", then better close the browser window/tab to protect yourself from crappy advice. A 302 or 307 redirect can be search engine friendly too.

With this post I do plan to bore you to death. So lean back, grab some popcorn, and stay tuned for a longish piece explaining the Interweb’s forwarding requests as dull as dust. Or, if you know everything about redirects, then please digg, sphinn and stumble this post before you surf away. Thanks.

Redirects are defined in the HTTP protocol, not in search engine guidelines

For the moment please forget everything you’ve heard about redirects and their SEO implications, clear your mind, and follow me to the very basics defined in the HTTP protocol. Of course search engines interpret some redirects in a non-standard way, but understanding the norm as well as its use and abuse is necessary to deal with server sided redirects. I don’t bother with outdated HTTP 1.0 stuff, although some search engines still apply it every once in a while, hence I’ll discuss the 307 redirect introduced in HTTP 1.1 too. For information on client sided redirects please refer to Meta Refresh - the poor man’s 301 redirect or read my other pamphlets on redirects, and stay away from JavaScript URL manipulations.

What is a server sided redirect?

Think about an HTTP redirect as a forwarding request. Although redirects work slightly differently from snail mail forwarding requests, this analogy perfectly fits the procedure. Whilst with US Mail forwarding requests a clerk or postman writes the new address on the envelope before it bounces in front of a no longer valid or temporarily abandoned letter-box or pigeon hole, on the Web the request's location (that is, the Web server responding to the server name part of the URL) provides the requestor with the new location (an absolute URL).

A server sided redirect tells the user agent (browser, Web robot, …) that it has to perform another request for the URL given in the HTTP header’s “location” line in order to fetch the requested contents. The type of the redirect (301, 302 or 307) also instructs the user agent how to perform future requests of the Web resource. Because search engine crawlers/indexers try to emulate human traffic with their content requests, it’s important to choose the right redirect type both for humans and robots. That does not mean that a 301-redirect is always the best choice, and it certainly does not mean that you always must return the same HTTP response code to crawlers and browsers. More on that later.

Execution of server sided redirects

Server sided redirects are executed before your server delivers any content. In other words, your server ignores everything it could deliver (be it a static HTML file, a script output, an image or whatever) when it runs into a redirect condition. Some redirects are done by the server itself (see handling incomplete URIs), and there are several places where you can set (conditional) redirect directives: Apache’s httpd.conf, .htaccess, or in application layers for example in PHP scripts. (If you suffer from IIS/ASP maladies, this post is for you.) Examples:

Browser request ww.site.com/page.php?id=1: Apache (httpd.conf) answers with a 301 header, Location: www.site.com/page.php?id=1.

Browser request site.com/page.php?id=1: .htaccess answers with a 301 header, Location: www.site.com/page.php?id=1.

Browser request www.site.com/page.php?id=1: /page.php answers with a 301 header, Location: www.site.com/page.php?id=2.

Browser request www.site.com/page.php?id=2: the server answers with a 200 header (info like content length etc.), followed by the content: Article #2.

The 301 header may or may not be followed by a hyperlink pointing to the new location, solely added for user agents which can’t handle redirects. Besides that link, there’s no content sent to the client after the redirect header.

More importantly, you must not send a single byte to the client before the HTTP header. If you for example code [space(s)|tab|new-line|HTML code]<?php ... in a script that shall perform a redirect, or is supposed to return a 404 header (or any HTTP header different from the server's default instructions), you'll produce a runtime error. The redirection fails, leaving the visitor with an ugly page full of cryptic error messages but no link to the new location.

That means in each and every page or script which possibly has to deal with the HTTP header, put the logic testing those conditions at the very top. Always send the header status code and optional further information like a new location to the client before you process the contents.

After the last redirect header line, terminate execution: with the "L" parameter in .htaccess, with PHP's exit; statement, or whatever your environment provides.
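In PHP terms the pattern looks like this (a minimal sketch; the old path and the new URL are made up): the check sits at the very top of the script, the header goes out before any output, and exit; stops further processing:

<?php
// Minimal sketch: the redirect check runs before ANY output is sent.
// The old path and the new location are made-up examples.
if ($_SERVER["REQUEST_URI"] == "/old-page.php") {
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: http://www.example.com/new-page/");
    exit; // nothing below this line is executed or sent
}
// ... regular page processing and output start here ...
?>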

What is an HTTP redirect header?

An HTTP redirect, regardless of its type, consists of two lines in the HTTP header. In this example I've requested http://www.sebastians-pamphlets.com/about/, which is an invalid URI because my canonical server name lacks the www-thingy, hence my canonicalization routine outputs this HTTP header:
HTTP/1.1 301 Moved Permanently
Date: Mon, 01 Oct 2007 17:45:55 GMT
Server: Apache/1.3.37 (Unix) PHP/4.4.4
Location: http://sebastians-pamphlets.com/about/
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=iso-8859-1

The redirect response code in an HTTP status line

The first line of the header defines the protocol version and the response code, and provides a human-readable reason phrase. Here is a shortened and slightly modified excerpt quoted from the HTTP/1.1 protocol definition:

Status-Line

The first line of a Response message is the Status-Line, consisting of the protocol version followed by a numeric status code and its associated textual phrase, with each element separated by SP (space) characters. No CR or LF is allowed except in the final CRLF sequence.

Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF
[e.g. “HTTP/1.1 301 Moved Permanently” + CRLF]

Status Code and Reason Phrase

The Status-Code element is a 3-digit integer result code of the attempt to understand and satisfy the request. […] The Reason-Phrase is intended to give a short textual description of the Status-Code. The Status-Code is intended for use by automata and the Reason-Phrase is intended for the human user. The client is not required to examine or display the Reason-Phrase.

The first digit of the Status-Code defines the class of response. The last two digits do not have any categorization role. […]:
[…]
- 3xx: Redirection - Further action must be taken in order to complete the request
[…]

The individual values of the numeric status codes defined for HTTP/1.1, and an example set of corresponding Reason-Phrases, are presented below. The reason phrases listed here are only recommendations — they MAY be replaced by local equivalents without affecting the protocol [that means you could translate and/or rephrase them].
[…]
300: Multiple Choices
301: Moved Permanently
302: Found [Elsewhere]
303: See Other
304: Not Modified
305: Use Proxy
306: (Unused)
307: Temporary Redirect
[…]

In terms of SEO, understanding 301/302 redirects is important. 307 redirects, introduced with HTTP/1.1, are still capable of confusing some search engines, even major players like Google, when Ms. Googlebot for some reason thinks she must do HTTP/1.0 requests, usually caused by weird or ancient server configurations (or possibly by testing newly discovered sites under certain circumstances). You should not perform 307 redirects as a response to most HTTP/1.0 requests; use 302/301 –whatever fits best– instead. More info on this issue below in the 302/307 sections.

Please note that the default response code of all redirects is 302. That means when you send an HTTP header with a location directive but without an explicit response code, your server will return a 302-Found status line. That’s kinda crappy, because in most cases you want to avoid the 302 code like the plague. Do no nay never rely on default response codes! Always prepare a server sided redirect with a status line telling an actual response code (301, 302 or 307)! In server sided scripts (PHP, Perl, ColdFusion, JSP/Java, ASP/VB-Script…) always send a complete status line, and in .htaccess or httpd.conf add a [R=301|302|307,L] parameter to statements like RewriteRule:
RewriteRule (.*) http://www.site.com/$1 [R=301,L]

The redirect header’s “location” field

The next element you need in every redirect header is the location directive. Here is the official syntax:

Location

The Location response-header field is used to redirect the recipient to a location other than the Request-URI for completion of the request or identification of a new resource. […] For 3xx responses, the location SHOULD indicate the server’s preferred URI for automatic redirection to the resource. The field value consists of a single absolute URI.

Location = “Location” “:” absoluteURI [+ CRLF]

An example is:

Location: http://sebastians-pamphlets.com/about/

Please note that the value of the location field must be an absolute URL, that is a fully qualified URL with scheme (http|https), server name (domain|subdomain), and path (directory/file name), plus the optional query string (”?” followed by variable/value pairs like ?id=1&page=2...), no longer than 2047 bytes (better 255 bytes, because many scripts out there don’t process longer URLs for historical reasons). A relative URL like ../page.php might work in (X)HTML (although you better plan a spectacular suicide than any use of relative URIs!), but you must not use relative URLs in HTTP response headers!
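
If a script only knows the relative path of the target, build the fully qualified URL yourself before you send the location field. A minimal PHP sketch, assuming plain HTTP and a sane Host header:
<?php
// Turn a root-relative path into a fully qualified URL for the location field.
function absoluteUrl($path) {
    $scheme = (isset($_SERVER["HTTPS"]) && $_SERVER["HTTPS"] == "on") ? "https" : "http";
    return $scheme . "://" . $_SERVER["HTTP_HOST"] . $path;
}
@header("HTTP/1.1 301 Moved Permanently", TRUE, 301);
@header("Location: " . absoluteUrl("/page.php?id=2")); // never "../page.php"!
exit;
?>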

How to implement a server sided redirect?

You can perform HTTP redirects with statements in your Web server’s configuration, and in server sided scripts, e.g. PHP or Perl. JavaScript is a client sided language and therefore lacks a mechanism to do HTTP redirects. That means search engines count all JS redirects as a 302-Found response.

Bear in mind that when you redirect, you possibly leave tracks of outdated structures in your HTML code, not to speak of incoming links. You must change each and every internal link to the new location, as well as all external links you control or where you can ask for an URL update. If you leave any outdated links, visitors probably won’t spot it (although every redirect slows things down), but search engine spiders continue to follow them, which eventually ends in redirect chains. Chained redirects often are the cause of deindexing pages, site areas or even complete sites by search engines, hence do no more than one redirect in a row and consider two redirects in a row risky. You don’t control offsite redirects; in some cases a search engine has already counted one or two redirects before it requests your redirecting URL (caused by redirecting traffic counters etcetera). Always redirect to the final destination to avoid useless hops which kill your search engine traffic. (Google recommends “that you use fewer than five redirects for each request”, but don’t try to max out such limits because other services might be less BS-tolerant.)

Unlike conventional forwarding requests, redirects never really expire. Even a permanent 301-redirect’s source URL will be requested by search engines every now and then, because they can’t trust you. As long as there is one single link pointing to an outdated and redirecting URL out there, it’s not forgotten. It will stay alive in search engine indexes and address books of crawling engines even when the last link pointing to it was changed or removed. You can’t control that, and you can’t find all inbound links a search engine knows, despite their better reporting nowadays (neither Yahoo’s site explorer nor Google’s link stats show you all links!). That means you must maintain your redirects forever, and you must not remove (permanent) redirects. Maintenance of redirects includes hosting abandoned domains, and updates of location directives whenever you change the final structure. With each and every revamp that comes with URL changes, check for incoming redirects and make sure that you eliminate unnecessary hops.

Often you’ve many choices where and how to implement a particular redirect. You can do it in scripts and even static HTML files, CMS software, or in the server configuration. There’s no such thing as a general best practice, just a few hints to bear in mind.

  • Doubt: Redirects are dynamite, so blast carefully - don’t believe Web designers and developers when they say that a particular task can’t be done without redirects. Do your own research, or ask an SEO expert. When you for example plan to make a static site dynamic by pulling the contents from a database with PHP scripts, you don’t need to change your file extensions from *.html to *.php. Apache can parse .html files for PHP, just enable that in your root’s .htaccess:
    AddType application/x-httpd-php .html .htm .shtml .txt .rss .xml .css

    Then generate tiny PHP scripts calling the CMS to replace the outdated .html files. That’s not perfect but way better than URL changes, provided your developers can manage the outdated links in the CMS’ navigation. Another pretty popular abuse of redirects is click tracking. You don’t need a redirect script to count clicks in your database, make use of the onclick event instead.
  • Transparency: When the shit hits the fan and you need to track down a redirect with not more than the HTTP header’s information in your hands, you’ll begin to believe that performance and elegant coding is not everything. Reading and understanding a large httpd.conf file, several complex .htaccess files, and searching redirect routines in a conglomerate of a couple generations of scripts and include files is not exactly fun. You could add a custom field identifying the piece of redirecting code to the HTTP header. In .htaccess that would be achieved with
    Header add X-Redirect-Src "/content/img/.htaccess"

    and in PHP with
    header("X-Redirect-Src: /scripts/inc/header.php", TRUE);

    (Whether or not you should encode or at least obfuscate code locations in headers depends on your security requirements.)
  • Encapsulation: When you must implement redirects in more than one script or include file, then encapsulate all redirects including all the logic (redirect conditions, determining new locations, …). You can do that in an include file with a meaningful file name for example. Also, instead of plastering the root’s .htaccess file with tons of directory/file specific redirect statements, you can gather all requests for redirect candidates and call a script which tests the REQUEST_URI to execute the suitable redirect. In .htaccess put something like:
    RewriteEngine On
    RewriteBase /old-stuff
    RewriteRule ^(.*)\.html$ do-redirects.php

    This code calls /old-stuff/do-redirects.php for each request of an .html file in /old-stuff/. The PHP script:
    $location = "";
    $requestUri = $_SERVER["REQUEST_URI"];
    if (stristr($requestUri, "/contact.html")) {
        $location = "http://example.com/new-stuff/contact.htm";
    }
    ...
    if ($location) {
        @header("HTTP/1.1 301 Moved Permanently", TRUE, 301);
        @header("X-Redirect-Src: /old-stuff/do-redirects.php", TRUE);
        @header("Location: $location");
        exit;
    }
    else {
        [output the requested file or whatever]
    }

    (This is also an example of a redirect include file which you could insert at the top of a header.php include or so. In fact, you can include this script in some files and call it from .htaccess without modifications.) This method will not work with ASP on IIS because amateurish wannabe Web servers don’t provide the REQUEST_URI variable.
  • Documentation: When you design or update an information architecture, your documentation should contain a redirect chapter. Also comment all redirects in the source code (your genial regular expressions might lack readability when someone else looks at your code). It’s a good idea to have a documentation file explaining all redirects on the Web server (you might work with other developers when you change your site’s underlying technology in a few years).
  • Maintenance: Debugging legacy code is a nightmare. And yes, what you write today becomes legacy code in a few years. Thus keep it simple and stupid, implement redirects transparent rather than elegant, and don’t forget that you must change your ancient redirects when you revamp a site area which is the target of redirects.
  • Performance: Even when performance is an issue, you can’t do everything in httpd.conf. When you for example move a large site changing the URL structure, the redirect logic becomes too complex in most cases. You can’t do database lookups and stuff like that in server configuration files. However, some redirects like for example server name canonicalization should be performed there, because they’re simple and not likely to change. If you can’t change httpd.conf, .htaccess files are for you. They’re slower than cached config files but still faster than application scripts.

Redirects in server configuration files

Here is an example of a canonicalization redirect in the root’s .htaccess file:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^sebastians-pamphlets\.com [NC]
RewriteRule (.*) http://sebastians-pamphlets.com/$1 [R=301,L]

  1. The first line enables Apache’s mod_rewrite module. Make sure it’s available on your box before you copy, paste and modify the code above.
  2. The second line checks the server name in the HTTP request header (received from a browser, robot, …). The “NC” parameter ensures that the test of the server name (which is, like the scheme part of the URI, not case sensitive by definition) is done as intended. Without this parameter a request of http://SEBASTIANS-PAMPHLETS.COM/ would run into an unnecessary redirect. The rewrite condition returns TRUE when the server name is not sebastians-pamphlets.com. There’s an important detail: the negating “!”.

    Most Webmasters do it the other way round. They check whether the server name equals an unwanted server name, for example with RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]. That’s not exactly efficient, and it’s fault-prone. It’s not efficient because one needs to add a rewrite condition for each and every server name a user could type in and the Web server would respond to. On most machines that’s a huge list like “w.example.com, ww.example.com, w-w-w.example.com, …” because the default server configuration catches all not explicitly defined subdomains.

    Of course next to nobody puts that many rewrite conditions into the .htaccess file, hence this method is fault-prone and not suitable to fix canonicalization issues. In combination with thoughtless usage of relative links (bullcrap that most designers and developers love out of laziness and lack of creativity or at least fantasy), one single link to an existing page on a non-existing subdomain not redirected in such an .htaccess file could result in search engines crawling and possibly even indexing a complete site under the unwanted server name. When a savvy competitor spots this exploit you can say good bye to a fair amount of your search engine traffic.

    Another advantage of my single line of code is that you can point all domains you’ve registered to catch type-in traffic or whatever to the same Web space. Every new domain runs into the canonicalization redirect, 100% error-free.

  3. The third line performs the 301 redirect to the requested URI using the canonical server name. That means when the request URI was http://www.sebastians-pamphlets.com/about/, the user agent gets redirected to http://sebastians-pamphlets.com/about/. The “R” parameter sets the response code, and the “L” parameter makes this the last rule applied (=exit), that is the statements following the redirect execution, like other rewrite rules and such stuff, will not be parsed.

If you’ve access to your server’s httpd.conf file (which most hosting services don’t allow), then better do such redirects there. The reason for this recommendation is that Apache must look for .htaccess directives in the current directory and all its upper levels for each and every requested file. If the request is for a page with lots of embedded images or other objects, that sums up to hundreds of hard disk accesses slowing down the page loading time. The server configuration on the other hand is cached and therefore way faster. Learn more about .htaccess disadvantages. However, since most Webmasters can’t modify their server configuration, I provide .htaccess examples only. If you can, then you know how to put it in httpd.conf. ;)

Redirecting directories and files with .htaccess

When you need to redirect chunks of static pages to another location, the easiest way to do that is Apache’s redirect directive. The basic syntax is Redirect [301|302|307] Path URL, e.g. Redirect 307 /blog/feed http://feedburner.com/myfeed or Redirect 301 /contact.htm /blog/contact/. Path is always a file system path relative to the Web space’s root. URL is either a fully qualified URL (on another machine) like http://feedburner.com/myfeed, or a relative URL on the same server like /blog/contact/ (Apache adds scheme and server in this case, so that the HTTP header is built with an absolute URL in the location field; however, omitting the scheme+server part of the target URL is not recommended, see the warning below).

When you for example want to consolidate a blog on its own subdomain and a corporate Web site at example.com, then put
Redirect 301 / http://example.com/blog

in the .htaccess file of blog.example.com. When you then request http://blog.example.com/category/post.html you’re redirected to http://example.com/blog/category/post.html.

Say you’ve moved your product pages from /products/*.htm to /shop/products/*.htm then put
Redirect 301 /products http://example.com/shop/products

Omit the trailing slashes when you redirect directories. To redirect particular files on the other hand you must fully qualify the locations:
Redirect 302 /misc/contact.html http://example.com/cms/contact.php

or, when the new location resides on the same server:
Redirect 301 /misc/contact.html /cms/contact.php

Warning: Although Apache allows local redirects like Redirect 301 /misc/contact.html /cms/contact.php, with some server configurations this will result in 500 server errors on all requests. Therefore I recommend the use of fully qualified URLs as redirect target, e.g. Redirect 301 /misc/contact.html http://example.com/cms/contact.php!

Maybe you found a reliable and unbeatably cheap hosting service to host your images. Copy all image files from example.com to image-example.com and keep the directory structures as well as all file names. Then add to example.com’s .htaccess
RedirectMatch 301 (.*)\.([Gg][Ii][Ff]|[Pp][Nn][Gg]|[Jj][Pp][Gg])$ http://www.image-example.com$1.$2

The regex should match e.g. /img/nav/arrow-left.png so that the user agent is forced to request http://www.image-example.com/img/nav/arrow-left.png. Say you’ve converted your GIFs and JPGs to the PNG format during this move, simply change the redirect statement to
RedirectMatch 301 (.*)\.([Gg][Ii][Ff]|[Pp][Nn][Gg]|[Jj][Pp][Gg])$ http://www.image-example.com$1.png

With regular expressions and RedirectMatch you can perform very creative redirects.

Please note that the response codes used in the code examples above most probably do not fit the type of redirect you’d do in real life with similar scenarios. I’ll discuss use cases for all redirect response codes (301|302|307) later on.

Redirects in server sided scripts

You can do HTTP redirects only with server sided programming languages like PHP, ASP, Perl etcetera. Scripts in those languages generate the output before anything is sent to the user agent. It should be a no-brainer, but these PHP examples don’t count as server sided redirects:
print "<META HTTP-EQUIV=Refresh CONTENT="0; URL=http://example.com/">\n";
print "<script type="text/javascript">window.location = "http://example.com/";</script>\n";

Just because you can output a redirect with a server sided language that does not make the redirect an HTTP redirect. ;)

In PHP you perform HTTP redirects with the header() function:
$newLocation = "http://example.com/";
@header("HTTP/1.1 301 Moved Permanently", TRUE, 301);
@header("Location: $newLocation");
exit;

The first input parameter of header() is the complete header line, in the first line of code above that’s the status-line. The second parameter tells whether a previously sent header line shall be replaced (default behavior) or not. The third parameter sets the HTTP status code; don’t use it more than once. If you use an ancient PHP version (prior to 4.3.0) you can’t use the 2nd and 3rd input parameters. The “@” suppresses PHP warnings and error messages.

With ColdFusion you code
<CFHEADER statuscode="307" statustext="Temporary Redirect">
<CFHEADER name="Location" value="http://example.com/">

A redirecting Perl script begins with
#!/usr/bin/perl -w
use strict;
print "Status: 302 Found Elsewhere\r\n", "Location: http://example.com/\r\n\r\n";
exit;

Even with ASP you can do server sided redirects. VBScript:
Dim newLocation
newLocation = "http://example.com/"
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", newLocation
Response.End

JScript:
function RedirectPermanent(newLocation) {
Response.Clear();
Response.Status = 301;
Response.AddHeader("Location", newLocation);
Response.Flush();
Response.End();
}
...
Response.Buffer = true;
...
RedirectPermanent ("http://example.com/");

Again, if you suffer from IIS/ASP maladies: here you go.

Remember: Don’t output anything before the redirect header, and nothing after the redirect header!

Redirects done by the Web server itself

When you read your raw server logs, you’ll find a few 302 and/or 301 redirects Apache has performed without an explicit redirect statement in the server configuration, .htaccess, or a script. Most of these automatic redirects are the result of a very popular bullshit practice: removing trailing slashes. Although the standard defines that an URI like /directory is not a file name by default, and it therefore equals /directory/ if there’s no file named /directory, choosing the version without the trailing slash is lazy at least, and creates lots of trouble (404s in some cases, otherwise external redirects, but always duplicate content issues you should fix with URL canonicalization routines).

For example Yahoo is a big fan of truncated URLs. They might save a few terabytes in their indexes by storing URLs without the trailing slash, but they send every user’s browser twice to those locations. Web servers must do a 302 or 301 redirect on each Yahoo-referrer requesting a directory or pseudo-directory, because they can’t serve the default document of an omitted path segment (the path component of an URI begins with a slash, the slash is its segment delimiter, and a trailing slash stands for the last (or only) segment representing a default document like index.html). From the Web server’s perspective /directory does not equal /directory/; only /directory/ addresses /directory/index.(htm|html|shtml|php|...), whereby the file name of the default document must be omitted (among other things to preserve the URL structure when the underlying technology changes). Also, the requested URI without its trailing slash may address a file or on-the-fly output (if you make use of mod_rewrite to mask ugly URLs, you’d better test what happens with screwed URIs of yours).
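
A minimal PHP sketch of such a trailing slash canonicalization routine - it assumes that every extensionless path on your server is a directory-like (pseudo-)directory, which may not hold for your setup:
<?php
// 301-redirect requests lacking the canonical trailing slash.
$parts = parse_url($_SERVER["REQUEST_URI"]);
$path = $parts["path"];
$query = isset($parts["query"]) ? "?" . $parts["query"] : "";
if (substr($path, -1) != "/" && !preg_match('/\.[a-zA-Z0-9]+$/', $path)) {
    @header("HTTP/1.1 301 Moved Permanently", TRUE, 301);
    @header("Location: http://" . $_SERVER["HTTP_HOST"] . $path . "/" . $query);
    exit;
}
?>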

Yahoo wastes even their own resources. Their crawler persistently requests the shortened URL, which bounces with a redirect to the canonical URL. Here is an example from my raw logs:
74.6.20.165 - - [05/Oct/2007:01:13:04 -0400] "GET /directory HTTP/1.0" 301 26 "-" "Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)"
74.6.20.165 - - [05/Oct/2007:01:13:06 -0400] "GET /directory/ HTTP/1.0" 200 8642 "-" "Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)"
[I’ve replaced a rather long path with “directory”]

If you persistently redirect Yahoo to the canonical URLs (with trailing slash), they’ll use your canonical URLs on the SERPs eventually (but their crawler still requests Yahoo-generated crap). Having many good inbound links as well as clean internal links –all with the trailing slash– helps too, but is not a guarantee for canonical URL normalization at Yahoo.

Here is an example. This URL responds with 200-OK, regardless whether it’s requested with or without the canonical trailing slash:
http://www.jlh-design.com/2007/06/im-confused/
(That’s the default (mis)behavior of everybody’s darling with permalinks by the way. Here is some PHP canonicalization code to fix this flaw.) All internal links use the canonical URL. I didn’t find a serious inbound link pointing to a truncated version of this URL. Yahoo’s Site Explorer lists the URL without the trailing slash: […]/im-confused, and the same happens on Yahoo’s SERPs: […]/im-confused. Even when a server responds 200-OK to two different URLs, a serious search engine should normalize according to the internal links as well as an entry in the XML sitemap, therefore choose the URL with the trailing slash as canonical URL.

Fucking up links on search result pages is evil enough, although fortunately this crap doesn’t influence discovery crawling directly because those aren’t crawled by other search engines (but scraped or syndicated search results are crawlable). Actually, that’s not the whole horror story. Other Yahoo properties remove the trailing slashes from directory and home page links too (look at the “What Readers Viewed” column in your MBL stats for example), and some of those services provide crawlable pages carrying invalid links (pulled from the search index or screwed otherwise). That means other search engines pick those incomplete URLs from Yahoo’s pages (or other pages with links copied from Yahoo pages), crawl them, and end up with search indexes blown up with duplicate content. Maybe Yahoo does all that only to burn Google’s resources by keeping their canonicalization routines and duplicate content filters busy, but it’s not exactly gentlemanlike that such cat fights affect all Webmasters across the globe. Yahoo directly as well as indirectly burns our resources with unnecessary requests of screwed URLs, and we must implement sanitizing redirects for software like WordPress –which doesn’t care enough about URL canonicalization–, just because Yahoo manipulates our URLs to peeve Google. Doh!

If somebody from Yahoo (or MSN, or any other site manipulating URLs this way) reads my rant, I highly recommend this quote from Tim Berners-Lee (January 2005):

Scheme-Based Normalization
[…] the following […] URIs are equivalent:
http://example.com
http://example.com/
In general, an URI that uses the generic syntax for authority with an empty path should be normalized to a path of “/”.
[…]
Normalization should not remove delimiters [”/” or “?”] when their associated component is empty unless licensed to do so by the scheme specification. [emphasis mine]

In my book sentences like “Note that the absolute path cannot be empty; if none is present in the original URI, it MUST be given as ‘/’ […]” in the HTTP specification as well as Section 3.3 of the URI’s Path Segment specs do not sound like a licence to screw URLs. Omitting the path segment delimiter “/” representing an empty last path segment might sound legal if the specs are interpreted without applying common sense, but knowing that Web servers can’t respond to requests of those incomplete URIs and nevertheless truncating trailing slashes is a brain dead approach (actually, such crap deserves a couple unprintable adjectives).

Frequently scanning the raw logs for 302/301 redirects is a good idea. Also, implement documented canonicalization redirects when a piece of software responds to different versions of URLs. It’s the Webmaster’s responsibility to ensure that each piece of content is available under one and only one URL. You cannot rely on any search engine’s URL canonicalization, because shit happens, even with highly sophisticated algos:

When search engines crawl identical content through varied URLs, there may be several negative effects:

1. Having multiple URLs can dilute link popularity. For example, in the diagram above [example in Google’s blog post], rather than 50 links to your intended display URL, the 50 links may be divided three ways among the three distinct URLs.

2. Search results may display user-unfriendly URLs […]

Redirect or not? A few use cases.

Before I blather about the three redirect response codes you can choose from, I’d like to talk about a few situations where you shall not redirect, and cases where you probably don’t redirect but should do so.

Unfortunately, it’s a common practice to replace various sorts of clean links with redirects. Whilst legions of Webmasters don’t obfuscate their affiliate links, they hide their valuable outgoing links in fear of PageRank leaks and other myths, or react to search engine FUD with castrated links.

With very few exceptions, the A Element a.k.a. Hyperlink is the best method to transport link juice (PageRank, topical relevancy, trust, reputation …) as well as human traffic. Don’t abuse my beloved A Element:
<a onclick="window.location = 'http://example.com/'; return false;" title="http://example.com">bad example</a>

Such a “link” will transport some visitors, but does not work when JavaScript is disabled or the user agent is a Web robot. This “link” is not an iota better:
<a href="http://example.com/blocked-directory/redirect.php?url=http://another-example.com/" title="Another bad example">example</a>

Simplicity pays. You don’t need the complexity of HREF values changed to ugly URLs of redirect scripts with parameters, located in an uncrawlable path, just because you don’t want search engines to count the links. Not to speak of cases where redirecting links is unfair or even risky, for example click tracking scripts which do a redirect.

  • If you need to track outgoing traffic, then by all means do it in a search engine friendly way with clean URLs which benefit the link destination and don’t do you any harm, here is a proven method.
  • If you really can’t vouch for a link, for example because you link out to a so called bad neighborhood (whatever that means), or to a link broker, or to someone who paid for the link and Google can detect it or a competitor can turn you in, then add rel=”nofollow” to the link. Yeah, rel-nofollow is crap … but it’s there, it works, we won’t get something better, and it’s less complex than redirects, so just apply it to your fishy links as well as to unmoderated user input.
  • If you decide that an outgoing link adds value for your visitors, and you personally think that the linked page is a great resource, then almost certainly search engines will endorse the link (regardless whether it shows a toolbar PR or not). There’s way too much FUD and crappy advice out there.
  • You really don’t lose PageRank when you link out. Honestly gained PageRank sticks to your pages. You only lower the amount of PageRank you can pass to your internal links a little. That’s not a bad thing, because linking out to great stuff can bring in more PageRank in the form of natural inbound links (there are other advantages too). Also, Google dislikes PageRank hoarding and the unnatural link patterns you create with practices like that.
  • Every redirect slows things down, and chances are that a user agent messes with the redirect, which can result in rendering nothing, scrambled stuff, or something completely unrelated. I admit that’s not a very common problem, but it happens with some outdated though still used browsers. Avoid redirects where you can.

In some cases you should perform redirects for sheer search engine compliance, in other words selfish SEO purposes. For example don’t let search engines handle your affiliate links.

  • If you operate an affiliate program, then internally redirect all incoming affiliate links to consolidate your landing page URLs. Although incoming affiliate links don’t bring much link juice, every little helps when it lands on a page which doesn’t credit search engine traffic to an affiliate.
  • Search engines are pretty smart when it comes to identifying affiliate links. (Thin) affiliate sites suffer from decreasing search engine traffic. Fortunately, the engines respect robots.txt, that means they usually don’t follow links via blocked subdirectories. When you link to your merchants within the content, using URLs that don’t smell like affiliate links, it’s harder to detect the intention of those links algorithmically. Of course that doesn’t prevent you from smart algos trained to spot other patterns, and this method will not pass reviews by humans, but it’s worth a try.
  • If you’ve pages which change their contents often by featuring for example a product of the day, you might have a redirect candidate. Instead of duplicating a daily changing product page, you can do a dynamic soft redirect to the product pages. Whether a 302 or a 307 redirect is the best choice depends on the individual circumstances. However, you can promote the hell out of the redirecting page, so that it gains all the search engine love without passing on PageRank etc. to product pages which phase out after a while. (If the product page is hosted by the merchant you must use a 307 response code. Otherwise make sure the 302′ing URL is listed in your XML sitemap with a high priority. If you can, send a 302 with most HTTP/1.0 requests, and a 307 responding to HTTP/1.1 requests. See the 302/307 sections for more information.)
  • If an URL comes with a session-ID or another tracking variable in its query string, you must 301-redirect search engine crawlers to an URI without such randomly generated noise. There’s no need to redirect a human visitor, but search engines hate tracking variables, so just don’t let them fetch such URLs (see the sketch after this list).
  • There are other use cases involving creative redirects which I’m not willing to discuss here.

Of course both lists above aren’t complete.
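
Here is a minimal PHP sketch of the session-ID point in the list above - the variable name “sid” and the crude user agent sniffing are just assumptions for this example; in real life you’d verify crawler IPs as well:
<?php
// 301 crawlers requesting an URL with a tracking variable to the clean URI.
$ua = isset($_SERVER["HTTP_USER_AGENT"]) ? $_SERVER["HTTP_USER_AGENT"] : "";
$isCrawler = preg_match("/Googlebot|Slurp|msnbot|Teoma/i", $ua);
if ($isCrawler && isset($_GET["sid"])) {
    $params = $_GET;
    unset($params["sid"]);
    $parts = parse_url($_SERVER["REQUEST_URI"]);
    $location = "http://" . $_SERVER["HTTP_HOST"] . $parts["path"];
    if (count($params)) {
        $location .= "?" . http_build_query($params);
    }
    @header("HTTP/1.1 301 Moved Permanently", TRUE, 301);
    @header("Location: $location");
    exit;
}
?>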

Choosing the best redirect response code (301, 302, or 307)

I’m sick of articles like “search engine friendly 301 redirects” propagating that only permanent redirects work with search engines. That’s a lie. I read those misleading headlines daily on the webmaster boards, in my feed reader, at Sphinn, and elsewhere … and I’m not amused. Lemmings. Amateurish copycats. Clueless plagiarists. [Insert a few lines of somewhat offensive language and swearing ;) ]

Of course most redirects out there return the wrong response code. That’s because the default HTTP response code for all redirects is 302, and many code monkeys forget to send a status-line providing the 301 Moved Permanently when an URL was actually moved or the requested URI is not the canonical URL. When a clueless coder or hosting service invokes a Location: http://example.com/ header statement without a previous HTTP/1.1 301 Moved Permanently status-line, the redirect becomes a soft 302 Found. That does not mean that 302 or 307 redirects aren’t search engine friendly at all. All HTTP redirects can be safely used with regard to search engines. The point is that one must choose the correct response code based on the actual circumstances and goals. Blindly 301′ing everything is counterproductive sometimes.

301 - Moved Permanently

The message of a 301 response code to the requestor is: “The requested URI has vanished. It’s gone forever and perhaps it never existed. I will never supply any contents under this URI (again). Request the URL given in location, and replace the outdated respectively wrong URL in your bookmarks/records by the new one for future requests. Don’t bother me again. Farewell.”

Let’s start with the definition of a 301 redirect quoted from the HTTP/1.1 specifications:

The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs [(1)]. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.

The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). […]

Read a polite “SHOULD” as “must”.

(1) Although technically you could provide more than one location, you must not do that because it irritates too many user agents, search engine crawlers included.

Make use of the 301 redirect when a requested Web resource was moved to another location, or when a user agent requests an URI which is definitely wrong and you’re able to tell the correct URI with no doubt. For URL canonicalization purposes (more info here) the 301 redirect is your one and only friend.

You must not recycle any 301′ing URLs, that means once an URL responds with 301 you must stick with it, you can’t reuse this URL for other purposes next year or so.

Also, you must maintain the 301 response and a location corresponding to the redirecting URL forever. That does not mean that the location can’t be changed. Say you’ve moved a contact page /contact.html to a CMS where it resides under /cms/contact.php. If a user agent requests /contact.html it does a 301 redirect pointing to /cms/contact.php. Two years later you change your software again, and the contact page moves to /blog/contact/. In this case you must change the initial redirect, and create a new one:
/contact.html 301-redirects to /blog/contact/, and
/cms/contact.php 301-redirects to /blog/contact/.
If you keep the initial redirect /contact.html to /cms/contact.php, and redirect /cms/contact.php to /blog/contact/, you create a redirect chain which can deindex your content at search engines. Well, two redirects before a crawler reaches the final URL shouldn’t be a big deal, but add a canonicalization redirect fixing a www vs. non-www issue to the chain, and imagine a crawler coming from a directory or links list which counts clicks with a redirect script: you’ve got four redirects in a row. That’s too much, most probably all search engines will not index such an unreliable Web resource.
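
One simple way to keep old addresses from chaining is a flat “outdated URL => final URL” map that always points at the current location - a sketch in the spirit of the do-redirects.php example above (paths and targets are made up):
<?php
// Every outdated URI 301s directly to the final destination - no hops in between.
$redirectMap = array(
    "/contact.html"    => "http://example.com/blog/contact/",
    "/cms/contact.php" => "http://example.com/blog/contact/",
);
$parts = parse_url($_SERVER["REQUEST_URI"]);
if (isset($redirectMap[$parts["path"]])) {
    @header("HTTP/1.1 301 Moved Permanently", TRUE, 301);
    @header("Location: " . $redirectMap[$parts["path"]]);
    exit;
}
?>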

301 redirects transfer search engine love like PageRank gathered by the redirecting URL to the new location, but the search engines keep the old URL in their indexes, and revisit it every now and then to check whether the 301 redirect is stable or not. If the redirect is gone on the next crawl, the new URL loses the reputation earned from the redirect’s inbound links. It’s impossible to get all inbound links changed, hence don’t delete redirects after a move.

It’s a good idea to check your 404 logs weekly or so, because search engine crawlers pick up malformed links from URL drops and such. Even when the link is invalid, for example because a crappy forum software has shortened the URL, it’s an asset you should not waste with a 404 or even 410 response. Find the best matching existing URL and do a 301 redirect.

Here is what Google says about 301 redirects:

[Source] 301 (Moved permanently) […] You should use this code to let Googlebot know that a page or site has permanently moved to a new location. […]

[Source …] If you’ve restructured your site, use 301 redirects (”RedirectPermanent”) in your .htaccess file to smartly redirect users, Googlebot, and other spiders. (In Apache, you can do this with an .htaccess file; in IIS, you can do this through the administrative console.) […]

[Source …] If your old URLs redirect to your new site using HTTP 301 (permanent) redirects, our crawler will discover the new URLs. […] Google listings are based in part on our ability to find you from links on other sites. To preserve your rank, you’ll want to tell others who link to you of your change of address. […]

[Source …] If your site [or page] is appearing as two different listings in our search results, we suggest consolidating these listings so we can more accurately determine your site’s [page’s] PageRank. The easiest way to do so [on site level] is to set the preferred domain using our webmaster tools. You can also redirect one version [page] to the other [canonical URL] using a 301 redirect. This should resolve the situation after our crawler discovers the change. […]

That’s exactly what the HTTP standard wants a search engine to do. Yahoo handles 301 redirects a little differently:

[Source …] When one web page redirects to another web page, Yahoo! Web Search sometimes indexes the page content under the URL of the entry or “source” page, and sometimes index it under the URL of the final, destination, or “target” page. […]

When a page in one domain redirects to a page in another domain, Yahoo! records the “target” URL. […]

When a top-level page [http://example.com/] in a domain presents a permanent redirect to a page deep within the same domain, Yahoo! indexes the “source” URL. […]

When a page deep within a domain presents a permanent redirect to a page deep within the same domain, Yahoo! indexes the “target” URL. […]

Because of mapping algorithms directing content extraction, Yahoo! Web Search is not always able to discard URLs that have been seen as 301s, so web servers might still see crawler traffic to the pages that have been permanently redirected. […]

As for the non-standard procedure to handle redirecting root index pages, that’s not a big deal, because in most cases a site owner promotes the top level page anyway. Actually, that’s a smart way to “break the rules” for the better. The far too frequent requests of permanently redirecting pages are more annoying.

Moving sites with 301 redirects

When you restructure a site, consolidate sites or separate sections, move to another domain, flee from a free host, or do other structural changes, then in theory you can install page by page 301 redirects and you’re done. Actually, that works but comes with disadvantages like a total loss of all search engine traffic for a while. The larger the site, the longer the while. With a large site highly dependent on SERP referrers this procedure can be the first phase of a filing-for-bankruptcy plan, because none of the search engines sends (much) traffic during the move.

Let’s look at the process from a search engine’s perspective. The crawling of old.com all of a sudden bounces at 301 redirects to new.com. None of the redirect targets is known to the search engine. The crawlers report back redirect responses and the new URLs as well. The indexers spotting the redirects block the redirecting URLs for the query engine, but can’t pass the properties (PageRank, contextual signals and so on) of the redirecting resources to the new URLs, because those aren’t crawled yet.

The crawl scheduler initiates the handshake with the newly discovered server to estimate its robustness, and most probably does a conservative guess of the crawl frequency this server can sustain. The queue of uncrawled URLs belonging to the new server grows way faster than the crawlers actually deliver the first contents fetched from the new server.

Each and every URL fetched from the old server vanishes from the SERPs in no time, whilst the new URLs aren’t crawled yet, or are still waiting for an idle indexer able to assign them the properties of the old URLs, doing heuristic checks on the stored contents from both URLs and whatnot.

Slowly, sometimes weeks after the beginning of the move, the first URLs from the new server populate the SERPs. They don’t rank very well, because the search engine has not yet discovered the new site’s structure and linkage completely, so that a couple of ranking factors stay temporarily unconsidered. Some of the new URLs may appear as URL-only listings, solely indexed based on off-page factors, hence lacking the ability to trigger search query relevance for their contents.

Many of the new URLs can’t regain their former PageRank in the first reindexing cycle, because without a complete survey of the “new” site’s linkage there’s only the PageRank from external inbound links passed by the redirects available (internal links no longer count for PageRank when the search engine discovers that the source of internally distributed PageRank does a redirect), so that they land in a secondary index.

Next, the suddenly lower PageRank results in a lower crawling frequency for the URLs in question. Also, the process removing redirecting URLs still runs way faster than the reindexing of moved contents from the new server. The more URLs are involved in a move, the longer the reindexing and reranking lasts. Replace Google’s very own PageRank with any term and you’ve a somewhat usable description of a site move handled by Yahoo, MSN, or Ask. There are only so many ways to handle such a challenge.

That’s a horror scenario, isn’t it? Well, at Google the recently changed infrastructure has greatly improved this process, and other search engines evolve too, but moves as well as significant structural changes will always result in periods of decreased SERP referrers, or even no search engine traffic at all.

Does that mean that big moves are too risky, or even not doable? Not at all. You just need deep pockets. If you lack a budget to feed the site with PPC or other bought traffic to compensate an estimated loss of organic traffic lasting at least a few weeks, but perhaps months, then don’t move. And when you move, then set up a professionally managed project, and hire experts for this task.

Here are some guidelines. I don’t provide a timeline, because that’s impossible without detailed knowledge of the individual circumstances. Adapt the procedure to fit your needs, nothing’s set in stone.

  • Set up the site on the new Web server (new.com). In robots.txt block everything except a temporary page telling that this server is the new home of your site. Link to this page to get search engines familiar with the new server, but make sure there are no links to blocked content yet.
  • Create mapping tables “old URL to new URL” (respectively algos) to prepare the 301 redirects etcetera. You could consolidate multiple pages under one redirect target and so on, but you better wait with changes like that. Do them after the move. When you keep the old site’s structure on the new server, you make the job easier for search engines.
  • If you plan to do structural changes after the move, then develop the redirects in a way that you can easily change the redirect targets on the old site, and prepare the internal redirects on the new site as well. In any case, your redirect routines must be able to redirect or not depending on parameters like site area, user agent / requestor IP and such stuff, and you need a flexible control panel as well as URL specific crawler auditing on both servers.
  • On old.com develop a server sided procedure which can add links to the new location on every page on your old domain. Identify your URLs with the lowest crawling frequency. Work out a time table for the move which considers page importance (with regard to search engine traffic) and crawl frequency.
  • Remove the Disallow: statements in the new server’s robots.txt. Create one or more XML sitemap(s) for the new server and make sure that you set crawl-priority and change-frequency accurately, and that last-modified gets populated with the scheduled beginning of the move (IOW the day the first search engine crawler can access the sitemap). Feed the engines with sitemap files listing the important URLs first. Add sitemap-autodiscovery statements to robots.txt, and manually submit the sitemaps to Google and Yahoo.
  • Fire up the scripts creating visible “this page will move to [new location] soon” links on the old pages. Monitor the crawlers on the new server. Don’t worry about duplicate content issues in this phase, “move” in the anchor text is a magic word. Do nothing until the crawlers have fetched at least the first and second link level on the new server, as well as most of the important pages.
  • Briefly explain your redirect strategy in robots.txt comments on both servers. If you can, likewise add HTML comments to the HEAD section of all pages on the old server. You will cloak for a while, and things like that can help to pass reviews by humans who might get an alert from an algo or spam report. It’s more or less impossible to redirect human traffic in chunks, because that results in annoying surfing experiences, inconsistent database updates, and other disadvantages. Search engines aren’t cruel and understand that.
  • 301 redirect all human traffic to the new server. Serve search engines the first chunk of redirecting pages. Start with a small chunk of not more than 1,000 pages or so, and bundle related pages to preserve most of the internal links within each chunk.
  • Closely monitor the crawling and indexing process of the first chunk, and don’t release the next one before it has (nearly) finished. Probably it’s necessary to handle each crawler individually.
  • Whilst you release chunk after chunk of redirects to the engines adjusting the intervals based on your experiences, contact all sites linking to you and ask for URL updates (bear in mind to delay these requests for inbound links pointing to URLs you’ll change after the move for other reasons). It helps when you offer an incentive, best let your marketing dept. handle this task (having a valid reason to get in touch with those Webmasters might open some opportunities).
  • Support the discovery crawling based on redirects and updated inbound links by releasing more and more XML sitemaps on the new server. Enabling sitemap based crawling should somewhat correlate to your release of redirect chunks. Both discovery crawling and submission based crawling share the bandwidth, respectively the amount of daily fetches the crawling engine has determined for your new server. Hence don’t disturb the balance by submitting sitemaps listing 200,000 unimportant 5th level URLs whilst a crawler processes a chunk of landing pages promoting your best selling products. You can steer sitemap autodiscovery depending on the user agent (for MSN and Ask which don’t offer submit forms) in your robots.txt (see the sketch below), in combination with submissions to Google and Yahoo. Don’t forget to maintain (delete or update frequently) the sitemaps after the move.
  • Make sure you can control your redirects forever. Pay the hosting service and the registrar of the old site for the next ten years upfront. ;)

Of course there’s no such thing as a bullet-proof procedure to move large sites, but you can do a lot to make the move as smoothly as possible.
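
As for steering sitemap autodiscovery by user agent (mentioned in the list above), here is a minimal sketch of a dynamic robots.txt - it assumes you’ve configured the server to let a PHP script answer requests of /robots.txt, and the user agent patterns are just examples:
<?php
// Serve the sitemap autodiscovery line only to engines without a submit form.
header("Content-Type: text/plain");
print "User-agent: *\n";
print "Disallow:\n";
$ua = isset($_SERVER["HTTP_USER_AGENT"]) ? $_SERVER["HTTP_USER_AGENT"] : "";
if (preg_match("/msnbot|Teoma|Ask Jeeves/i", $ua)) {
    print "\nSitemap: http://new.com/sitemap.xml\n";
}
?>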

302 - Found [Elsewhere]

The 302 redirect, like the 303/307 response code, is a kinda soft redirect. Whilst a 301-redirect indicates a hard redirect by telling the user agent that a requested address is outdated (should be deleted) and the resource must be requested under another URL, 302 (303/307) redirects can be used with URLs which are valid, and should be kept by the requestor, but don’t deliver content at the time of the request. In theory, a 302′ing URL could redirect to another URL with each and every request, and even serve contents itself every now and then.

Whilst that’s no big deal with user agents used by humans (browsers, screen readers), search engines –which crawl and index contents by following paths that must be accessible for human surfers– consider soft redirects unreliable by design. What makes indexing soft redirects a royal PITA is the fact that most soft redirects actually are meant to notify a permanent move. 302 is the default response code for all redirects, setting the correct status code is not exactly popular in developer crowds, so that gazillions of 302 redirects are syntax errors which mimic 301 redirects.

Search engines have no other chance than requesting those wrongly redirecting URLs over and over to persistently check whether the soft redirect’s functionality sticks with the implied behavior of a permanent redirect.

Also, way back when search engines interpreted soft redirects according to the HTTP standards, it was possible to hijack foreign resources with a 302 redirect and even meta refreshes. That means that a strong (high PageRank) URL 302-redirecting to a weaker (lower PageRank) URL on another server got listed on the SERPs with the contents pulled from the weak page. Since Internet marketers are smart folks, this behavior enabled creative content delivery: of course only crawlers saw the redirect, humans got a nice sales pitch.

With regard to search engines, 302 redirects should be applied very carefully, because ignorant developers and, well, questionable intentions have forced the engines to handle 302 redirects in a way that’s not exactly compliant to Web standards, but meant to be the best procedure to fit a searcher’s interests. When you do cross-domain 302s, you can’t predict whether search engines pick the source, the target, or even a completely different but nice looking URL from the target domain on their SERPs. In most cases the target URL of 302-redirects gets indexed, but according to Murphy’s law and experience of life, “99%” leaves enough room for serious messups.

Partly the common 302-confusion is based on the HTTP standard(s). With regard to SEO, response codes usable with GET and HEAD requests are more important, so I simplify things by ignoring issues with POST requests. Let’s compare the definitions:

HTTP/1.0 - 302 Moved Temporarily

The requested resource resides temporarily under a different URL. Since the redirection may be altered on occasion, the client should continue to use the Request-URI for future requests.

The URL must be given by the Location field in the response. Unless it was a HEAD request, the Entity-Body of the response should contain a short note with a hyperlink to the new URI(s).

HTTP/1.1 - 302 Found

The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

First, there’s a changed reason phrase for the 302 response code. “Moved Temporarily” became “Found” (”Found Elsewhere”), and a new response code 307 labelled “Temporary Redirect” was introduced (the other new response code 303 “See Other” is for POST results redirecting to a resource which requires a GET request).

Creatively interpreted, this change could indicate that we should replace 302 redirects applied to temporarily moved URLs with 307 redirects, reserving the 302 response code for hiccups and redirects done by the Web server itself –without an explicit redirect statement in the server’s configuration (httpd.conf or .htaccess)–, for example in response to requests of maliciously shortened URIs (of course a 301 is the right answer in this case, but some servers use the “wrong” 302 response code by default to err on the side of caution until the Webmaster sets proper canonicalization redirects returning 301 response codes).

Strictly interpreted, this change tells us that the 302 response code must not be applied to moved URLs, regardless whether the move is really a temporary replacement (during maintenance windows, to point to mirrors of pages on overcrowded servers during traffic spikes, …) or even a permanent forwarding request where somebody didn’t bother sending a status line to qualify the location directive. As for maintenance, better use 503 “Service Unavailable”!

Another important change is the non-cacheable-by-default instruction added in HTTP/1.1. Because the HTTP/1.0 standard didn’t explicitly state that the URL given in location must not be cached, some user agents did so, and the few Web developers actually reading the specs thought they’re allowed to simplify their various redirects (302′ing everything), because in the eyes of a developer nothing is really there to stay (SEOs, who handle URLs as assets, often don’t understand this philosophy, thus sadly act confrontational instead of educational).

Having said all that, is there still a valid use case for 302 redirects? Well, since 307 is an invalid response code with HTTP/1.0 requests, and crawlers still perform those, there’s no alternative to 302. Is that so? Not really, at least not when you’re dealing with overcautious search engine crawlers. Most HTTP/1.0 requests from search engines are faked, that means the crawler understands everything HTTP/1.1 but sends an HTTP/1.0 request header just in case the server has been running since the Internet’s stone age without any upgrades. Yahoo’s Slurp for example does faked HTTP/1.0 requests in general, whilst you can trust Ms. Googlebot’s request headers. If Google’s crawler does an HTTP/1.0 request, that’s either a test of the capabilities of a newly discovered server, or something went awfully wrong, usually on your side.

Google’s as well as Yahoo’s crawlers understand both the 302 and the 307 redirect (there’s no official statement from Yahoo though). But there are other Web robots out there (like link checkers of directories, or similar bots sent out by site owners to automatically remove invalid as well as redirecting links), some of them consisting of legacy code. Not to speak of ancient browsers in combination with Web servers which don’t add the hyperlink piece to 307 responses. So if you want to do everything the right way, you send 302 responses to HTTP/1.0 requestors –except when the user agent and the IP address identify a major search engine’s crawler–, and 307 responses to everything else –except when the HTTP/1.1 user agent lacks understanding of 307 response codes–. Ok, ok, ok … you’ll stick with the outdated 302 thingy. At least you won’t change old code just to make it more complex than necessary. With newish applications, which rely on state of the art technologies like AJAX anyway, you can quite safely assume that the user agents understand the 307 response, hence go for it and bury the wrecked 302, but submit only non-redirecting URLs to other places.
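
A minimal PHP sketch of that protocol based switching - simplified to look at the protocol version only, not at user agents or IP addresses, and with a made-up target URL:
<?php
// Send 307 to HTTP/1.1 requestors, fall back to 302 for HTTP/1.0 requests.
$target = "http://example.com/product-of-the-day.php";
$protocol = isset($_SERVER["SERVER_PROTOCOL"]) ? $_SERVER["SERVER_PROTOCOL"] : "HTTP/1.0";
if ($protocol == "HTTP/1.1") {
    @header("HTTP/1.1 307 Temporary Redirect", TRUE, 307);
} else {
    @header("HTTP/1.0 302 Found", TRUE, 302);
}
@header("Location: $target");
// Short hypertext note for user agents which can't handle the redirect:
print "<a href=\"$target\">$target</a>";
exit;
?>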

Here is how Google handles 302 redirects:

[Source …] you shouldn’t use it to tell the Googlebot that a page or site has moved because Googlebot will continue to crawl and index the original location.

Well, that’s not much info, and obviously a false statement. Actually, Google continues to crawl the redirecting URL, then indexes the source URL with the target’s content from redirects within a domain or subdomain only –but not always–, and mostly indexes the target URL and its content when a 302 redirect leaves the domain of the redirecting URL –if not any other URL redirecting to the same location or serving the same content looks prettier–. In most cases Google indexes the content served by the target URL, but in some cases all URL candidates involved in a redirect lose this game in favor of another URL Google has discovered on the target server (usually a short and pithy URL).

Like with 301 redirects, Yahoo “breaks the rules” with 302 redirects too:

[Source …] When one web page redirects to another web page, Yahoo! Web Search sometimes indexes the page content under the URL of the entry or “source” page, and sometimes index it under the URL of the final, destination, or “target” page. […]

When a page in one domain redirects to a page in another domain, Yahoo! records the “target” URL. […]

When a page in a domain presents a temporary redirect to another page in the same domain, Yahoo! indexes the “source” URL.

Yahoo! Web Search indexes URLs that redirect according to the general guidelines outlined above with the exception of special cases that might be read and indexed differently. […]

One of these cases where Yahoo handles redirects “differently” (meaning according to the HTTP standards) is a soft redirect from the root index page to a deep page. Like with a 301 redirect, Yahoo indexes the home page URL with the contents served by the redirect’s target.

You see that there are not that many advantages to 302 redirects pointing to other servers. Those redirects are most likely understood as somewhat permanent redirects, which means that the engines most probably crawl the redirecting URLs at a lower crawl frequency than 307 redirects.

If you have URLs which change their contents quite frequently by redirecting to different resources (from the same domain or on another server), and you want search engines to index and rank those timely contents, then consider the hassles of IP/UA based response codes depending on the protocol version. Also, feed those URLs with as many links as you can, and list them in an XML sitemap with a high priority value, a last modified timestamp like the request timestamp minus a few seconds, and an "always", "hourly" or "daily" change frequency tag. Do that even when you, for whatever reason, have no XML sitemap at all. There's no better procedure to pass such special instructions to crawlers; even an XML sitemap listing only the ever-changing URLs should do the trick.
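For what it's worth, here's a rough sketch of such a sitemap entry, generated by a tiny PHP script so the lastmod value always reads "a few seconds ago". The URL, change frequency and priority are placeholders, of course.

<?php
// sitemap.php - sketch: list an ever-changing (redirecting) URL with a
// fresh lastmod, an aggressive change frequency, and a high priority.
header('Content-Type: application/xml; charset=utf-8');
$lastmod = gmdate('c', time() - 10); // request timestamp minus a few seconds
echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/todays-special</loc>
    <lastmod><?php echo $lastmod; ?></lastmod>
    <changefreq>hourly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>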

If you promote your top level page but pull the contents from deep pages or scripts, then a 302 meant as 307 from the root to the output device is a common way to avoid duplicate content issues while serving contents depending on other request signals than the URI alone (cookies, geo targeting, referrer analysis, …). However, that's a case where you can avoid the redirect. Duplicating one deep page's content on root level is a non-issue, whereas a superfluous redirect is an issue, with regard to performance at least, and it sometimes slows down crawling and indexing. When you output different contents depending on user specific parameters, treating crawlers as users is easy to accomplish. I'd just make the root index default document a script outputting the former redirect's target. That's a simple solution without redirecting anyone (and it sometimes directly feeds the top level URL with PageRank from user links to their individual "home pages").
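A minimal sketch of that approach, assuming the former redirect target was a script like /deep/landing.php (the path is made up):

<?php
// index.php - root default document. Instead of 302'ing the root URL to a
// deep page, output that page's content directly, so the top level URL
// keeps the content and the link juice pointing at it.
// You can still pick a different script per cookie, geo data, referrer,
// or verified crawler status before this line.
require $_SERVER['DOCUMENT_ROOT'] . '/deep/landing.php';
?>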

307 - Temporary Redirect

Well, since the 307 redirect is the 302's official successor, I've told you nearly everything about it in the 302 section. Here is the HTTP/1.1 definition:

307 Temporary Redirect

The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s), since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI.

The 307 redirect was introduced with HTTP/1.1, hence some user agents doing HTTP/1.0 requests do not understand it. Some! Actually, many user agents fake the protocol version in order to avoid conflicts with older Web servers. Search engines like Yahoo, for example, perform faked HTTP/1.0 requests in general, although their crawlers do talk HTTP/1.1. If you make use of the feedburner plugin to redirect your WordPress feeds to feedburner.com/yourfeed, or to feeds.yourdomain.com resolving to feedburner.com/yourfeed, you'll notice that Yahoo bots do follow 307 redirects, although Yahoo's official documentation does not even mention the 307 response code.

Google states how they handle 307 redirects as follows:

[Source …] The server is currently responding to the request with a page from a different location, but the requestor should continue to use the original location for future requests. This code is similar to a 301 in that for a GET or HEAD request, it automatically forwards the requestor to a different location, but you shouldn’t use it to tell the Googlebot that a page or site has moved because Googlebot will continue to crawl and index the original location.

Well, a summary of the HTTP standard plus a quote from the 302 page is not exactly a comprehensive help topic. However, checked against the feedburner example, Google understands 307s as well.

A 307 should be used when a particular URL, for whatever reason, must point to an external resource. When you for example burn your feeds, redirecting your blog software's feed URLs with a 307 response code to "your" feed at feedburner.com or another service is the way to go. In this case it plays no role that many HTTP/1.0 user agents don't know shit about the 307 response code, because all software dealing with RSS feeds can understand and handle HTTP/1.1 response codes, or at least interpret the 3xx class and request the feed from the URI provided in the header's location field. More importantly, because with a 307 redirect each revisit has to start at the redirecting URL to fetch the destination URI, you can move your burned feed to another service, or serve it yourself, whenever you choose to do so, without dealing with long-term cache issues.
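The feedburner plugin does this for WordPress; if you serve the redirect yourself, a sketch could look like this (the feed address is a placeholder):

<?php
// feed.php - hand the blog's feed URL off to the burned feed with a 307,
// so the subscription address stays under your control and you can point
// it elsewhere whenever you like.
header('HTTP/1.1 307 Temporary Redirect');
header('Location: http://feeds.feedburner.com/yourfeed');
exit; // nothing after the last header statement
?>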

302 temporary redirects might result in cached addresses from the location's URL due to an imprecise specification in the HTTP/1.0 protocol, but that shouldn't happen with HTTP/1.1 response codes, which, in the 3xx class, all clearly tell what's cacheable and what's not.

When your site's logs show only a tiny amount of actual HTTP/1.0 requests (eliminate crawlers of major search engines for this report), you really should do 307 redirects instead of wrecked 302s. Of course, avoiding redirects where possible is always the better choice, and don't apply 307 redirects to moved URLs.
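If you want a quick and dirty count, here's a sketch that tallies real HTTP/1.0 vs. HTTP/1.1 requests from an Apache style access log; the log path is a placeholder and the crawler filter is simplistic on purpose.

<?php
// Rough count of actual HTTP/1.0 vs. HTTP/1.1 requests in an access log
// (common/combined log format assumed). Major crawlers are skipped because
// they fake HTTP/1.0 request headers anyway.
$logFile  = '/var/log/apache2/access.log'; // placeholder path
$crawlers = array('Googlebot', 'Slurp', 'msnbot');
$counts   = array('HTTP/1.0' => 0, 'HTTP/1.1' => 0);

foreach (file($logFile) as $line) {
    foreach ($crawlers as $bot) {
        if (stripos($line, $bot) !== false) {
            continue 2; // ignore crawler hits for this report
        }
    }
    if (preg_match('#"[A-Z]+ \S+ (HTTP/1\.[01])"#', $line, $match)) {
        $counts[$match[1]]++;
    }
}
print_r($counts);
?>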

Recap

Here are the bold sentences again. Hop to the sections via the table of contents.

  • Avoid redirects where you can. URLs, especially linked URLs, are assets. Often you can include other contents instead of performing a redirect to another resource. Also, there are hyperlinks.
  • Search engines process HTTP redirects (301, 302 and 307) as well as meta refreshes. If you can, always go for the cleaner server sided redirect.
  • Always redirect to the final destination to avoid useless hops which kill your search engine traffic. With each and every revamp that comes with URL changes, check for incoming redirects and make sure that you eliminate unnecessary hops.
  • You must maintain your redirects forever, and you must not remove (permanent) redirects. Document all redirects, especially when you do redirects both in the server configuration as well as in scripts.
  • Check your logs for redirects done by the Web server itself and for unusual 404 errors. Vicious Web services like Yahoo or MSN screw your URLs to get you into duplicate content troubles with Google.
  • Don’t track links with redirecting scripts. Avoid redirect scripts in favor of link attributes. Don’t hoard PageRank by routing outgoing links via an uncrawlable redirect script, don’t buy too much of the search engine FUD, and don’t implement crappy advice from Webmaster hangouts.
  • Clever redirects are your friend when you handle incoming and outgoing affiliate links. Smart IP/UA based URL cloaking with permanent redirects makes you independent from search engine canonicalization routines which can fail, and improves your overall search engine visibility.
  • Do not output anything before an HTTP redirect, and terminate the script after the last header statement.
  • For each server sided redirect, send an HTTP status line with a well-chosen response code, and an absolute (fully qualified) URL in the location field. Consider tagging the redirecting script in the header (X-Redirect-Src); see the sketch after this list.
  • Put any redirect logic at the very top of your scripts. Encapsulate redirect routines. Performance is not everything, transparency is important when the shit hits the fan.
  • Test all your redirects with server header checkers for the right response code and a working location. If you forget an HTTP status line, you get a 302 redirect regardless of your intention.
  • With canonicalization redirects, use "not equal" conditions to cover everything. Most .htaccess code posted on Webmaster boards, supposed to fix for example www vs. non-www issues, is unusable. If you reply "thanks" to such a post with your URL in the signature, you invite saboteurs to make use of the exploits.
  • Use only 301 redirects to handle permanently moved URLs and canonicalization. Use 301 redirects only for persistent decisions. In other words, don’t blindly 301 everything.
  • Don't redirect too many URLs simultaneously; move large amounts of pages in smaller chunks.
  • 99% of all 302 redirects are either syntax errors or semantically crap, but there are still some use cases for search engine friendly 302 redirects. “Moved URLs” is not on that list.
  • The 307 redirect can replace most wrecked 302 redirects, at least in current environments.
  • Search engines do not handle redirects according to the HTTP specs any more. At least not when a redirect points to an external resource.
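To wrap up the script side of the recap (status line, absolute location, X-Redirect-Src tag, "not equal" canonicalization conditions, and nothing output around the headers), here's a minimal sketch; the canonical host name and the tag value are placeholders, not gospel.

<?php
// Sketch: canonicalization redirect with a "not equal" condition, an
// explicit status line, an absolute location, and an optional tag naming
// the redirecting script. Terminates right after the headers.
$canonicalHost = 'www.example.com'; // placeholder

if ($_SERVER['HTTP_HOST'] !== $canonicalHost) {
    // "Not equal" covers every flawed host name, not just the non-www case.
    header('HTTP/1.1 301 Moved Permanently');
    header('X-Redirect-Src: canonicalization/index.php'); // tag the redirecting script
    header('Location: http://' . $canonicalHost . $_SERVER['REQUEST_URI']);
    exit; // nothing after the last header statement
}
?>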

I've asked Google in their popular picks campaign for a comprehensive write-up on redirects (which is part of the ongoing help system revamp anyway, but I'm either greedy or not patient enough). If my question gets picked, I'll update this post.

Did I forget anything else? If so, please submit a comment. ;)




A Monday’s topic conglomerate

I’m writing a longish post on how to never fuck up redirects again, stay tuned. If you want me to babble about a particular topic related to 301/302/307 redirects, please submit it in the comments or drop me a message. Although I’m busy with this article and other not so important tasks like real work, I’d like to mention a few things.

Lucia made a plugin from my WordPress URL canonicalization bugfix. Neat. :)

Marty wants all of us to link to the NYC Search Marketers' Party During SMX to Beat Lymphoma on October 15, 2007. If you're in NY next week, then please donate $40 at the door to help the Leukemia and Lymphoma Society fight cancer, and enjoy three hours of open bar partying with fellow social internet marketers. I wasn't tagged yet in this meme, but I spotted Marty's call for action at Sphinn and added the link to my sidebar. I'm tagging John, John, and John.

David tagged me with a Google Sandbox meme asking for a wish. Well, I wish my darn rheumatism would allow me to play beach volleyball in the canonical sandbox over at the Googleplex. Because that’s not likely to happen anytime soon, I’d be happy with a GWC tool reporting incoming anchor text by landing page, inbound links ordered by importance, not commonness. Well, with this meme I can’t tag a Googler again, so I forward the question to Ralph, Mark and Richard.

After a painful long abstinence, tonight I’ve got a babysitter, so I can grab a few pints of Guinness in my favorite pub. Cheers.



