Fraud from the desk of Sebastian Foss

Frequently I receive emails from very angry people complaining about various “SEO tools” and “Money Making Software” delivered from the desk of Sebastian Foss (AdBlaster, Instant Booster, eZine blaster, Blog Blaster, Feed Blaster, Newsgroup Blaster, eBay Cash Machine, Doorway Page Generator, Google Cash Machine, and countless more scams). These tools got their sites banned, or just didn’t work as promised, and the victims ask me for refunds and demand compensation. I’m sick of replying to all these emails to set the record straight, so here is the guy’s address:

e-trinity Internetmarketing Ltd.
Sebastian Foss
Böhler Str.14
Lindlar, 51789, North Rhine-Westphalia
Germany
Phone: +49 2266 478 230
Fax: +49 2266 478 197
email: sebastian@etrinity-mail.com

I suffer from his fraudulent and spammy activities too. I find his URLs in my referrer stats, I receive his email spam from the desk of Sebastian Foss, and people get mad at me because they assume I’m him, just because I blog about SEO and Internet marketing.

The last email spam I got from Sebastian Foss at promote-biz.net came with an attached HTML file. A smart investigator should be capable of tracing the URL on promote-biz.net back to e-trinity Internetmarketing Ltd., Sebastian Foss’ company in Lindlar, Germany.

If you’re sick of spam and scams from the desk of Sebastian Foss too, then turn him in. Last time I looked, sending out email spam is a crime in Germany. In case the spam report form below (courtesy of the German cops) doesn’t work, here you go: Police Lindlar, Germany.
SPAM REPORT (yellow background = mandatory)

The form asks for:

  • Your coordinates
  • What to report
  • Internet address (URL), IP address, channel, email-ID (email header), and other information useful to track down the issue
  • Details (mandatory!)
  • Witnesses (if any, provide names and addresses)
  • Perpetrator and site of crime
Here’s a tiny sample of domains related to or operated by Sebastian Foss, whom Rip Off Report calls “[one of] the biggest scam artist[s] on the Internet”:
10-thousand-dollars.biz 101-website-traffic.com 2click.com auction-machine.com automatedriches.com automatic-mailer.com blog-blast.com blog-blaster.com cashcreation.com clickedcash.com dollarbuddy.biz etrinity.com feed-blast.com free-traffic-handbook.com hit-booster.com hitworkz.com income-builder.com income-machine.com incomeuniversity.com instant-booster.com megapromoter.com megawealthpackage.com minuteprofits.com money-license.com moneybank.com plugin-income.com press-blast.com promote-biz.net promotionpalace.com sebastianfoss.com seo-secret.com submit-it-easy.com …

It doesn’t hurt to link to this post with “Sebastian Foss” in the anchor text ;)

Update: I received a threat from Sebastian Foss, so I’ve edited this post (look for original text followed by changes). I’m not 100% certain which instance of “Sebastian Foss” sends out the email spam, but all known instances of Sebastian Foss are obviously spammers. More information on the spammer Sebastian Foss and his clones, or rather his multiple/virtual personalities.




Help Google reveal the secret sauce!

Do you remember this Do’s & Don’ts page?

Google Information for Webmasters
Webmaster Dos and Don’ts
Do:

  • Create a site with content and design that are straightforward, appropriate and relevant for visitors to your site.
  • Feel free to exchange links with other sites that are compatible with your site’s content and users’ interests.
  • Be very careful about allowing an individual consultant or company to ‘optimize’ your web site. Chances are they will engage in some of our "Don’ts" and end up hurting your site.
  • Consider submitting your sites to our partner directories Yahoo! and DMOZ.

Don’t:

  • Cloak.
  • Write text or create links that can be seen by search engines but not by visitors to your site.
  • Participate in link exchanges for the sole purpose of increasing your ranking in search engines.
  • Send automated queries to Google in an attempt to monitor your site’s ranking.
  • Use programs that generate lots of generic doorway pages.

http://www.google.com/webmasters/dos.html five years ago (restored)

Those are Google’s Webmaster guidelines as of 2002, when the Webmaster’s section covered all topics on a dozen or so pages. In the meantime it has been translated into many languages, and has grown considerably. Today’s Webmaster Help Center is an authoritative resource for experienced search geeks who are also able to gather the tidbits various Googlers spread across the Web.

That’s going to change. Ríona MacNamara from Google’s Webmaster Central team in Kirkland asks for ideas on How to revamp Google’s Webmaster Help Center:

We’re planning to restructure the Webmaster Tools Help Center to improve the way we organize and present help content. We want to make sure that our content is technically accurate, relevant, and up to date, and that it’s easy to navigate and find exactly what you’re looking for. Is the content broad enough in scope? Deep enough in detail? Does it have the right mix of instructional and conceptual info? […] Is the Help Center — well, helpful?

I hope that Google is willing to evolve the Webmaster Help Center into a useful resource for spare-time Webmasters, site owners, publishers, bloggers and other non-geeks, along with in-depth information addressing search geeks. Assuming that in its current shape it’s meant to help out non-search-geeks, I must state that it hosts some of the worst FAQ items ever. The contents are certainly helpful if the reader has a great deal of Google-specific knowledge, experience in reading Google-ish text, and knows what to take with a grain of salt, because Google can’t spell out exactly how the cookie crumbles without giving away their secret sauce. Well, instead of reading rants, or bitching yourself, why not add your $0.02?

Click here to tell Google what you want and expect.

Please don’t get fooled by “Tools” in the thread title. The tools are nicely explained; what we want is the secret sauce dumped into the general help system ;)




No search, more fun: Netscape spamming Google

Google dislikes crawlable SERPs. But Google still indexes huge chunks of SERPs, and to make it worse, these disliked URLs sometimes rank above other useless webspam from Amazon, eBay, and cohorts on the very first search result page.

For example, Netscape is still flooding Google’s search index with crap in violation of the quality guidelines, which clearly state:

Use robots.txt to prevent crawling of search results pages […] that don’t add much value for users coming from search engines.

Netscape.com lacks a robots.txt at all, but how many patterns would it take to identify these pages as SERPs? Next, search.netscape.com does have a robots.txt, but it lacks a Disallow: / directive, or at least Disallow directives covering all the scripts that generate search results.
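
For search.netscape.com, a minimal sketch of a compliant robots.txt (assuming the whole subdomain serves nothing but search results; if not, list the actual search scripts in separate Disallow lines instead):

# robots.txt for search.netscape.com (sketch)
User-agent: *
Disallow: /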

Is it really that simple to get gazillions of useless autogenerated pages ranking at Google? Indeed. Following the Netscape precedent, every assclown out there can buy an SE script, crawl the Web for a bunch of niche keywords, and earn free Google traffic just because he has “forgotten” to upload a proper robots.txt file and Google isn’t capable of detecting SERPs. I mean, if they don’t even run a few tests with Netscape SERPs, what’s the point of an unenforced no-crawlable-SERPs policy?

I just found another interesting snippet in Google’s quality guidelines:

If a site doesn’t meet our quality guidelines, it may be blocked from the index.

I certainly will not miss 1,360,000 URLs from a spamming site ;)




Follow-up: Erol’s patch fixing Google troubles

Erol’s developers are testing their first Google patch for sites hosted on UNIX boxes. You can preview it here: x55.html. When you request the page with a search engine crawler identifier as user agent name, the JavaScript code redirecting to the frameset erol.html#55×0&& gets replaced with an HTML comment explaining why human visitors are treated differently from search engine spiders. The anatomy of this patch is described here; your feedback is welcome.

Erol told me they will be running tests on this site over the coming weeks, as they always do before going live with an update, so stay tuned for the release. Once things run smoothly on UNIX hosts, a patch for Windows environments shall follow. On IIS the implementation is a bit trickier, because it needs changes to the server configuration. I’ll keep you updated.




When your referrer stats turn into a porn TGP

If you wonder why your top referrers are porn galleries, make-you-rich-in-a-second scams, and other pages which don’t carry your link but try to sell you something, read on.

Referrer spamming is done by bots requesting pages from your site while leaving a bogus HTTP_REFERER. These spam bots come from various IPs, change their user agents on the fly, and use other sneaky techniques to slip through spam protection. Some of them are somewhat clever and adjust the number of bogus requests to your site based on your Alexa stats, to ensure their “visits” appear on length-limited realtime referrer lists and other stats ordered by referrer. Some of them even suck whole pages from your server, and a few even follow redirects.

So what can you do? Not much. You can’t really get rid of these log entries, because the logs are written before your spam protection handles those requests. But you can reduce the waste of bandwidth and server resources: if you redirect these requests, your server sends only a header, not the contents. Here is a way to accomplish that.

First of all, extract the bogus referrers from your logs or stats pages, and save them in a plain text file.
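A hypothetical sample (the paths are invented for illustration; the domains are the ones used in the .htaccess code below):

http://galleries.collegefuckfest.com/042/index.html
http://www.asstraffic.com/enter.html
http://www.allinternal.com/galleries/07.html
http://www.mature-lessons.com/lesson1.html
http://www.wildpass.com/tour/index.html
http://www.promote-biz.net/instant-booster.html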
Reduce this to a list of domains, truncating subdomains like “www” or “galleries”, and add .htaccess code:

SetEnvIf Referer \.collegefuckfest\.com GoFuckYourself=1
SetEnvIf Referer \.asstraffic\.com GoFuckYourself=1
SetEnvIf Referer \.allinternal\.com GoFuckYourself=1
SetEnvIf Referer \.mature-lessons\.com GoFuckYourself=1
SetEnvIf Referer \.wildpass\.com GoFuckYourself=1
SetEnvIf Referer \.promote-biz\.net GoFuckYourself=1
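
(Note the escaped dots in the regular expressions. SetEnvIf matches case-sensitively; if spammers vary the case of their domains, Apache’s SetEnvIfNoCase variant does the same job case-insensitively.)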

This code creates an environment variable “GoFuckYourself” with the value “1”. Subsequent statements can now act on these marked requests:

RewriteEngine On
RewriteCond %{ENV:GoFuckYourself} 1 [NC]
RewriteRule .* %{HTTP_REFERER} [R=301,L]

This redirects the request to its referrer, so if the bogus bot follows redirects, it will request a page from the spammer’s own domain. Of course you can redirect to a static URL too:
RewriteRule .* http://www.example.com/gofuckyourself [R=301,L]

You could also use the environment variable in deny statements
order allow,deny
allow from all
deny from env=GoFuckYourself

but that will serve a complete page, and it may produce an infinite loop: deny, as well as the similar RewriteRule .* - [F], enforces a 403-Forbidden response. If you then have an ErrorDocument 403 /getthefuckouttahere.html directive, the request for the error page runs into the 403 itself, and this process calls itself over and over until it gets terminated after 20 or so loops.
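
If you prefer the 403 route anyway, here’s a sketch of a way around that trap: exempt the error document itself from the block, so the internal redirect to it succeeds (file name taken from the example above):

RewriteEngine On
ErrorDocument 403 /getthefuckouttahere.html
RewriteCond %{ENV:GoFuckYourself} 1
# don't block the 403 page itself, or the internal redirect loops
RewriteCond %{REQUEST_URI} !^/getthefuckouttahere\.html$
RewriteRule .* - [F,L]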




German spammers banning all domains out there

If you receive an email in German from Google’s Search Quality team (donotreply@gmail.com) telling you that your site was banned by Google for 30 days, please don’t worry: it’s a fake. Legit (similarly phrased) emails come from a google.com email address. If the hoax email comes with an attachment, don’t save or open the attached file (a zipped google_webmastertools.exe)!

Here is the email:
Entfernung Ihrer Webseite [domain] aus dem Google Index (“Removal of your website [domain] from the Google index”)
The email looks pretty authentic; its style and wording are somewhat Google-ish. I speak German, hence I’m sure that gazillions of innocent Webmasters and site owners buy it and panic. Unfortunately most filters let the zipped attachment (google_webmastertools.exe) pass through. I didn’t open it myself, and I bet it’s not a bright idea to try.

Google told me that Stefanie from the real Search Quality team over in Dublin will soon post a warning on the German blog.

Here is an original penalty warning in German:
Entfernung Ihrer Webseite aus dem Google Index (“Removal of your website from the Google index”)
These emails are sent from donotreply@google.com, without attachments.

Update 05/10/2007: Here is Google’s official statement (in German) and the English version by Vanessa. The attached .exe is a joke: it executes a cmd.exe command to clear the complete hard disk (Hoax.BAT.Small.a).

Update 05/11/2007: Because these emails are easy to mistake for authentic ones from the Search Quality team, Google has temporarily stopped sending them while working on more secure communication mechanisms. This update reads as if Google has stopped sending out penalty notification emails in all languages: “… as we’ve temporarily stopped sending emails about guidelines violations, you can safely assume that any email you receive isn’t from us. Note that we do provide information about some violations in webmaster tools.”

Update 06/19/2007: German forums and blogs report another flood of these faked emails, and this post gets tons of visits from searches for quotes from the email above. Calm down, don’t panic: Google still doesn’t send out penalty notifications via email (explained here in German). So please ignore the spam, and refer to the diagnostics tab in your Webmaster Central account when you suspect a penalty.

Update 07/18/2007: Google released the message center where site owners can poll for penalty notifications. They are still working on a safe solution for emails. Probably ‘-950/-30/-n penalties’ won’t get announced any time soon.




Brittany-Spear-Nude-mesothelioma-ringtones

John Brittany Spear, blogging nude about mesothelioma and ringtones all day long, asked me to introduce the GoogleWhack Brittany-Spear-Nude-mesothelioma-ringtones.

Well, that gets me somewhat nervous, coz Brittany Spear sounds more like a pretty handsome gal. Anybody got a nude pic to download? Actually, I can’t imagine a blog titled John “Brittany Spear” talking about mesothelioma ringtones. By the way, I know what a ringtone is, but how the heck can a cell phone sound like a tumor? Is there a place to download free mesothelioma ringtones for my mobile phone? Or is that a lawyer’s trick to get me sick of Brittany Spear, whoever that may be, and however she may look undressed? Not that I dislike nude Brittanies, in fact I do love a naked Brittany for breakfast, but I’m not sure I’d download a nudist suffering from mesothelioma at an all-for-free ringtone site.

Also, what will the almighty Google think about my keyword stuffing when it comes to ringtones related to mesothelioma, discussed by a nude Brittany Spear selling PR8 links for as low as $299.00? Is that fun or spam? Go figure …

Actually, I deserve the pain caused by exposure to asbestos fibres, particularly those of crocidolite, whose fibres are thin and straight and penetrate to the deep layers of the lung.




Yahoo! search going to torture Webmasters

According to Danny, Yahoo! search now supports a multi-class nonsense called the robots-nocontent tag. CRAP ALERT!

Can you senseless and cruel folks at Yahoo! search imagine how many of my clients who’d like to use that feature have copied and pasted their pages? Do you have a clue how many sites out there don’t make use of SSI, PHP or ASP includes, how many sites have never heard of dynamic content delivery, and how many sites can’t use proper content delivery techniques because they have to deal with legacy systems and ancient business processes? Did you ask how common templated Web design is, and I mean the weird static variant, where a new page gets built from a randomly selected source page saved as new-page.html?

It’s great that you came out with a bastardized copy of Google’s somewhat hapless (in the sense of cluttering structured code) section targeting, because we dreadfully need that functionality across all engines. And I admit that your approach is a little better than AdSense section targeting, because you don’t mark up payload and paydirt with comments. But why the heck did you design it that crappily? The half-baked draft of a microformat from which you’ve “stolen” that unfortunate idea didn’t become a standard for a very good reason: it’s crap. Assigning multiple class names to markup elements for the sole purpose of setting crawler directives is as crappy as inline style assignments.
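
For reference, this is roughly what using it looks like; you add the class value to every single element Yahoo! should ignore (markup sketched by me from Yahoo’s announcement, not copied from their docs):

<div class="robots-nocontent">navigation, ads, boilerplate: hidden from Yahoo!'s indexer</div>
<p class="content robots-nocontent">existing class attributes must be extended, element by element</p>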

Well, due to my zero-bullshit tolerance I’m somewhat upset, so I repeat: Yahoo’s robots-nocontent class name is crap by design. Don’t use it, boycott it, because if you make use of it you’ll end up changing gazillions of files for each and every proprietary syntax a single search engine supports in the future. If the united search geeks can agree on flawed standards like rel-nofollow, they should be able to talk about a sensible evolution of robots.txt.

There’s a way easier solution which doesn’t require editing tons of source files: standardize a CSS-like syntax to assign crawler directives to existing classes and DOM IDs. For example, extend the robots.txt syntax like this:

A.advertising { rel: nofollow; } /* devalue aff links */

DIV.hMenu, TD#bNav { content:noindex; rel:nofollow; } /* make site wide links unsearchable */
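
With that in place, the markup itself stays untouched. A hypothetical page fragment the rules above would cover:

<div class="hMenu">…site-wide header navigation…</div>
<td id="bNav">…site-wide bottom links…</td>
<a class="advertising" href="http://affiliate.example.com/?id=42">sponsor</a>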

Unsupported robots.txt syntax does no harm; proprietary attempts do harm!

Dear search engines, get together and define something useful, before each of you comes out with different half-baked workarounds like section targeting or robots-nocontent class values. Thanks!




Google hunts paid links and reciprocal linkage

Matt Cutts and Adam Lasnik have clarified Google’s take on paid links and overdone reciprocal linkage. Some of their statements are old news, but it surely helps to have a comprehensive round-up in the context of the current debate on paid links.

So what, in short, does Google consider linkspam? Artificial link schemes, paid links and uncondomized affiliate links, overdone reciprocal linkage, and excessive interlinking.

All sorts of link schemes designed to increase a site’s ranking or PageRank. Link scheme means, for example, mass exchange of links pages, repeated chunks of links per site, fishy footer links, triangular PageRank boosting, 27-way linkage where in the end only the initiator earns a few inbounds because the participants are confused, and “ingenious” stuff like that. Google’s pretty good at identifying link farming, and bans or penalizes accordingly. That’s old news, but such techniques are still in wide use.

Advice: don’t participate, Google will catch you eventually.

Paid links, if detected or reported, get devalued. That is, they don’t help the link destination’s search engine rankings, and in some cases the source loses its ability to pass reputation via links. Google has done this more or less silently since at least 2003, probably longer, but until today there was no precise definition of risky paid links.

That’s going to change. Adam Lasnik, commenting on Eric Enge’s remark “It seems to me that one of the more challenging aspects of all of this is that people have gotten really good at buying a link that show no indication that they are purchased.”

Yes and no, actually. One of the things I think Matt has commented about in his blog; it’s what we jokingly refer to as famous last words, which is “well, I have come up with a way to buy links that is completely undetectable”.

As people have pointed out, Google buys advertising, and a lot of other great sites engage in both the buying and selling of advertising. There is no problem with that whatsoever. The problem is that we’ve seen quite a bit of buying and selling for the very clear purpose of transferring PageRank. Some times we see people out there saying “hey, I’ve got a PR8 site” and, “this will give you some great Google boost, and I am selling it for just three hundred a month”. Well, that’s blunt, and that’s clearly in violation of the “do not engage in linking schemes that are not permitted within the webmaster guidelines”.

Two, taking a step back, our goal is not to catch one hundred percent of paid links [emphasis mine]. It’s to try to address the egregious behavior of buying and selling the links that focus on the passing of PageRank. That type of behavior is a lot more readily identifiable than I think people give us credit for.

So it seems Google’s just after PageRank selling. Adam’s following comments on the use and abuse of rel-nofollow emphasize this interpretation:

I understand there has been some confusion on that, both in terms of how it [rel=nofollow] works or why it should be used. We want links to be treated and used primarily as votes for a site, or to say I think this is an interesting site, and good site. The buying and selling of links without the use of Nofollow, or JavaScript links, or redirects has unfortunately harmed that goal. We realize we cannot turn the web back to when it was completely noncommercial and we don’t want to do that [emphasis mine]. Because, obviously as Google, we firmly believe that commerce has an important role on the Internet. But, we want to bring a bit of authenticity back to the linking structure of the web. […] our interest isn’t in finding and taking care of a hundred percent of links that may or may not pass PageRank. But, as you point out relevance is definitely important and useful, and if you previously bought or sold a link without Nofollow, this is not the end of the world. We are looking for larger and more significant patterns [emphasis mine].

Don’t miss out on Eric Enge’s complete interview with Adam Lasnik; it’s really worth bookmarking for future reference!

Matt Cutts has updated (May 12th, 2007) an older and well-linked post on paid links. It also covers thoughts on the value of directory links. Here are a few quotes, but don’t miss out on Matt’s post:

… we’re open to semi-automatic approaches to ignore paid links, which could include the best of algorithmic and manual approaches.

Q: Now when you say “paid links”, what exactly do you mean by that? Do you view all paid links as potential violations of Google’s quality guidelines?
A: Good question. As someone working on quality and relevance at Google, my bottom-line concern is clean and relevant search results on Google. As such, I care about paid links that flow PageRank and attempt to game Google’s rankings. I’m not worried about links that are paid but don’t affect search engines. So when I say “paid links” it’s pretty safe to add in your head “paid links that flow PageRank and attempt to game Google’s rankings.”

Q: This is all well and fine, but I decide what to do on my site. I can do anything I want on it, including selling links.
A: You’re 100% right; you can do absolutely anything you want on your site. But in the same way, I believe Google has the right to do whatever we think is best (in our index, algorithms, or scoring) to return relevant results.

Q: Hey, as long as we’re talking about directories, can you talk about the role of directories, some of whom charge for a reviewer to evaluate them?
A: I’ll try to give a few rules of thumb to think about when looking at a directory. When considering submitting to a directory, I’d ask questions like:
- Does the directory reject URLs? If every URL passes a review, the directory gets closer to just a list of links or a free-for-all link site.
- What is the quality of urls in the directory? Suppose a site rejects 25% of submissions, but the urls that are accepted/listed are still quite low-quality or spammy. That doesn’t speak well to the quality of the directory.
- If there is a fee, what’s the purpose of the fee? For a high-quality directory, the fee is primarily for the time/effort for someone to do a genuine evaluation of a url or site.
Those are a few factors I’d consider. If you put on your user hat and ask “Does this seem like a high-quality directory to me?” you can usually get a pretty good sense as well, or ask a few friends for their take on a particular directory.

To get a better idea on how Google’s search quality team chases paid links, read Brian White’s post Paid Link Schemes Inside Original Content.

Advice: either nofollow paid links, or don’t get caught. If you buy links, pay only for the traffic, because with or without a link condom there’s no search engine love involved.

Affiliate links are seen as kind of a subset of paid links. Google can identify most (unmasked) affiliate links. Frankly, there’s no advantage in passing link love to sponsors.

Advice: nofollow.
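
A condomized affiliate link, sketched with a made-up URL and tracking parameter:

<a href="http://sponsor.example.com/?affid=12345" rel="nofollow">sponsor</a>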

Reciprocal links without much doubt nullify each other. Overdone reciprocal linkage may even cause penalties; that is, the reciprocal links area of a site gets classified as a link farm (for possible consequences scroll up a bit). Reciprocal links are natural links, and Google honors them as long as the link profile of a site or network doesn’t consist of an unnaturally high number of reciprocal or triangular link exchanges. It may be that natural reciprocal links pass (at least a portion of) PageRank, but no relevancy via anchor text, trust, or other link reputation (or at least less than one-way links pass).

Matt Cutts discussing “Google Hell”:

Reciprocal links by themselves aren’t automatically bad, but we’ve communicated before that there is such a thing as excessive reciprocal linking. […] As Google changes algorithms over time, excessive reciprocal links will probably carry less weight. That could also account for a site having more pages in supplemental results if excessive reciprocal links (or other link-building techniques) begin to be counted less. As I said in January: “The approach I’d recommend in that case is to use solid white-hat SEO to get high-quality links (e.g. editorially given by other sites on the basis of merit).”

Advice: It’s safe to consider reciprocal links somewhat helpful, but don’t actively chase reciprocal links.

Interlinking all sites in a network can be counterproductive, but selfish cross-linking is not penalized in general. There’s no “interlinking penalty” when these links make sound business sense, even when the interlinked sites aren’t topically related. Interlinking sites covering each and every yellow pages category, on the other hand, may be considered overdone. In some industries like adult entertainment, where it’s hard to gain natural links, many webmasters try to boost their rankings with links from other (unrelated) sites they own or control. Operating hundreds or thousands of interlinked travel sites spread over many domains and subdomains is risky too. In the best case such linking patterns are just ignored by Google, that is, they have no or very low impact on rankings, but it’s easy to convert an honest network into a link farm by mistake.

Advice: Carefully interlink your own sites in smaller networks; in huge clusters, partition these links by theme or branch. Consider consolidating closely related sites.

So what does all that mean for Webmasters?

Some might argue “if it ain’t broke don’t fix it”, in other words “why should I revamp my linkage when I rank fine?”. Well, rules like “any attempt to improve on a system that already works is pointless and may even be detrimental” are themselves pointless and detrimental in a context where everything changes daily. Especially when tiny link systems designed to fool another system passively interact with that huge system (the search engine polls linkage data for all kinds of analyses). In that case the large system can change the rules of the game at any time to outsmart all the tiny cheats. So just because Google hasn’t discovered every link scheme or shabby reciprocal link cycle out there, that doesn’t mean the participants are safe forever. Nothing’s set in stone, not even rankings, so better revise your ancient sins.

Bear in mind that Google maintains a database containing all links in the known universe back to 1998 or so, and that a current penalty may be the result of a historical analysis of a site’s linking attitude. So if a site is squeaky clean today but doesn’t rank adequately, consider a reinclusion request if you’ve cheated in the past.

Before you blame penalties for downranked or even vanished pages, analyze your inbound links; they might have started counting for less. Pull all your inbound links from Site Explorer or Webmaster Central, then remove questionable sources from the list:

  • Paid links and affiliate links where you 301-redirect all landing pages with affiliate IDs in the query string to a canonical landing page,
  • Links from fishy directories, links lists, FFAs, top rank lists, DMOZ-clones and stuff like that,
  • Links from URLs which may be considered search results,
  • Links from sites you control or which live off your contents,
  • Links from sites engaged in reciprocal link swaps with your sites,
  • Links from sites which link out to too many questionable pages in link directories or where users can insert links without editorial control,
  • Links from shabby sites regardless of their toolbar PageRank,
  • Links from links pages which don’t provide editorial contents,
  • Links from blog comments, forum signatures, guestbooks and other places where you can easily drop URLs,
  • Nofollow’ed links and links routed via uncrawlable redirect scripts.

Judge by content quality, traffic figures if available, and user friendliness, not by toolbar PageRank. Just because a link appears in reverse citation results doesn’t mean it carries any weight.

Look at the shrunken list of inbound links and ask yourself where on the SERPs a search engine should rank your stuff based on these remaining votes. Frustrated? Learn the fine art of link building from an expert in the field.




Categorizing posts with blogger (rant)

Google knows everything about AJAX. Why the heck can’t I assign categories to old posts without hassle? “Edit posts - change number of listed posts - scroll down - edit - scroll down - choose/enter categories - publish - repeat” is just seven full page reloads/actions too many. On a slow DSL connection this archaic procedure drives me nuts.

Dear readers, when you click on “Labels” you most probably won’t find all related posts :( I’m adding categories whenever I update an old post, but UI flaws keep me from categorizing the whole archive. Sorry.



