5 Reasons why I blog

So since Matt Cutts, tagged by Vanessa Fox, cat-tagged me 5 times ;) I add my piece.

    Napping cats don't listen
  1. Well, I’ve started this blog because every dog and his grandpa blogs, but the actual reason was that I couldn’t convince my beloved old cat to listen to my rants any more. Sadly my old comrade died years ago at the age of 15, leaving me alone with a gang of two-legged monsters rampaging in house and garden.
  2. Since then I’ve used my blogs for kidding, bollocks, and other stuff not suitable for more or less static sites where I publish more seriously. However, I’ve scraped some wholehearted posts from the blog to put them on the consulting platform, because that site is way more popular. Vice versa, I’ve announced my other articles and projects here. This blog is somewhat of a playground to test the waters and concurrently a speaking tube. I still find it difficult to do that on another platform; the timely character of blogging makes it easy to bury half-baked things.
  3. Every now and then I write an open letter to Google, for example my series of pleas to revamp rel=nofollow. Perhaps a googler is listening ;)
    Also, a blog is a neat instrument to get the attention of folks who don’t seem to listen.
  4. Frankly I like to share ideas and knowledge. Blogging is the perfect platform to raise rumors or myths too. Also, writing helps me structure my thoughts; this works even better in a foreign language.
  5. Last but not least I use my blog as a reference. While providing Google user support I sometimes just drop a link, particularly as an answer to repetitive questions. By the way, Google’s Webmaster Forum is a nice place to chase SEO tidbits straight from the horse’s mouth.

Although I admit I’ve somewhat tag-baited my way in here, I’m tagging you:
Thu Tu
John Müller
John Honeck
Jim Boykin
Gurtie & Chris




Four reasons to get tanked on Google’s SERPs

You know, I find “My Google traffic dropped all of a sudden - I didn’t change anything - I did nothing wrong” threads fascinating. Especially when they’re posted with URLs on non-widgetized boards. Sometimes I submit an opinion, although the questioners usually don’t like my replies, but more often I just investigate the case for educational purposes.

Abstracting from a fair amount of tanked sites, I’d like to share a few of my head notes, or theses, as food for thought. I’ve tried to keep these as generalized as possible, so please don’t blame me for the lack of detailed explanations.

  1. Reviews and descriptions ordered by product category, product line, or other groupings of similar products tend to rephrase each other semantically, that is in form and content. Be careful when it comes to money-making areas like travel or real estate. Stress unique selling points, non-shared attributes or utilizations, localize properly, and make sure reviews and descriptions don’t get spread in full on crawlable pages, neither internally nor externally.
  2. Huge clusters of property/characteristic/feature lists under analogical headings, even unstructured ones, may raise a flag when the amount of applicable attributes is finite and values are rather similar, with just a few of them totally different or mere expressions of trite localization.
  3. The lack of non-commercial outgoing links on pages plastered with ads of any kind, or on pages at the very bottom of the internal linking hierarchy, may raise a flag. Nofollow’ing, redirecting or iFraming affiliate/commercial links doesn’t prevent you from breeding artificial page profiles. Adding unrelated internal links to the navigation doesn’t help. Adding Wikipedia links in masses doesn’t help. Providing unique textual content and linking to authorities within the content does help.
  4. Strong and steep hierarchical internal/navigational linkage without relevant crosslinks and topical between-the-levels linkage looks artificial, especially when the site in question lacks deep links. Look at the ratio of home page links vs. deep links to interior pages (see the sketch below this list). Rethink the information architecture and structuring.
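Just to make the deep-link check from point 4 concrete, here’s a rough sketch; the URL list and the cleanup rules are my own assumptions, not anything a backlink report hands you in exactly this form:

function deepLinkRatio(inboundTargets, homeUrl) {
  // Count inbound link targets hitting the home page vs. interior pages.
  var home = 0, deep = 0;
  for (var i = 0; i < inboundTargets.length; i++) {
    // strip query strings, fragments, and trailing slashes before comparing
    var url = inboundTargets[i].replace(/[?#].*$/, '').replace(/\/$/, '');
    if (url === homeUrl.replace(/\/$/, '')) home++; else deep++;
  }
  return { home: home, deep: deep, deepShare: deep / (home + deep || 1) };
}

// Hypothetical sample: two links to the home page, one deep link.
var sample = [
  'http://www.example.com/',
  'http://www.example.com/',
  'http://www.example.com/widgets/blue-widget.html'
];
// deepLinkRatio(sample, 'http://www.example.com') -> { home: 2, deep: 1, deepShare: 0.33 }

A site where nearly every inbound link hits the home page looks “steep” in exactly the sense described above.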

Take that as a call for antithesis or just food for thought. And keep in mind that although there might be no recent structural/major/SEO/… on-site changes, perhaps Google just changed her judgement on the ancient stuff that has been ranking forever, and/or has just changed the ability of existing inbound links to pass weight. Nothing’s set in stone. Not even rankings.





Link monkey business is not worth a whoop

Old news, pros move on educating the great unlinked.

A tremendous amount of businesses maintaining a Web site still swap links in masses with every dog and his fleas. Serious sites join link exchange scams to gain links from every gambling spammer out there. Unscrupulous Web designers and low-life advisors put gazillions of businesses at risk. Eventually the site owners pop up in Google’s help forum wondering why the heck they lost their rankings despite their emboldening toolbar PageRank. Told to dump all their links pages and to file a reinclusion request, they may do so, but cutting one’s losses short term is not the way the cookie crumbles with Google. The consequences of listening to bad SEO advice are often layoffs or even going bust.

In this context a thread titled “Do the companies need to hire a SEO to get in top position?” asks roughly the right question but may irritate site owners even more. Their amateurish Web designer offering SEO services obviously got their site banned or at least heavily penalized by Google. Asking for help in forums, they get contradictory SEO advice. Google’s take on SEO firms is more or less a plain warning. Too many scams sail under the SEO flag, and it seems there’s no such thing as reliable SEO advice for free on the net.

However, the answer to the question is truly “yes“. It’s better to see an SEO before the rankings crash. Unfortunately, SEO is not a yellow pages category, and every clown can offer crappy SEO services. Places like SEO Consultants and honest recommendations get you the top-notch SEOs, but usually the small business owner can’t afford their services. Asking fellow online businesses for their SEO partner may lead to a scammer who is still beloved because Google has not yet spotted and delisted his work. Kinda dilemma, huh?

Possible loophole: once you’ve got a recommendation for an SEO-skilled Webmaster or SEO expert from somebody attending a meeting at the local chamber of commerce, post that fellow’s site to the forums and ask for signs of destructive SEO work. That should give you an indication of trustworthiness.





Supplemental-Only

Nice closing words on “my stuff went supplemental” from JohnWeb.

Applying simplified conclusions to a complex SEO question reveals 20% of the truth, whilst the other 80% is just not worth discussing because the effort necessary to analyze one more percent equals the whole 20% analysis. The alternative is working with 20% reasonable conclusions plus 80% common sense.

Unfortunately, common sense is not as common as you might think. Just count the supplemental-threads across the board, then search for words of wisdom. Sigh.

Update: Read Matt’s Google Hell




Che Guevara of Search

I just can’t step away from the keyboard … who’s this well known guy?
Che Guevara





Follow-up on "Google penalizes Erol stores"

Background: these three posts on Google penalizing e-commerce sites.

Erol has contacted me and we will discuss the technical issues over the next days or maybe weeks. I understand this as a positive signal, especially because my previous impression was that Erol is not willing to listen to constructive criticism, regardless of Google’s shot across the bow (more on that later). We agreed that before we come to the real (SEO) issues it’s a good idea to state a few points made in my previous posts more precisely. In the following I quote parts of Erol’s emails with permission:

Your blog has made for interesting reading but the first point I would like to raise with you is about the tone of your comments, not necessarily the comments themselves.

Question of personal style, point taken.

Your article entitled ‘Why eCommerce Systems Suck‘, dated March 12th, includes specific reference to EROL and your opinion of its SEO capability. Under such a generic title for an article, readers should expect to read about other shopping cart systems and any opinion you may care to share about them. In particular, the points you raise about other elements of SEO in the same article, (’Google doesn’t crawl search results’, navigation being ‘POST created results not crawlable’) are cited as examples of ways other shopping carts work badly in reference to SEO - importantly, this is NOT the way EROL stores work. Yet, because you do not include any other cart references by name or exclude EROL from these specific points, the whole article reads as if it is entirely aimed at EROL software and none others.

Indeed, that’s not fair. Navigation solely based on uncrawlable search results without crawler shortcuts, or sheer POST-generated results, are definitely not issues I’ve stumbled upon while investigating penalized Erol-driven online stores. Google’s problem with Erol-driven stores is client-sided cloaking without malicious intent. I’ve updated the post to make that clear.

Your comment in another article, ‘Beware of the narrow-minded coders‘ dated 26 March where you state: “I’ve used the case [EROL] as an example of a nice shopping cart coming with destructive SEO.” So by this I understand that your opinion is EROL is actually ‘a nice shopping cart’ but it’s SEO capabilities could be better. Yet your articles read through as EROL is generally bad all round. Your original article should surely be titled “Why eCommerce Systems Suck at SEO” and take a more rounded approach to shopping cart SEO capabilities, not merely “Why eCommerce Systems Suck”? This may seem a trivial point to you, but how it reflects overall on our product and clouds it’s capability to perform its main function (provide an online ecommerce solution) is really what concerns me.

Indeed, I meant that Erol is a nice shopping cart lacking SEO capabilities as long as the major SEO issues don’t get addressed ASAP. And I mean in the current version, which clearly violates Google’s quality guidelines. From what I’ve read in the meantime, the next version, to be released in 6 months or so, should eliminate the two major flaws with regard to search engine compatibility. I’ve changed the post’s title; the suggestion makes sense to me too.

I do not enjoy the Google.co.uk traffic from search terms like “Erol sucks” or “Erol is crap” because that’s simply not true. As I said before, I think that Erol is well-rounded software nicely supporting the business processes it’s designed for, and the many store owners using Erol I’ve communicated with recently all tell me that too.

I noted with interest that your original article ‘Why eCommerce Systems Suck’ was dated 12th March. Coincidentally, this was the date Google began to re-index EROL stores following the Google update, so I presume that your article was originally written following the threads on the Google webmaster forums etc. prior to the 12th March where you had, no doubt, been answering questions for some of our customers about their de-listing during the update. You appear to add extra updates and information in your blogs but, disappointingly, you have not seen fit to include the fact that EROL stores are being re-listed in any update to your blog so, once again, the article reads as though all EROL stores have been de-listed completely, never to be seen again.

With all respect, nope. Google did not reindex Erol-driven pages; Google had just lifted a “yellow card” penalty for a few sites. That is not carte blanche but, on the contrary, Google’s last warning before the site in question gets the “red card”, that is a full ban lasting at least a couple of months or even longer. As said before, it means absolutely nothing when Google crawls penalized sites or when a couple of pages reappear on the SERPs. Here is the official statement: “Google might also choose to give a site a ‘yellow card’ so that the site can not be found in the index for a short time. However, if a webmaster ignores this signal, then a ‘red card’ with a longer-lasting effect might follow.”
(Yellow / red cards: soccer terminology, yellow is a warning and red the sending-off.)

I found your comments about our business preferring “a few fast bucks”, suggesting we are driven by “greed” and calling our customers “victims” particularly distasteful. Especially the latter, because you infer that we have deliberately set out to create software that is not capable of performing its function and/or not capable of being listed in the search engines and that we have deliberately done this in pursuit of monetary gain at the expense of reputation and our customers. These remarks I really do find offensive and politely ask that they be removed or changed. In your article “Google deindexing Erol driven ecommerce sites” on March 23rd, you actually state that “the standard Erol content presentation is just amateurish, not caused by deceitful intent”. So which is it to be - are we deceitful, greedy, victimising capitalists, or just amateurish and without deceitful intent? I support your rights to your opinions on the technical proficiency of our product for SEO, but I certainly do not support your rights to your opinions of our company and its ethics which border on slander and, at the very least, are completely unprofessional from someone who is positioning themselves as just that - an SEO professional.

To summarise, your points of view are not the problem, but the tone and language with which they are presented and I sincerely hope you will see fit to moderate these entries.

C’mon, now you’re getting polemic ;) In this post I’ve admitted to being polemic to bring my point home, and in the very first post on the topic I clearly stated that my intention was not to slander Erol. However, since you’ve agreed to an open discussion of the SEO flaws I think it’s no longer suitable to call your customers victims, so I’ve changed that. Also, in my previous post I’ll insert a link near “greed” and “fast bucks” pointing to this paragraph to make it absolutely clear that I did not mean what you insinuate when I wrote:

Ignorance is no excuse […] Well, it seems to me that Erol prefers a few fast bucks over satisfied customers, thus I fear they will not tell their customers the truth. Actually, they simply don’t get it. However, I don’t care whether their intention to prevaricate is greed or ignorance, I really don’t know, but all the store operators suffering from Google’s penalties deserve the information.

Actually, I still stand by my provoking comments because at the time they perfectly described the impression you had created with your actions, or rather your lack of fitting activities, in public.

  1. Critical customers asking in your support forums whether the loss of Google traffic might be caused by the way your software handles HTML output were put down and censored.
  2. Your public answers to worried customers were plain wrong, SEO-wise. Instead of “we take your hints seriously and will examine whether JavaScript redirects may cause Google penalties or not” you said that search engines index cloaking pages just fine, that Googlebot crawling penalized sites is a good sign, and that all the mess is kinda Google hiccup. At this point the truth had been out long enough, so your most probably unintended disinformation worried a number of your customers, and gave folks like me the impression that you’re not willing to undertake the necessary steps.
  3. Offering SEO services yourself, as well as forum talks praising Erol’s SEO experts, doesn’t put you in a “we just make great shopping cart software and are not responsible for search engine weaknesses” position. Frankly, that’s not conceivable as responsible management of customer expectations. It’s great that your next version will dump frames and JavaScript redirects, but that’s a bit too late in the eyes of your customers, and way too late from an SEO perspective, because Google never permitted the use of JavaScript redirects, and all the disadvantages of frames have been public knowledge since the glory days of Altavista, Excite and Infoseek, long before Google took over search.

To set the record straight: I don’t think and never thought that you’ve greedily or deliberately put your customers at risk in pursuit of monetary gain. You’ve just ignored Google’s guidelines and best practices of Web development too long, but –as the sub-title of my previous post hints– ignorance is no excuse.

Now that we’ve handled the public relations stuff, I’ll look into the remaining information Erol sent over, hoping that I’ll be able to provide some reasonable input in the best interest of Erol’s customers.





Beware of the narrow-minded coders

or Ignorance is no excuse

Long-winded story on SEO-ignorant pommy coders putting their customers at risk. Hop away if e-commerce software vs. SEO dramas don’t thrill you.

Recently I’ve answered a “Why did Google deindex my pages” question in Google’s Webmaster Forum. It turned out that the underlying shopping cart software (EROL) maintained somewhat static pages as spider fodder, which redirect human visitors to another URL serving the same contents client-sided. A silly thing to do, but pretty common for shopping carts. I’ve used the case as an example of a nice shopping cart coming with destructive SEO in a post on flawed shopping carts in general.

Day by day other site owners operating Erol-driven online shops popped up in the Google Groups or emailed me directly, so I realized that there is a darn widespread problem involving a very popular UK-based shopping cart software responsible for Google cloaking penalties. From my contacts I knew that Erol’s software engineers and self-appointed SEO experts believe in weird SEO theories and don’t consider that their software architecture itself could be the cause of the mess. So I wrote a follow-up addressing Erol directly. Google penalizes Erol-driven e-commerce sites explains Google’s take on cloaking and sneaky JavaScript redirects to Erol and its customers.

My initial post got linked and discussed in Erol’s support forum and kept my blog stats counter busy over the weekend. Accused of posting crap, I showed up and posted a short summary over there:

Howdy, I’m the author of the blog post you’re discussing here: Why eCommerce systems suck

As for crap or not crap, judge yourself. This blog post was addressed to ecommerce systems in general. Erol was mentioned as an example of a nice shopping cart coming with destructive SEO. To avoid more misunderstandings and to stress the issues Google has with Erol’s JavaScript redirects, I’ve posted a follow-up: Google deindexing Erol-driven ecommerce sites.

This post contains related quotes from Matt Cutts, head of Google’s web spam team, and Google’s quality guidelines. I guess that piece should bring my point home:

If you’re keen on search engine traffic then do not deliver one page to the crawlers and another page to users. Redirecting to another URL which serves the same contents client sided gives Google an idea of intent, but honest intent is not a permission to cloak. Google says JS redirects are against the guidelines, so don’t cloak. It’s that simple.

If you’ve questions, post a comment on my blog or drop me a line. Thanks for listening

Sebastian

Next, the links to this blog were edited out and Erol posted a longish but pointless charade. Click the link to read it in full; summarizing, it tells the worried Erol victims that Google has no clue at all, that frames and JS redirects are great for online shops, and that waiting for the next software release providing meaningful URLs will fix everything. Ok, that’s polemic, so here are at least a few quotes:

[…] A number of people have been asking for a little reassurance on the fact that EROL’s x.html pages are getting listed by Google. Below is a list of keyword phrases, with the number of competing pages and the x.html page that gets listed [4 examples provided].
[…]
EROL does use frames to display the store in the browser, however all the individual pages generated and uploaded by EROL are static HTML pages (x.html pages) that can be optimised for search engines. These pages are spidered and indexed by the search engines. Each of these x.html pages have a redirect that loads the page into the store frameset automatically when the page is requested.
[…]
EROL is a JavaScript shopping cart, however all the links within the store (links to other EROL pages) that are added using EROL Link Items are written to the static HTML pages as a standard <a href=”"> links - not a JavaScript link. This helps the search engines spider other pages in your store.

The ’sneaky re-directs’ being discussed most likely relate to an older SEO technique used by some companies to auto-forward from an SEO-optimised page/URL to the actual URL the site-owner wants you to see.

EROL doesn’t do this - EROL’s page load actually works more like an include than the redirect mentioned above. In its raw form, the ‘x123.html’ page carries visible content, readable by the search engines. In it’s rendered form, the page loads the same content but the JavaScript rewrites the rendered page to include page and product layout attributes and to load the frameset. You are never redirected to another html page or URL. [Not true, the JS function displayPage() changes the location of all pages indexed by Google, and property names like ‘hidepage’ speak for themselves. Example: x999.html redirects to erol.html#999×0&&]
[…]
We have, for the past 6 months, been working with search engine optimisation experts to help update the code that EROL writes to the web page, making it even more search engine friendly.

As part of the recommendations suggested by the SEO experts, pages names will become more search engine friendly, moving way from page names such as ‘x123.hml’ to ‘my-product-page-123.html’. […]

Still in a friendly and helpful mood, I wrote a reply:

With all respect, if I understand your post correctly that’s not going to solve the problem.

As long as a crawlable URL like http://www.example.com/x123.html or http://www.example.com/product-name-123.html resolves to http://www.example.com/erol.html#123×0&& or whatever, that’s a violation of Google’s quality guidelines. Whether you call that redirect sneaky (Google’s language) or not is not the point. It’s Google’s search engine, so their rules apply. These rules state clearly that pages which do a JS redirect to another URL (on the same server or not, delivering the same contents or not) do not get indexed, or, if discovered later on, get deindexed.

The fact that many x-pages are still indexed and may even rank for their targeted keywords means nothing. Google cannot discover and delist all pages utilizing a particular disliked technique overnight, and never has. Sometimes that’s a process lasting months or even years.

The problem is that these redirects put your customers at risk. Again, Google didn’t change its Webmaster guidelines, which have forbidden JS redirects since the stone age; it has recently changed its ability to discover violations in the search index. Google does frequently improve its algos, so please don’t expect to get away with it. Quite the opposite: expect each and every page with these redirects to vanish over the years.

A good approach to avoid Google’s cloaking penalties is utilizing one single URL as spider fodder as well as content presentation to browsers. When a Googler loads such a page with a browser and compares the URL to the spidered one, you get away with nearly everything CSS and JS can accomplish — as long as the URLs are identical. If OTOH the JS code changes the location you’re toast.

Posting this response failed because Erol’s forum admin banned me after censoring my previous post. By the way, according to posts outside their sphere and from what I’ve seen watching the discussion, they censor posts of customers too. Well, that’s fine with me since it’s Erol’s forum and they make the rules. However, still eager to help, I emailed my reply to Erol, and to Erol customers asking for my take on Erol’s final statement.
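For readers who prefer the gist of that reply in code form, here is a minimal sketch; the function names, class names and URLs below are made up for illustration and are not Erol’s actual code. The first pattern changes the location of the crawled URL, which is exactly what Google flags; the second enhances the very same URL client-sided and never touches the location.

// What gets flagged: the crawlable page x123.html immediately sends
// browsers to a different URL, so crawlers and users see different places.
function sneakyLoad() {
  window.location.replace('/erol.html#123'); // Google indexed x123.html, users never stay on it
}

// What keeps crawler and user on one URL: enhance the page in place once
// the DOM is ready; the location never changes.
function friendlyLoad() {
  var nodes = document.querySelectorAll('.plain-product-listing');
  for (var i = 0; i < nodes.length; i++) {
    nodes[i].className += ' fancy-product-listing'; // restyle in place, don't relocate
  }
}
document.addEventListener('DOMContentLoaded', friendlyLoad);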

You ask why I post this long-winded stuff? Well, it seems to me that Erol prefers a few fast bucks over satisfied customers, thus I fear they will not tell their customers the truth. Actually, they simply don’t get it. However, I don’t care whether their intention to prevaricate is greed or ignorance, I really don’t know, but all the store operators suffering from Google’s penalties deserve the information. A few of them have subscribed to my feed, so I hope my message gets spread. Continuation





Good Bye Nofollow: How to DOfollow comments with blogger

Andy Beard pointed me to a neat procedure to DOFOLLOW links in blog comments with blogger.com: Remove Nofollow Attribute on Blogger.com Blog Comments:

Edit the template’s HTML and remove “rel=’nofollow’” in this line:
<a expr:href='data:comment.authorUrl' rel='nofollow'><data:comment.author/></a>
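After the edit, that line should then read like this (assuming the rest of the template is left untouched):
<a expr:href='data:comment.authorUrl'><data:comment.author/></a>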

Now I’ve got a good reason to upgrade the software. Sadly I’ve hacked the template so badly that I doubt it will work with the new version :(




Google deindexing Erol driven ecommerce sites

Follow-up post - see why e-commerce software sucks.

Erol is shopping cart software invented by DreamTeam, a UK-based Web design firm. One of its core features is the on-the-fly conversion of crawlable HTML pages to fancy JS-driven pages. It looks great in a JavaScript-enabled browser, and ugly without client-sided formatting.

Erol, which offers not exactly cheap SEO services itself, claims that it is perfectly OK to show Googlebot a content page without gimmicks, whilst human users get redirected to another URL.

Erol victims suffer from deindexing of all Erol-driven pages; Google just keeps those pages in the index which do not contain Erol’s JS code. Considering how many online shops make use of Erol software in the UK, this massive traffic drop may have a visible impact on the gross national product ;) … Ok, sorry, kidding with so many businesses at risk does not amuse the Queen.

Dear “SEO experts” at Erol, could you please read Google’s quality guidelines:

· Don’t […] present different content to search engines than you display to users, which is commonly referred to as “cloaking.”
· Don’t employ cloaking or sneaky redirects.
· If a site doesn’t meet our quality guidelines, it may be blocked from the index.

Google did your customers a favour by not banning their whole sites, probably because the standard Erol content presentation technique is (SEO-wise) just amateurish, not caused by deceitful intent. So please stop whining

We are currently still investigating the recent changes Google have made which have caused some drop-off in results for some EROL stores. It is as a result of the changes by Google, rather than a change we have made in the EROL code that some sites have dropped. We are investigating all possible reasons for the changes affecting some EROL stores and we will, of course, feedback any definitive answers and solutions as soon as possible.

and listen to your customers stating

Hey Erol Support
Maybe you should investigate doorway pages with sneaky redirects? I’ve heard that they might cause “issues” such as full bans.

Tell your victims customers the truth, they deserve it.

Telling your customers that Googlebot crawling their redirecting pages will soon result in those pages getting reindexed is plain false, by the way. Just because the crawler fetches a questionable page doesn’t mean that the indexing process reinstates its accessibility for the query engine. Googlebot is just checking whether the sneaky JavaScript code was removed or not.

Go back to the whiteboard. See a professional SEO. Apply common sense. Develop a clean user interface pleasing human users and search engine robots alike. Without frames, sneaky or otherwise superfluous JavaScript redirects, and amateurish BS like that. In the meantime provide help and workarounds (for example a tutorial like “How to build an Erol shopping site without page loading messages which will result in search engine penalties”), otherwise you won’t need the revamp because your customer base will shrink to zilch.

Update: It seems that there’s a patch available. In Erol’s support forum, member Craig Bradshaw posts “Erols new patch and instructions clearly tell customers not to use the page loading messages as these are no longer used by the software.”


Related links:
Matt Cutts August 19, 2005: “If you make lots of pages, don’t put JavaScript redirects on all of them … of course we’re working on better algorithmic solutions as well. In fact, I’ll issue a small weather report: I would not recommend using sneaky JavaScript redirects. Your domains might get rained on in the near future.”
Matt Cutts December 11, 2005: “A sneaky redirect is typically used to show one page to a search engine, but as soon as a user lands on the page, they get a JavaScript or other technique which redirects them to a completely different page.”
Matt Cutts September 18, 2005: “If […] you employ […] things outside Google’s guidelines, and your site has taken a precipitous drop recently, you may have a spam penalty. A reinclusion request asks Google to remove any potential spam penalty. … Are there […] pages that do a JavaScript or some other redirect to a different page? … Whatever you find that you think may have been against Google’s guidelines, correct or remove those pages. … I’d recommend giving a short explanation of what happened from your perspective: what actions may have led to any penalties and any corrective action that you’ve taken to prevent any spam in the future.”
Matt Cutts July 31, 2006: “I’m talking about JavaScript redirects used in a way to show users and search engines different content. You could also cloak and then use (meta refresh, 301/302) to be sneaky.”
Matt Cutts December 27, 2006 and December 28, 2006: “We have written about sneaky redirects in our webmaster guidelines for years. The specific part is ‘Don’t employ cloaking or sneaky redirects.’ We make our webmaster guidelines available in over 10 different languages … Ultimately, you are responsible for your own site. If a piece of shopping cart code put loads of white text on a white background, you are still responsible for your site. In fact, we’ve taken action on cases like that in the past. … If for example I did a search […] and saw a bunch of pages […], and when I clicked on one, I immediately got whisked away to a completely different url, that would be setting off alarm bells ringing in my head. … And personally, I’d be talking to the webshop that set that up (to see why on earth someone would put up pages like that) more than talking to the search engine.”

Matt Cutts heads Google’s Web spam team and has discussed these issues since the stone age in many places. Look at the dates above: penalties for cloaking / JS redirects are not a new thing. The answer to “It is as a result of the changes by Google, rather than a change we have made in the EROL code that some sites have dropped.” (Erol statement) is: just because you’ve got away with it for so long doesn’t mean that JS redirects are fine with Google. The cause of the mess is not a recent change of code, it’s the architecture itself that is considered “cloaking / sneaky redirect” by Google. Google has recently improved its automated detection of client-sided redirects, not its guidelines. Considering that both Erol-created pages (the crawlable static page and the contents served by the URL invoked by the JS redirect) present similar contents, Google will have sympathy for all reinclusion requests, provided that the sites in question were made squeaky-clean before.




Google’s Anchor Text Reports Encourage Spamming and Scraping

Despite the title bait, Google’s new Webmaster toy is an interesting tool, thanks! I’ll look at it from a Webmaster’s perspective, all SEO hats in the wardrobe.

Danny tells us more, for example the data source:

A few more details about the anchor text data. First, it comes only from external links to your site. Anchor text you use on your own site isn’t counted.
Second, links to any subdomains you have are NOT included in the data.

From a quick look I doubted that Google filters internal links properly. So I’ve checked a few sites and found internal anchor text leading the new anchor text stats. Well, it’s not unusual that folks copy and paste links including anchor text. Using page titles as anchor text is also business as usual. But why the heck do page titles and shortcuts from vertical menu bars appear to be the most prominent external anchor text? With large sites having tons of inbound links, that’s not so easy to investigate.

Thus I’ve looked at a tiny site of mine, this blog. It carries a few recent bollocks posts which were so useless that nobody bothered linking, I thought. Well, partly that’s the case, there were no links by humans, but enough fully automated links to get Ms. Googlebot’s attention. Thanks to scrapers, RSS aggregators, forums linking the author’s recent blog posts and so on, everything on this planet gets linked, so duplicated post titles make it into the anchor text stats.

Back to larger sites, I found out that scraper sites and indexed search results were responsible for the extremely misleading ordering of Google’s anchor text excerpts. Both page types should not get indexed in the first place, and it’s ridiculous that crappy, irrelevantly accumulated data dilutes a well-meant effort to report inbound linkage to Webmasters.

Unweighted ordering of inbound anchor text by commonness, and limiting the number of listed phrases to 100, makes this report utterly useless for many sites, and what’s worse, it sends a strong but wrong signal. Importance, in the eye of the beholder, gets expressed by the top of an ordered list or result set.
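Here’s a toy illustration of that point (the anchors and the per-source trust scores are entirely made up, Google exposes nothing of the sort): ordering by raw count puts the scraped post title on top, while even a crude source weighting pushes the one human-given anchor text back to the front.

// Hypothetical inbound anchors: text plus a crude trust score per linking page.
var inbound = [
  { text: 'useless bollocks post title', trust: 0.01 }, // scraper
  { text: 'useless bollocks post title', trust: 0.01 }, // scraper
  { text: 'useless bollocks post title', trust: 0.01 }, // RSS aggregator
  { text: 'smart seo blog',              trust: 0.90 }  // an editorial link by a human
];

function rankAnchors(anchors, weighted) {
  var totals = {};
  anchors.forEach(function (a) {
    totals[a.text] = (totals[a.text] || 0) + (weighted ? a.trust : 1);
  });
  // sort anchor phrases by their accumulated score, highest first
  return Object.keys(totals).sort(function (x, y) { return totals[y] - totals[x]; });
}

// rankAnchors(inbound, false) -> [ 'useless bollocks post title', 'smart seo blog' ]
// rankAnchors(inbound, true)  -> [ 'smart seo blog', 'useless bollocks post title' ]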

Joe Webmaster, tracking down the sources of his top-10 inbound links, finds a shitload of low-life pages, thinks hey, linkage is after all a game of large numbers when Google says those links are most important, launches a gazillion doorways and joins every link farm out there. Bummer. Next week Joe Webmaster pops up in the Google Forum telling the world that his innocent site got tanked because he followed Google’s suggestions, and we’re bothered with just another huge but useless thread discussing whether scraper links can damage rankings or not, finally inventing the “buffy blonde girl pointy stick penalty” on page 128.

Roundtrip to the hat rack, SEO hat attached. I perfectly understand that Google is not keen on disclosing trust/quality/reputation scores. I can read these stats because I understand the anatomy (intention, data, methods, context), and perhaps I can even get something useful out of them. Explaining this to an impatient site owner who unwillingly bought the “link quality counts” argument but still believes that nofollow’ed links carry some mysterious weight because they appear in reversed citation results is a completely different story.

Dear Google, if you really can’t hand out information on link weighting by ordering inbound anchor text by trust or other signs of importance, then can you please at least filter out all the useless crap? This shouldn’t be that hard to accomplish, since even simple site-search queries for “scraping” reveal tons of definitely not index-worthy pages which already do not pass any search engine love to the link destinations. Thanks in advance!

When I write “Dear Google”, can Vanessa hear me? Please :)


Update: Check your anchor text stats page every now and then, don’t miss out on updates :)



