Danny Sullivan did not strip for Matt Cutts

Nope, this is not recycled news. I’m not referring to Matt asking Danny to strip off his business suit, although the video is really funny. I want to comment on something Matt didn’t say recently, but promised to do soon (again).

Danny Sullivan stripped perfectly legit code from Search Engine Land because he was accused of being a spammer, although the CSS code in question is in no way deceitful.

StandardZilla slams poor Tamar for just reporting a WebProWorld thread, but does an excellent job of explaining why image replacement is not search engine spam but a sound thing to do. Google’s recently updated guidelines need to state more clearly that optimizing for particular user agents is not considered deceitful cloaking per se. This would prevent Danny from stripping (code), not for Matt or Google, but for lurid assclowns producing canards.
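For reference, the kind of CSS image replacement at issue looks roughly like this — a minimal sketch, where the class name and image path are my own placeholders. The heading’s link text is moved off-screen and a logo image showing the exact same words is displayed in its place, so screen readers, text browsers, and search engine crawlers all get identical content:

```html
<!-- Sketch of a classic (Phark-style) image replacement.
     "site-name" and "logo.png" are made-up placeholders. -->
<style>
  h1.site-name a {
    display: block;
    width: 300px;
    height: 80px;
    background: url(logo.png) no-repeat; /* image shows the same words */
    text-indent: -9999px;  /* shoves the text off-screen, not display:none */
    overflow: hidden;
  }
</style>

<h1 class="site-name"><a href="/">Search Engine Land</a></h1>
```

The point is that nothing is hidden with deceitful intent: the off-screen text and the image say the same thing and link to the same destination, which is exactly what accessibility-minded designers recommend.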




5 Comments to "Danny Sullivan did not strip for Matt Cutts"

  1. Craig on 7 June, 2007  #link

    Lurid Assclowns is a pretty good description.

    A better one might be know-nothing, self-appointed Google Stormtroopers following imagined orders.

    Danny Sullivan supposedly having to remove a valid image replacement technique is about the most ridiculous thing I have heard of in a long time.

    Next thing you know, having a keyword on your page more than once, or even just once, will be considered “bad” because some people try to stuff keywords.

    I guess we give them an A+ for reading but an F- for reading comprehension.

  2. Webnauts on 5 February, 2008  #link

    Lurid Assclowns is a pretty good description? A better one might be know-nothing, self-appointed Google Stormtroopers following imagined orders?

    I am not sure about that. Why?

    Danny Sullivan supposedly having to remove a valid image replacement technique…

    Danny himself claimed that it was not a valid image replacement: WPW

    …is about the most ridiculous thing I have heard of in a long time.

    I think you might want to reconsider that. Don’t you think?

    I guess we give them an A for reading but an F- for reading comprehension.

    Don’t you think you should update your gradings?

  3. Sebastian on 5 February, 2008  #link

    Webnauts, Craig’s gradings are spot on. Judging from Google’s Webmaster guidelines, there was nothing wrong with the linked heading behind the image. Both showed the same contents, and both were linked to the same destination. There was no keyword stuffing involved.

  4. Webnauts on 5 February, 2008  #link

    Craig’s gradings are spot on? Hm…

    I am fully aware of such kind of image replacement techniques, as I also use them myself to enhance accessibility where necessary.

    Susan Moska at Google stated:

    As the Guidelines say, focus on intent. If you’re using CSS techniques purely to improve your users’ experience and/or accessibility, you
    shouldn’t need to worry. One good way to keep it on the up-and-up (if you’re replacing text w/ images) is to make sure the text you’re hiding is being replaced by an image with the exact same text.

    Google Groups

    So, it’s not the image replacement method that is in question. Isn’t it a fact that each page should have a unique H1 tag? Doesn’t Google see using the same H1 tag site-wide as a spamdexing method? And using for example key phrases like “Search Engine …”?

    And after all, if that was not a spamdexing method, why did Danny claim that it was a mistake by his designer, and why did he get rid of it?

    Did I miss something?

  5. Sebastian on 5 February, 2008  #link

    Webnauts, what you miss is:

    The text link and the image came with the same text, and the same link destination. “Search Engine Land” is the site’s name, not a stuffed keyword phrase.

    Site-wide H1 tags are not spammy, and by no means a spamdexing method. Also, on-page optimization techniques like keyword stuffing in H elements don’t fit the definition of spamdexing. You’re spamdexing when you flood search indexes with gazillions of doorway pages. SEL never hosted a single DWP.

    It’s not a fact that every page out there must have a unique H1 tag. Yeah, you said “should”, and you’re right, but that’s not the point. Look at 90+ percent of all WordPress blogs out there that use a site-wide H1 tag for formatting the blog’s name, often in a way that a logo image “obfuscates” the text link in the H1 element. Search engines have learned that the unique page title that corresponds to the TITLE element in HEAD can be found in a H2 or H3 element, or even in a bolded P element or so. Nothing spammy about that. Of course it’s somewhat lazy to stick with crappy blog templates that use H elements for formatting purposes, but why should bloggers suffer from BS developed by silly|lazy|ignorant|… theme designers?

    Danny is way too nice a guy. I would have killed such clueless idiots accusing me of spamming for no apparent reason. IIRC he changed this totally legit piece of code just because he was accused of spamming; the method would have been somewhat fishy only if he had stuffed the H1 element with keywords. In fact, he didn’t violate any of Google’s guidelines. He called it a “mistake” because the method, applied with deceitful intent, would be spammy, and Search Engine Land shouldn’t make use of code that isn’t categorically “clean”.
