The Vanessa Fox Memorial

I was quite shocked when Vanessa told me that she’s leaving Google to join Zillow. That’s a big loss for Google, and a big loss for the Webmaster/SEO community relying on Google. And it’s a great enrichment for Zillow. I’m dead sure they can’t really imagine how lucky they are. And they’d better treat her very well, or Vanessa’s admirers will launch a firestorm which Rommel, Guderian, et al. couldn’t have dreamed of when they invented the blitz. Yep, at first sight that was sad news.

But it’s good news for Vanessa; she’s excited about “an all-new opportunity to work on the unique challenges of the vertical and local search space at Zillow”. I wish her all the best at Zillow, and I hope that this challenge will not morph her into an always-too-tired caffeine junkie (again) ;)

Back in 2005/2006, when I interviewed Vanessa on her pet sitemaps, her blogger profile said “technical writer in Kirkland” (from my POV an understatement); now she leaves Google as a prominent product manager, well known and loved by colleagues, SEOs and Webmasters around the globe. She created the Vanessa Fox Memorial aka “Google Webmaster Central” and handed her baby over to a great team she gathered and trained to make sure that Google’s opening to Webmasters evolves further. Regardless of her unclimbable Mount Email, Vanessa was always there to help, fix and clarify things, and open to suggestions even on minor details. She’s a gem, an admirable geek, a tough and lovable ideal of a Googler, and now a Zillower. Again, all the best, keep in touch, and

Thank You Vanessa!




Which Sebastian Foss is a spammer?

Obviously pissed by my post Fraud from the desk of Sebastian Foss, Sebastian Foss sent this email to Smart-IT-Consulting.com:

Remove your insults from your blog about my products and sites… as you may know promote-biz.net is not registered to my name or my company.. just look it up in some whois service. This is some spammer who took my software and is now selling it on his spammer websites. Im only selling my programs under their original .com domains and you did not receive any email from me since im only using doube-optin lists.

You may not know it - but insulting persons and spreading lies is under penalty.

Sebastian Foss
Sebastian Foss e-trinity Marketing Inc.
sebastian@etrinity-mail.com

Well, that’s my personal blog, and I’ve a professional opinion about the software Sebastian Foss sells; more on that later. It’s public knowledge that spammers register domains under several entities to obfuscate their activities. I’m not a fed, and I’m not willing to track down each and every multiple or virtual personality of a spammer, so I admit that there’s at least a slight possibility that the Sebastian Foss spamming my inbox from promote-biz.net is not the Sebastian Foss who wrote and sells the software promoted by the email spammer Sebastian Foss. Since I still receive email spam from the desk of Sebastian Foss at promote-biz.net, I think there’s no doubt that this Sebastian Foss is a spammer. Well, Sebastian Foss himself calls this Sebastian Foss a spammer, and so do I. Confused? So am I. I’ll update my other post to reflect that.

Now that we’ve covered the legal stuff, let’s look at the software from the desk of Sebastian Foss.

  • Blog Blaster claims to submit “ads” to 2,000,000 sites. Translation: Blog Blaster automatically submits promotional comments to 2 million blogs. The common description of this kind of “advertising” is comment spam.
    Sebastian Foss tells us that “Blog Blaster will automatically create thousands of links to your website - which will rank your website in a top 10 position!”. The common description of this link building technique is link spam.
    The sales pitch signed by Sebastian Foss explains “I used it [Blog Blaster] to promote my other website called ezinebroadcast.com and Blog Blaster produced thousands of links to ezinebroadcast.com - resulting in a #1 position in Google for the term ‘ezine advertising service’”. So I understand that Sebastian Foss admits that he is a comment spammer and a link spammer.
    I’d like to see the written permissions of 2,000,000 bloggers allowing Sebastian Foss and his customers to spam their blogs: “Advertising using Blog Blaster is 100% SPAM FREE advertising! You will never be accused of spamming. Your ads are submitted to blogs whose owners have agreed to receive your ads.” Laughable, and obviously a lie. Did Sebastian Foss remember that “spreading lies is under penalty”? Take care, Sebastian Foss!
  • Feed Blaster, with a very similar sales pitch, justifies coining the term feed spam. Also, it seems that FeedBlaster™ is a registered trademark of DigitalGrit Inc. And I don’t think that Microsoft, Sun and IBM are happy to spot their logos on Sebastian Foss’ site e-trinity Internetmarketing GmbH.
  • The Money License System aka Google Cash Machine seems to slip through a legal loophole. Maybe it’s not explicitly illegal to sell software built to trick Google AdWords respectively AdSense or ClickBank, but using it will result in account terminations and AFAIK legal actions too.
  • Instant Booster claims to spam search engines, and it does, according to many reports. The common term applied to those techniques is web spam.

All these domains (and there are countless more sites selling similar scams from the desk of Sebastian Foss) are registered by Sebastian Foss respectively his companies e-trinity Internetmarketing GmbH or e-trinity Marketing Inc.

He’s in the business of newsgroup spam, search engine spam, comment spam … probably there’s no target left out. Searching for Sebastian Foss scam and similar search terms leads to tons of rip-off reports.

He’s even too lazy to rephrase his sales pitches: click a few of the links provided above, then search for quoted phrases you saw in every sales pitch to get the big picture. All that may be legal in Germany, I couldn’t care less, but it’s not legit. Creating and selling software for the sole purpose of spamming makes the software vendor a spammer. And he’s proud of it. He openly admits that he uses his software to spam blogs, search engines, newsgroups and whatever. He may make use of affiliates and virtual entities who send out the email spam, perhaps he got screwed by a Chinese copycat selling his software via email spam, but is that relevant when the product itself is spammy?

What do you think, is every instance of Sebastian Foss a spammer? Feel free to vote in the comments.

Update 08/01/2007 Here is the next email from the desk of Sebastian Foss:

Hi,
thanks for the changes on your blog entry - however like i mentioned if you look up the domains which were advertised in the spam mails you will notice that they are not registered to me or my company. You can also see that visiting the sites you will see some guy took my products and is selling them for a lower price on his own websites where he is also copying all of my graphic files. The german police told me that they are receiving spam from your forms and that it goes directly to their trash… however please remove your entries about me from your blog - There is no sense in me selling my own products for a lower price on some cheap, stolen websites - if that would make sense then why do i have my own .com domains for my products ? I just want to make clear that im not sending out any spam mails - please get back to me.

Thanks,
Sebastian

Sebastian Foss
e-trinity Internetmarketing GmbH
sebastian@etrinity-mail.com

It deserves just a short reply:

It makes perfect sense to have an offshore clone in China selling the same outdated and pretty much questionable stuff a little cheaper. This clone can do that because, first, there are next to no costs like taxes and so on, and second, he does it by spamming my inbox on a daily basis, hence he probably sells a lot of the ‘borrowed’ stuff. Whether or not the multiple Sebastian Fosses are the same natural person is not my problem. I claim nothing, but leave it up to your speculation, dear reader, your common sense, and probability calculation.




Another way to implement a site search facility

Providing kick-ass navigation and product search is the key to success for e-commerce sites. Conversion rates highly depend on user-friendly UIs which enable the shopper to find the desired product with a context-sensitive search in combination with a few drill-down clicks on navigational links. Unfortunately, the built-in search as well as the navigation and site structure of most shopping carts simply suck. Every online store is different, hence findability must be customizable and very flexible.

I’ve seen online shops crawling their own product pages with a 3rd-party search engine script because the shopping cart’s search functionality was totally and utterly useless. Others put fantastic effort into self-made search facilities which perfectly implement real-life relations beyond the limitations of the e-commerce software’s data model, but need code tweaks for each and every featured product, special, or virtual shop assembling a particular niche from several product lines or whatever. Bugger.

Today I stumbled upon a very interesting approach which could become the holy grail for store owners suffering from crappy software. Progress invited me to discuss a product they’ve bought recently –EasyAsk– from a search geek’s perspective. Long story short, I was impressed. Without digging deep into the technology or reviewing implementations for weaknesses I think the idea behind that tool is promising.

Unfortunately, the EasyAsk Web site doesn’t provide solid technical and architectural information (I admit that I may have missed the tidbits within the promotional chatter), hence I’ll try to explain it from what I’ve gathered today. Progress EasyAsk is a natural language interface connecting users to data sources. Users are shoppers and staff. Data sources are (relational) databases, or data access layers (that is, a logical tier providing a standardized interface to different data pools like all sorts of databases, (Web) services, an enterprise service bus, flat files, XML documents and whatever).

The shopper can submit natural language queries like “yellow XS tops under 30 bucks”. The SRP is a page listing tops and similar garments under $30.00, size XS, illustrated with thumbnails of yellow tops and bustiers, linked to the product pages. If yellow tops in XS are sold out, EasyAsk recommends beige tops instead of delivering a sorry-page. And when a search query is submitted from a page listing suits, a search for “black leather belts” lists black leather belts for men. If the result set is too large and exceeds the limitations of one page, EasyAsk delivers drill-down lists of tags, categories and synonyms until the result set is viewable on one page. The context (category/tag tree) changes with each click and can be visualized, for example, as a bread crumb nav link.

Technically speaking, EasyAsk does not deal with the content presentation layer itself. It returns XML which can be used to create a completely new page with a POST/GET request, or it gets invoked as an AJAX request whose response just alters DOM objects to visualize the search results (way faster, but not exactly search engine friendly - that’s not a big deal because SERPs shouldn’t be crawlable at all). Performance is not an issue from what I’ve seen. EasyAsk caches everything so that the server doesn’t need to bother the hard disk. All points of failure (WRT performance issues) belong to the implementation, thus developing a well-thought-out software architecture is a must.
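
To make the integration idea concrete: EasyAsk’s actual response schema isn’t public, so the XML below is invented for illustration, but a front end consuming such a result set could look roughly like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical result set - EasyAsk's real element names are not documented
# in this post, so this schema is made up for illustration only.
response = """<results query="yellow XS tops under 30 bucks">
  <product sku="T-1001" price="24.90">Yellow cotton top</product>
  <product sku="T-1002" price="29.50">Lemon bustier</product>
</results>"""

def render_results(xml_text):
    """Turn the XML result set into lines a template engine could print."""
    root = ET.fromstring(xml_text)
    return [f'{p.get("sku")}: {p.text} (${p.get("price")})'
            for p in root.iter("product")]

for line in render_results(response):
    print(line)
# T-1001: Yellow cotton top ($24.90)
# T-1002: Lemon bustier ($29.50)
```

The same XML could just as well feed an AJAX handler that swaps out DOM nodes instead of rebuilding the whole page.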

Well, that’s neat, but where’s the USP? EasyAsk comes with a natural language (search) driven admin interface too. That means that product managers can define and retrieve everything (attributes, synonyms, relations, specials, price ranges, groupings …) using natural language. “Gimme gross sales of leather belts for men II/2007 compared to 2006” delivers a statistic, and “top is a synonym for bustier and the other way round” creates a relation. The admin interface runs in the Web browser, definitions can be submitted via forms, and all admin functions come with previews. Really neat. That reduces the workload of the IT dept. WRT ad-hoc queries as well as lots of structural change requests, and saves maintenance costs (Web design / Web development).

I’ve spotted a few weak points, though. For example, in the current version the user has to type in SKUs because there’s no selection box. Or meta data is stored in flat files, but that’s going to change too. There’s no real word stemming: EasyAsk handles singular/plural correctly and interprets “bigger” as “big”, or “xx-large” politically correctly as “plus”, but typos must be collected from the “searches without results” report and defined as synonyms. The visualization of concurrent or sequentially applied business rules is just rudimentary on preview pages in the admin interface, so currently it’s hard to track down why particular products get downranked respectively highlighted when more than one rule applies. Progress told me that they’ll make use of 3rd-party tools as well as in-house solutions to solve these issues in the near future - the integration of EasyAsk into the Progress landscape has just begun.

The definitions of business language / expected terms used by consumers, as well as business rules, are painless. EasyAsk has built-in mappings like color codes to common color names and vice versa, understands terms like “best selling” and “overstock”, and these definitions are easy to extend to match actual data structures and niche-specific everyday language.

Setting up the product needs consultancy (as a consultant I love that!). To get EasyAsk running, it must understand the structure of the customer’s data sources, respectively the methods provided to fetch data from various structured as well as unstructured sources. Once that’s configured, EasyAsk pulls (database) updates on schedule (daily, hourly, minutely or whatever). It caches all information needed to fulfill search requests, but goes back to the data source to fetch real-time data when the search query requires knowledge of not (yet) cached details. In the beginning such events must be dealt with, but after a (short) while EasyAsk should run smoothly without requiring much technical intervention (as a consultant I hate that, but the client’s IT department will love it).

Full disclosure: Progress didn’t pay me for this post. For attending the workshop I got two books (“Enterprise Service Bus” by David A. Chappell and “Getting Started with the SID” by John P. Reilly) and a free meal; travel expenses were not refunded. I did not test the software discussed myself (yet), so perhaps my statements (conclusions) are not accurate.




Blogger abuses rel-nofollow due to ignorance

I had planned a full upgrade of this blog to the newest Blogger version this weekend. The one and only reason for the upgrade was the idea that I perhaps could disable the auto-nofollow functionality in the comments. Well, what I found was a way to dofollow the author’s link by editing the <dl id='comments-block'> block, but I couldn’t figure out how to disable the auto-nofollow on embedded links.
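
For the record, the effect I was after is trivial to express in code. This sketch (my own; Blogger’s template engine works differently) shows what toggling the auto-nofollow on comment anchors boils down to:

```python
import re

def set_nofollow(html, nofollow):
    """Toggle rel="nofollow" on every anchor in a snippet of comment HTML.
    Toy version: assumes double-quoted attributes and no other rel values."""
    # Strip any existing rel="nofollow" first ...
    html = re.sub(r'\s+rel="nofollow"', "", html)
    if nofollow:
        # ... then re-add it right after each opening <a
        html = re.sub(r"<a\b", '<a rel="nofollow"', html)
    return html

author_link = '<a href="http://example.com/">a commenter</a>'
print(set_nofollow(author_link, True))
# <a rel="nofollow" href="http://example.com/">a commenter</a>
print(set_nofollow(set_nofollow(author_link, True), False))
# <a href="http://example.com/">a commenter</a>
```

A blogging platform that vouches for moderated comments would simply call this with nofollow=False once the blogger approves the comment.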

Considering the hassles of converting all the template hacks into the new format, and the risk of most probably losing the ability to edit code my way, I decided to stick with the old template. It just makes no sense for me to dofollow the author’s link, when a comment author’s links within the content get nofollow’ed automatically. Andy Beard and others will hate me now, so let me explain why I don’t move this blog to my own domain using a not that insane software like WordPress.

  • I own, respectively author on, various WordPress blogs. Google’s time to index for posts and updates from this blogspot thingy is 2-3 hours (Web search, not blog search). My WordPress blogs, even with higher PageRank, suffer from a way longer time to index.
  • I can’t afford the time to convert and redirect 150 posts to another blog.
  • I hope that Google/Blogger can implement reasonable change requests (most probably that’s just wishful thinking).

That said, WordPress is a way better software than Blogger. I’ll have to move this blog if Blogger is not able to fulfill at least my basic needs. I’ll explain below why I think that Blogger lacks any understanding of the rel-nofollow semantics. In fact, they throw nofollow crap on everything they get a hand on. It seems to me that they won’t stop jeopardizing the integrity of the Blogosphere (at least where they control the linkage) until they get bashed really hard by a Googler who understands what rel-nofollow is all about. I nominate Matt Cutts, who invented and evolved it, and who does not tolerate BS.

So here is my wishlist. I want (regardless of the template type!)

  • A checkbox “apply rel=nofollow to comment author links”
  • A checkbox “apply rel=nofollow to links within comment text”
  • To edit comments, for example to nofollow links myself, or to remove offensive language
  • A checkbox “apply rel=nofollow to links to label/search pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to label/search pages”
  • A checkbox “apply rel=nofollow to links to archive pages”
  • A checkbox “apply a robots meta tag ‘noindex,follow’ to archive pages”
  • A checkbox “apply rel=nofollow to backlink listings”

As for the comments functionality, I’d understand it if these options got disabled when comment moderation is set to off.

And here are the nofollow-bullshit examples.

  • When comment moderation and captchas are activated, why are comment author links as well as links within the comments nofollow’ed? Does Blogger think their bloggers are minor retards? I mean, when I approve a comment, then I do vouch for it. But wait! I can’t edit the comment, so a low-life link might slip through. OK, then let me edit the comments.
  • When I’ve submitted a comment, the link to the post is nofollow’ed [screenshot: Nofollow insane II]. This page belongs to the blog, so why the fudge does Blogger nofollow navigational links? And if it makes sense for a weird reason not understandable by a simple webmaster like me, why are the link to the blog’s main page and the link to the post one line below not nofollow’ed? Linking to the same URL with and without rel-nofollow on the same page deserves a bullshit award.
  • On my dashboard, Blogger features a few blogs as “Blogs Of Note”, all links nofollow’ed [screenshot: Nofollow insane III (dashboard)]. These are blogs recommended by the Blogger crew. That means they have reviewed them and the links are clearly editorial content. They’re proud of it: “we’ve done a pretty good job of publishing a new one each day”. Blogger’s very own Blogs Of Note blog does not nofollow the links, and that’s correct.

    So why the heck are these recommended blogs nofollow’ed on the dashboard? [screenshot: Nofollow insane III (blogspot)]

  • Blogger inserted robots meta tags “nofollow,noindex” on each and every blog hosted outside the controlled blogspot.com domain earlier this year.
  • Blogger inserted robots meta tags “nofollow,noindex” on Google blogs a few days ago.

If Blogger’s recommendation “Check google.com. (Also good for searching.)” is an honest one, why don’t they invest a few minutes to educate themselves on rel-nofollow? I mean, it’s a Google block/avoid-indexing/ranking thingy they use to prevent Google.com users from finding valuable content hosted on their own domains. And they annoy me. And they insult their users. They shouldn’t do that. That’s not smart. That’s not Google-ish.




Google to kill the power of links

Well, a few types of links will survive and don’t do evil in Google’s search index ;) I’ve updated my first take on Google’s updated guidelines stating that paid links and reciprocal links are evil. Well, regardless of whether one likes or dislikes this policy, it’s already factored in - case closed by Google. There are so many ways to generate natural links …

The official call for paid-link reports is pretty much disliked across the boards:
Google is Now The Morality Police on the Internet
Google’s Ideal Webmaster: Snitch, Rake It In And Don’t Deliver
Other sites can hurt your ranking
Google’s Updated Webmaster Guidelines Addresses Linking Practices
Google clarifies its stance on links

More information, and discussion of paid/exchanged links in my pamphlets:
Matt Cutts and Adam Lasnik define “paid link”
Where is the precise definition of a paid link?
Full disclosure of paid links
Revise your linkage
Link monkey business is not worth a whoop
Is buying and selling links risky? (02/2006)




Danny Sullivan did not strip for Matt Cutts

Nope, this is not recycled news. I’m not referring to Matt asking Danny to strip off his business suit, although the video is really funny. I want to comment on something Matt didn’t say recently, but promised to do soon (again).

Danny Sullivan stripped perfectly legit code from Search Engine Land because he was accused of being a spammer, although the CSS code in question is in no way deceitful.

StandardZilla slams poor Tamar, who was just reporting a WebProWorld thread, but does an excellent job of explaining why image replacement is not search engine spam but a sound thing to do. Google’s recently updated guidelines need to state more clearly that optimizing for particular user agents is not considered deceitful cloaking per se. That would keep Danny from stripping (code), not for Matt or Google, but for lurid assclowns producing canards.




Blasting mount email

I’ve moved 5k emails, mostly unread, from my inbox to a “swamped” folder. I hope a couple of new filters will help avoid such drastic measures in the future. So if I owe you an answer: I apologize, please resend your message. Thanks.




Google enhances the quality guidelines

Maybe today’s update of Google’s quality guidelines is the first phase of the Webmaster help system revamp project. I know there’s more to come; Google has great plans for the help center. So don’t miss out on the opportunity to tell Google’s Webmaster Central team what you’d like to have added or changed. Only 14 replies to this call for input is evidence of incapacity; shame on the Webmaster community.

I haven’t had the time to write a full-blown review of the updates, so here are just a few remarks from a Webmaster’s perspective. Scroll down to Quality guidelines - specific guidelines to view the updates, that means click the links to the new (sometimes overlapping) detail pages.

As always, the guidelines outline best practices of Web development, refer to common sense, and don’t encourage over-interpretations (not that those are avoidable, nor utterly useless). Now providing Webmasters with more explanatory directives, detailed definitions and even examples in the “Don’ts” section is very much appreciated. Look at the first version of this document, now over five years old, before you bitch ;)

Avoid hidden text or hidden links
The new help page on hidden text and links is descriptive and comes with examples; well done. What I miss is a hint with regard to CSS menus and other content which stays hidden until the user performs a particular action. Google states “Text (such as excessive keywords) can be hidden in several ways, including […] Using CSS to hide text”. The same goes for links, by the way. I wish they would add something along the lines of “… using CSS to hide text in a way that a user can’t reveal it by a common action like moving the mouse over, or clicking on, a text link, descriptive widget or icon pointing to the hidden element”. The hint at the bottom, “If you do find hidden text or links on your site, either remove them or, if they are relevant for your site’s visitors, make them easily viewable”, comes close to this but lacks an example.

Susan Moskwa from Google clarified what one can hide with CSS, and what sorts of CSS-hidden stuff are considered a violation of the guidelines, in the Google forum on June 11, 2007:

If your intent in hiding text is to deceive the search engines, we frown on that; if your intent is purely to improve the visual user experience (e.g. by replacing some text with a fancier image of that same text), you don’t need to worry. Of course, as with many techniques, there are shades of gray between “this is clearly deceptive and wrong” and “this is perfectly acceptable”. Matt [Cutts] did say that hiding text moves you a step further towards the gray area. But if you’re running a perfectly legitimate site, you don’t need to worry about it. If, on the other hand, your site already exhibits a bunch of other semi-shady techniques, hidden text starts to look like one more item on that list. […] As the Guidelines say, focus on intent. If you’re using CSS techniques purely to improve your users’ experience and/or accessibility, you shouldn’t need to worry. One good way to keep it on the up-and-up (if you’re replacing text w/ images) is to make sure the text you’re hiding is being replaced by an image with the exact same text.
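
As a rough self-audit along the lines of Susan’s advice, one could scan one’s own markup for inline styles that hide text. The class and heuristics below are my own toy sketch, not Google’s detection logic - it merely flags candidates for a human review of intent:

```python
from html.parser import HTMLParser

class HiddenTextAuditor(HTMLParser):
    """Flag elements whose inline style hides them from visitors."""
    SUSPECT = ("display:none", "visibility:hidden", "text-indent:-")

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        style = (dict(attrs).get("style") or "").replace(" ", "").lower()
        if any(pattern in style for pattern in self.SUSPECT):
            self.flagged.append((tag, style))

auditor = HiddenTextAuditor()
auditor.feed('<div style="display: none">stuffed keywords</div>'
             '<p>visible copy</p>')
print(auditor.flagged)
# [('div', 'display:none')]
```

Whether a flagged element is deceptive keyword stuffing or a legit image replacement is exactly the intent question the guidelines leave to a human.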

Don’t use cloaking or sneaky redirects
This sentence, in bold red blinking uppercase letters, should be pinned 5 pixels below the heading: “When examining […] your site to ensure your site adheres to our guidelines, consider the intent” (emphasis mine). There are so many perfectly legit ways to do content presentation that it is impossible to assign particular techniques to good versus bad intent, or vice versa.

I think this page leads to misinterpretations. The major point of confusion is that Google argues completely from a search engine’s perspective and doesn’t write for the targeted audience, that is, Webmasters and Web developers. Instead of all the talk about users vs. search engines, it should distinguish plain user agents (crawlers, text browsers, JavaScript disabled …) from enhanced user agents (JS/AJAX enabled, installed and activated plug-ins …). Don’t get me wrong, this page gives the right advice, but the good advice is somewhat obfuscated in phrases like “Rather, you should consider visitors to your site who are unable to view these elements as well”.

For example, “Serving a page of HTML text to search engines, while showing a page of images or Flash to users [is considered deceptive cloaking]” puts down a gazillion legit sites which serve the same content in different formats (and often under different URLs) depending on the ability of the current user agent to render particular stuff like Flash, and a bazillion perfectly legit AJAX-driven sites which provide crawlers and text browsers with a somewhat static structure of HTML pages, too.

“Serving different content to search engines than to users [is considered deceptive cloaking]” puts it better, because in reverse that reads “Feel free to serve identical contents under different URLs and in different formats to users and search engines. Just make sure that you accurately detect the capabilities of the user agent before you decide to alter a requested plain HTML page into a fancy conglomerate of flashing widgets with sound and other good vibrations, respectively vice versa”.
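
To illustrate the distinction between plain and enhanced user agents, here’s a hedged sketch of capability-based negotiation; the detection heuristics are simplistic placeholders of mine, and real-world detection is considerably more involved. The point is that both branches serve the same content:

```python
def choose_representation(user_agent):
    """Pick a rendering of the *same* content based on the requesting
    agent's likely capabilities. Toy heuristics for illustration only."""
    ua = user_agent.lower()
    # Crawlers and text browsers get the static HTML structure.
    is_plain_agent = any(token in ua for token in ("googlebot", "slurp", "lynx"))
    # Script-capable graphical browsers get the AJAX-enhanced wrapper.
    scripts_likely = "mozilla" in ua and not is_plain_agent
    return "ajax-enhanced" if scripts_likely else "static-html"

print(choose_representation("Mozilla/5.0 (compatible; Googlebot/2.1)"))
# static-html
print(choose_representation("Mozilla/5.0 (Windows; U; rv:1.8) Firefox/2.0"))
# ajax-enhanced
```

Since both return values wrap identical content, the intent test from the guidelines is satisfied; swapping in different content per branch is where cloaking would begin.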

Don’t send automated queries to Google
This page doesn’t provide much more information than the paragraph on the main page, but there’s not that much to explain: don’t use WebPosition Gold™. Period.

Don’t load pages with irrelevant keywords
Tells why keyword stuffing is not a bright idea, nothing to note.

Don’t create multiple pages, subdomains, or domains with substantially duplicate content
This detail page is a must-read. It starts with a to-the-point definition, “Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar”, followed by a ton of good tips and valuable information. And fortunately it expresses that there’s no such thing as a general duplicate content penalty.

Don’t create pages that install viruses, trojans, or other badware
Describes Google’s service in partnership with StopBADware.org, highlighting the quickest procedure to get Google’s malware warning removed.

Avoid “doorway” pages created just for search engines, or other “cookie cutter” approaches such as affiliate programs with little or no original content
The info on doorway pages is just a paragraph on the “cloaking and sneaky redirects” page. I miss a few tips on how one can identify unintentional doorway pages created just by bad design, without any deceptive intent. Also, I think a few sentences on thin SERP-like pages would be helpful in this context.

“Little or no original content” targets thin affiliate sites, again doorway pages, auto-generated content, and scraped content. It becomes clear that Google does not love MFA sites.

If your site participates in an affiliate program, make sure that your site adds value. Provide unique and relevant content that gives users a reason to visit your site first
The link points to the “Little or no original content” page mentioned above.


“Buying links in order to improve a site’s ranking is in violation of Google’s webmaster guidelines and can negatively impact a site’s ranking in search results. […] Google works hard to ensure that it fully discounts links intended to manipulate search engine results, such [as] link exchanges and purchased links.”

Basically that means: if you purchase a link, make dead sure it’s castrated, or Google will take away the ability to pass link love from the page (or even the site) linking out for green. Or don’t get caught respectively denounced by competitors (I doubt that’s a surefire tactic for the average Webmaster).

Note that in the second sentence quoted above Google states officially that link exchanges for the sole purpose of manipulating search engines are a waste of time and resources. That means reciprocal links of particular types nullify each other, and site links might have lost their power too. <speculation>Google may find it funny to increase the toolbar PageRank of pages involved in all sorts of link swap campaigns, but the real PageRank will remain untouched.</speculation>

There’s much confusion with regard to “paid link penalties”. To the best of my knowledge the link’s destination will not be penalized, but the paid link(s) will not (or no longer) increase its reputation, so that in case the link’s intention got reported or discovered ex-post its rankings may suffer. Penalizing the link buyer would not make much sense, and Googlers are known as pragmatic folks, hence I doubt there is such a penalty. <speculation>Possibly Google has a flag applied to known link purchasers (sites as well as webmasters), which –if it exists– might result in more scrupulous judgements of other optimization techniques.</speculation>

What I really like is that the Googlers in charge honestly tried to write for their audience, that is Webmasters and Web developers, not (only) search geeks. Hence the news is that Google really cares. Since the revamp is a funded project, I guess the few paragraphs where the guidelines are still mysterious (for the great unwashed), or even potentially misleading, will get an update soon. I can’t wait for the next phase of this project.

Vanessa Fox is creating buzz at SMX today, so I’ll update this post when (if?) she blogs about the updates later on (update: Vanessa’s post). Perhaps Matt Cutts will comment on the updated quality guidelines at the SMX conference today; look for Barry’s writeup at Search Engine Land, and check SEO Roundtable as well as the Bruce Clay blog for coverage of the SMX Penalty Box Summit. Marketing Pilgrim covered this session too. This post at Search Engine Journal provides related info, and more quotes from Matt. Just one SMX tidbit: according to Matt they’re going to rename the re-inclusion request to something like a reconsideration request.



Share/bookmark this: del.icio.usGooglema.gnoliaMixxNetscaperedditSphinnSquidooStumbleUponYahoo MyWeb
Subscribe to      Entries Entries      Comments Comments      All Comments All Comments
 

Hassles of submitting a blogspot XML-sitemap

Usually my posts make it into Google’s Web index within 2-3 hours, but not yesterday. Since Ms. Googlebot became lazy fetching my pamphlets, I thought she needed a hint. With one of the last updates Blogger’s feed URLs changed, but lazy as I am, I still had the ancient Atom feed in my Sitemaps account. So I grabbed the new URL from the LINK element in HEAD and submitted it as a sitemap. Bugger me. Not enough tea this morning. I didn’t look at the URL, just copied and pasted it, then submitted the feed to no avail. Oops. Here is why it didn’t work:

Blogger.com doesn’t come with built-in XML sitemaps, but one can use the feeds. That’s definitely not a perfect solution, because the feeds list only a few recent posts, but it’s better than nothing.

Here are the standard feed URLs of a Blogger blog at blogspot.com (replace “sebastianx” with your subdomain):
http://sebastianx.blogspot.com/feeds/posts/default (ATOM, posts)
http://sebastianx.blogspot.com/feeds/posts/default?alt=rss (RSS, posts)
http://sebastianx.blogspot.com/feeds/comments/default (ATOM, comments)

None of these can be used as a sitemap, because the post URLs are not located under the sitemap’s path.
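That’s the sitemaps protocol’s location rule: a sitemap may only announce URLs at or below its own path, so a feed living under /feeds/posts/ can’t cover posts living at /2007/06/…. A quick sketch of that rule (the function name is mine, not from any spec):

```python
from urllib.parse import urlparse
import posixpath

def covered_by_sitemap(sitemap_url, page_url):
    """True if page_url is at or below the sitemap's directory,
    per the sitemaps protocol's location rule."""
    s, p = urlparse(sitemap_url), urlparse(page_url)
    if (s.scheme, s.netloc) != (p.scheme, p.netloc):
        return False  # different host or scheme: never covered
    base = posixpath.dirname(s.path)
    if not base.endswith("/"):
        base += "/"
    return p.path.startswith(base)

# A feed under /feeds/posts/ can't announce a post at /2007/06/:
print(covered_by_sitemap("http://sebastianx.blogspot.com/feeds/posts/default",
                         "http://sebastianx.blogspot.com/2007/06/post.html"))  # False
# A sitemap in the root directory covers everything on the host:
print(covered_by_sitemap("http://sebastianx.blogspot.com/atom.xml",
                         "http://sebastianx.blogspot.com/2007/06/post.html"))  # True
```

Which is exactly why the root-level atom.xml works and the /feeds/ URLs don’t.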

Fortunately, the old feeds still work, although they are served as “text/html”, which can confuse things, so I have to stick with http://sebastianx.blogspot.com/atom.xml as my “sitemap”.




Google nofollow’s itself

Awesome. Nofollow insanity at its best. Check the source of Google’s Webmaster Blog. In its HEAD you’ll find an insane meta tag:
<meta name="ROBOTS" content="NOINDEX,NOFOLLOW" />

Well, that’s one of many examples. Read the support forums. Another case of Google nofollow’ing herself: Google fun
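If you want to check your own pages for such a self-inflicted noindex/nofollow, a few lines of stdlib Python will do (the class name is made up, and in practice you’d feed it the fetched page source):

```python
from html.parser import HTMLParser

class RobotsMetaChecker(HTMLParser):
    """Collects the directives from any <meta name="robots"> element."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives += [d.strip().lower()
                                for d in a.get("content", "").split(",")]

checker = RobotsMetaChecker()
checker.feed('<head><meta name="ROBOTS" content="NOINDEX,NOFOLLOW" /></head>')
print(checker.directives)  # ['noindex', 'nofollow']
```

If that prints ‘noindex’ on a blog you actually want indexed, you’ve got the same problem as the Webmaster Central Blog.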

Matt thought that all teams understood the syntax and semantics of rel-nofollow. It seems to me that’s not the case. I really can’t blame the Googlers applying rel-nofollow or even nofollow/noindex meta tags to everything they get their hands on. It is not understandable. It’s not usable. It’s misleading. It’s confusing. It should get buried asap.

Hat tip to John (JLH’s post).

Update 1: A friendly Googler just told me that a Blogger glitch (affecting only Google blogs) inserted the crawler-unfriendly meta element; it should be solved soon. I thought this bug was fixed months ago ... if page.isPrivate == true (by mistake) then insert “<meta content=’NOINDEX,NOFOLLOW’ name=’ROBOTS’ />” … (made up)

Update 2: The ‘noindex,nofollow’ robots meta tag is gone now, and the Webmaster Central Blog got a neat new logo:
Google Webmaster Central Blog - Official news on crawling and indexing sites for the Google index (I’d add ALT and TITLE text: alt="Google Webmaster Central Blog - Official news on crawling and indexing sites for the Google index" title="Official news on crawling and indexing sites for the Google index")



