Archived posts from the 'Webmaster Central' Category

Google helps those who help themselves

And if that’s not enough to survive on Google’s SERPs, try Google’s Webmaster Forum, where you can study Adam Lasnik’s FAQ, which covers even those questions the Webmaster Help Center provides no comprehensive answer for (yet), and where Googlers working on Google’s Search Quality, Webspam, and Webmaster Central teams hang out. Google dumps all sorts of questioners into the forum, where a crowd of hardcore volunteers (aka “regulars”, as Google calls them) invests a lot of time helping out Webmasters and site owners facing problems with the almighty Google.

Despite the sporadic posts by Googlers, the backbone of Google’s Webmaster support channel is this crew of regulars from all around the globe. Google monitors the forum for input and trends, and intervenes when the occasional scandal escalates. Apropos scandals: although the list of top posters mentions a few of the regulars, bear in mind that trolls come with a disgustingly high posting cadence. Fortunately, the signal currently drowns out the noise (again), and I very much appreciate that the Googlers are participating more and more.

Some of the regulars like seo101 don’t reveal their URLs and stay anonymous. So here is an incomplete list of folks giving good advice:

If I’ve missed anyone, please drop me a line (I stole the list above from JLH and Red Cardinal, so it’s all their fault!).

So if you’re a Webmaster or site owner, don’t hesitate to post your Google-related question (but read the FAQ before posting, and search for your topic first); chances are one of these regulars, or even a Googler, will offer assistance. Or, if you have no questions but carry a swag of valuable answers, join the group and share your knowledge. Finally, if you’re a Googler, give the sites linked above a boost on the SERPs ;)

Micro-meme started by John Honeck, supported by Richard Hearne, Bert Vierstra




Google assists SERP Click-Through Optimization

Big Mama Google, in her ongoing campaign to keep her search index clean, assists Webmasters with reports allowing click-through optimization of a dozen or so pages per Web site. Google launched these reports a while ago, but most Webmasters didn’t make the best use of them. Now that Vanessa has revealed her SEO secrets, let’s discuss why and how Google helps you increase, improve, and target search engine traffic.

Google is not interested in gazillions of pages which rank high for (obscure) search terms but don’t get clicked from the SERPs. This clutter tortures the crawler and indexer, and it wastes expensive resources the query engine could use to deliver better results to the searchers.

Unfortunately, legions of clueless SEOs work hard to grow Mount Clutter by providing their clients with weekly ranking reports, which leads to even more pages that rank for (potentially money-making) search phrases but appear on the SERPs with such crappy titles and snippets that not even a searcher with an IQ slightly below a slice of bread clicks them.

High rankings don’t pay the bills; converting traffic from SERPs, on the other hand, does. A nicely ranking page is an asset which in most cases just needs a few minor tweaks to attract search engine users (Mount Clutter contains machine-generated cookie-cutter pages too, but that’s a completely different story).

For example, unattended pages gaining their SERP position from the anchor text of links pointing to them often have a crappy click-through rate (CTR). Say you have a page about a particular aspect of green widgets which applies to widgets of all colors. For some reason folks preferring red widgets like your piece and link to it with “red widgets” as anchor text. The page will rank fine for [red widgets], but since “red widgets” is not mentioned on the page, this keyword phrase doesn’t appear in the SERP’s snippets, not to speak of the linked title. Search engine users seeking information on red widgets don’t click the link about green widgets, although it might be the best matching search result.

So here is the click-through optimization process based on Google’s query stats (it doesn’t work with brand-new or more or less unindexed sites, because the data provided in Google’s Webmaster Tools are available, reliable, and reasonably accurate for somewhat established sites only):

Log in, choose a site, and go to the query stats. In an ideal world you’ll see two tables with rather identical keyword lists (all examples below are made up).

Top search queries          Avg. Pos.    Top search query clicks     Avg. Pos.
1. web site design              5        1. web site design              4
2. google consulting            4        2. seo consulting               5
3. seo consulting               3        3. google consulting            2
4. web site structures          2        4. internal links               3
5. internal linkage             1        5. web site structure           3
6. crawlability                 3        6. crawlability                 5

The “Top search queries” table on the left shows positions for search phrases on the SERPs, regardless of whether those pages got clicked or not. The “Top search query clicks” table on the right shows which search terms got clicked most, and where the landing pages were positioned on their SERPs. If good keywords appear in the left table but not in the right one, you have CTR optimization potential.
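The comparison is mechanical enough to script. Here is a minimal sketch, assuming you’ve copied both keyword lists out of the report by hand, using the made-up example above:

# Keywords from the "Top search queries" table (ranking) and the
# "Top search query clicks" table (clicked), per the made-up example above.
ranking = ['web site design', 'google consulting', 'seo consulting',
           'web site structures', 'internal linkage', 'crawlability']
clicked = ['web site design', 'seo consulting', 'google consulting',
           'internal links', 'web site structure', 'crawlability']

# Phrases that rank but don't appear among the clicked queries are
# the click-through optimization candidates.
candidates = [kw for kw in ranking if kw not in clicked]
print(candidates)  # ['web site structures', 'internal linkage']

Note how near-misses surface too: “web site structures” ranks, but searchers click “web site structure”; apparently the snippet doesn’t match the plural query.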

The “average top position” might differ from today’s SERPs, and it might differ for particular keywords even if those appear in the same line of both tables. Position fluctuation depends on a couple of factors. First, the position is recorded at the run time of each search query during the last 7 days, and within seven days a page can jump up and down on the SERPs. Second, positioning on, for example, UK SERPs can differ from US SERPs, so an average 3rd position may be an utterly useless value when a page ranks #1 in the UK and gets a fair amount of traffic from UK SERPs, but ranks #8 on US SERPs where searchers don’t click it because the page is about a local event near Loch Nowhere in the Highlands. Hence refine the reports by selecting your target markets under “location”, and if necessary “search type” too. Third, if these stats are generated from very few searches and even fewer click-throughs, they are totally and utterly useless for optimization purposes.

Let’s say you’ve got a site with a fair amount of Google search engine traffic; the next step is identifying the landing pages involved (you get only 20 search queries, so the report covers only a fraction of your site’s pages). Pull these data from your referrer stats, or extract SERP referrers from your logs, to create a crosstab of search terms from Google’s reports per landing page. Although the click data are from Google’s SERPs, it might make sense to do this job with a broader scope, that is, including referrers from all major search engines.
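Here is a minimal sketch of such a crosstab, assuming an Apache-style combined log and the q= (Google, MSN) and p= (Yahoo) query parameters those engines used in their referrer URLs; the log file name is a placeholder:

import re
from collections import defaultdict
from urllib.parse import urlparse, parse_qs

# Combined log format: the request is the first quoted field, the referrer the second.
LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) [^"]*" \d+ \S+ "(?P<referrer>[^"]*)"')
SEARCH_HOSTS = ('google.', 'search.yahoo.', 'search.msn.')  # broader scope than Google alone

crosstab = defaultdict(lambda: defaultdict(int))  # landing page -> search term -> visits

with open('access.log') as log:  # placeholder file name
    for line in log:
        m = LINE.search(line)
        if not m:
            continue
        ref = urlparse(m.group('referrer'))
        if not any(host in ref.netloc for host in SEARCH_HOSTS):
            continue
        params = parse_qs(ref.query)
        term = (params.get('q') or params.get('p') or [''])[0].strip().lower()
        if term:
            crosstab[m.group('path')][term] += 1

for page, terms in crosstab.items():
    print(page)
    for term, visits in sorted(terms.items(), key=lambda item: -item[1]):
        print(f'  {visits:4d}  {term}')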

Now perform the searches for your 20 keyword phrases (just click the keywords in the report) to check how your pages look on the SERPs. If particular landing pages trigger search results for more than one search term, extract them all. Then load your landing page and view its source. Read your page as rendered in your browser first, then check out semantic hints in the source code, for example ALT or TITLE text and stuff like that. Look at the anchor text of incoming links (you can use the link stats and anchor text stats from Google, We Build Pages Tools, …) and other ranking factors to understand why Google thinks this page is a good match for the search term. For each page, let the information sink in before you change anything.

If the page is not exactly a traffic generator for other targeted keywords, you can optimize it with regard to a better CTR for the keyword(s) it ranks for. Basically that means: use the keyword(s) naturally in all page areas where it makes sense, and give each occurrence a context which hopefully makes it into the SERP’s snippet.

Make up a few natural sentences a searcher might have in mind when searching for your keyword(s). Write them down. Order them by their ability to fit the current page text in a natural way. Bear in mind that with personalized search Google could have scanned the searcher’s brain to add different contexts to the search query, so don’t concentrate too much on the keyword phrase alone, but on short sentences containing both the keyword(s) (or their synonyms) and a sensible context as well.

There is no magic number like “use the keywords 5 times to get a #3 spot” or “7 occurrences of a keyword gain you a #1 ranking”. Optimal keyword density is a myth, so just apply common sense and don’t annoy human readers. One readable sentence containing the keyword(s) might suffice. Also, emphasizing keywords (EM/I, STRONG/B, eye-catching colors …) makes sense because it helps catch the attention of scanning visitors, but don’t over-emphasize, because that looks crappy. The same goes for H2/H3/… headings. Structure your copy, but don’t write in headlines. When you emphasize a word or phrase in (bold) red, don’t do that throughout, but only in the most important sentence(s) of your page, and preferably only on the first visible screen of a longer page.

Work in your keyword-plus-context laden sentences, but (again!) do it in a natural way. You’re writing for humans, not for algos, which at this point already know what your page is all about and rank it properly. If your fine-tuning gains you a better ranking, that’s fine, but the goal is catching the attention of searchers reading (in most cases just skimming) your page title and a machine-generated snippet on a search result page. Convince the algo to use your inserted sentence(s) in the snippet, not keyword lists from navigation elements or the like.

Write a sensible summary of the page’s content, not more than 200-250 characters, and put it into the description meta tag. Do not copy the first paragraph or other text from the page; write the summary from scratch instead, and mention the targeted keyword(s). The first paragraph on the page can exceed the length of the meta description to deliver an overview of the page’s message, and it should provide the same information, preferably in the first sentence, but don’t make it longish.

Check the TITLE tag in HEAD: if it gets truncated on the SERP, shorten it so that the keyword becomes visible, perhaps move the keyword(s) to the beginning, or create a neat page title around the keyword(s). Make title changes very carefully, because the title is an important ranking factor and your changes could result in a ranking drop. Some CMSs change the URL without notice when the title text changes, and you certainly don’t want to touch the URL at this point.

Make sure that the page title appears on the page too. Putting the TITLE tag’s content (or a slight variation of it) in an H1 element in BODY cannot hurt. If for some weird reason you don’t use H-elements, then at least format it prominently (bold, a different color but not red, a bigger font size …).
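Here is a quick self-audit along those lines, as a minimal sketch using only Python’s standard library; the 65-character title cutoff is my assumption about where SERPs truncate titles, not an official number, and the file name and keyword are hypothetical:

from html.parser import HTMLParser

class SnippetAudit(HTMLParser):
    """Collects the TITLE, meta description, and H1 text of a page."""
    def __init__(self):
        super().__init__()
        self.title = self.description = self.h1 = ''
        self._current = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == 'meta' and a.get('name', '').lower() == 'description':
            self.description = a.get('content', '')
        elif tag in ('title', 'h1'):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current == 'title':
            self.title += data
        elif self._current == 'h1':
            self.h1 += data

def audit(page_html, keyword):
    p = SnippetAudit()
    p.feed(page_html)
    if len(p.title) > 65:  # assumed SERP truncation point
        print('Title may get truncated:', p.title[:65])
    if keyword.lower() not in p.title.lower():
        print('Keyword missing from the title:', keyword)
    if not p.description:
        print('No meta description; the engine will pick its own snippet')
    elif len(p.description) > 250:
        print('Meta description too long:', len(p.description), 'characters')
    if p.title.strip().lower() not in p.h1.strip().lower():
        print('Title text does not reappear in an H1')

audit(open('landing-page.html').read(), 'red widgets')  # hypothetical inputs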

If the page performs nicely for a couple of money terms and just has a crappy CTR for one particular keyword it ranks for, you can simply add a link pointing to a (new) page optimized for that keyword, with the keyword in the anchor text, preferably embedded in a readable sentence within the content (long enough to fill the two lines under the linked title on the SERP). That improves the snippet. Adding a (prominent) link to a related topic should not impact rankings for other keywords too much, but the keywords submitted by searchers should appear in the snippet shortly after the next crawl. In such cases better don’t change the title, at least not now. If the page gained its ranking solely from the anchor text of inbound links, putting the search term on the page can give it a nice boost.

Make sure you get an alert when Ms. Googlebot fetches the changed pages, and check out the SERPs and Google’s click stats a few days later. After a while you’ll get a pretty good idea of how Google creates snippets, and which snippets perform best on the SERPs. Repeat until success.
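One way to get that alert, as a minimal sketch: tail the access log and flag Googlebot requests for the URLs you’ve just changed. The log path and URLs are placeholders, and since anyone can fake the user-agent string, a production version should also verify the client IP via reverse DNS:

import re
import time

CHANGED = {'/green-widgets.html', '/red-widgets.html'}  # hypothetical URLs you edited

def tail(path):
    """Yield lines as they are appended to the log file."""
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(5)

REQUEST = re.compile(r'"GET (\S+) ')

for line in tail('access.log'):  # placeholder path
    if 'Googlebot' not in line:
        continue
    m = REQUEST.search(line)
    if m and m.group(1) in CHANGED:
        # A real check would confirm the IP resolves to *.googlebot.com.
        print('Ms. Googlebot fetched', m.group(1))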

Related posts:
Google Quality Scores for Natural Search Optimization by Chris Silver Smith
Improve SERP-snippets by providing a good meta description tag by Raj Krishnan from Google’s Snippets Team




Killing Trolls in Google Groups

Are you tired of trolls and dickheads in the Google Groups? Then switch to Firefox, install Greasemonkey and Damian’s Google Groups KillFile. Go to your favorite groups and click “ignore user” to populate your ignore list. Without the trolling, your Usenet or Google Group will look way friendlier.

You should read (and subscribe to) Damian’s troll-killer post and its comments though, just in case Google changes the layout again or there’s a bugfix. For example, with the troll filter activated, I don’t see threads where a troll posted the last reply.

Remember: The canonical treatment of trolls is ignoring their posts, regardless of a particular post’s insanity or the lack thereof. Not even insults, slander, or name-calling justify a reply to the troll. If necessary, forward the post to your lawyers, but don’t enter a discussion with a troll, because that feeds their ego and encourages them to produce even more crap.

Hat tip to ThoughtOn

Related post: What to do with [troll’s handle]? by JLH




The Vanessa Fox Memorial

I was quite shocked when Vanessa told me that she’s leaving Google to join Zillow. That’s a big loss for Google, and a big loss for the Webmaster/SEO community relying on Google. And it’s a great enrichment for Zillow; I’m dead sure they can’t really imagine how lucky they are. And they had better treat her very well, or Vanessa’s admirers will launch a firestorm which Rommel, Guderian, et al. couldn’t have dreamed of when they invented the blitz. Yep, at first sight that was sad news.

But it’s good news for Vanessa: she’s excited about “an all-new opportunity to work on the unique challenges of the vertical and local search space at Zillow”. I wish her all the best at Zillow, and I hope that this challenge will not morph her into an always-too-tired caffeine junkie (again) ;)

Back in 2005/2006, when I interviewed Vanessa about her pet project, Sitemaps, her Blogger profile said “technical writer in Kirkland” (from my POV an understatement); now she leaves Google as a prominent product manager, well known and loved by colleagues, SEOs, and Webmasters around the globe. She created the Vanessa Fox Memorial aka “Google Webmaster Central” and handed her baby over to a great team she gathered and trained, to make sure that Google’s opening towards Webmasters evolves further. Regardless of her unclimbable Mount Email, Vanessa was always there to help, fix, and clarify things, and was open to suggestions even on minor details. She’s a gem, an admirable geek, a tough and lovable ideal of a Googler, and now a Zillower. Again, all the best, keep in touch, and

Thank You Vanessa!




Google enhances the quality guidelines

Maybe today’s update of Google’s quality guidelines is the first phase of the Webmaster help system revamp project. I know there’s more to come; Google has great plans for the help center. So don’t miss the opportunity to tell Google’s Webmaster Central team what you’d like to see added or changed. Only 14 replies to this call for input is evidence of incapacity; shame on the Webmaster community.

I haven’t had the time to write a full-blown review of the updates, so here are just a few remarks from a Webmaster’s perspective. Scroll down to Quality guidelines - specific guidelines to view the updates; that is, click the links to the new (sometimes overlapping) detail pages.

As always, the guidelines outline best practices of Web development, refer to common sense, and don’t encourage over-interpretations (not that those are avoidable, nor utterly useless). That the guidelines now provide Webmasters with more explanatory directives, detailed definitions, and even examples in the “Don’ts” section is very much appreciated. Look at the first version of this document, more than five years old, before you bitch ;)

Avoid hidden text or hidden links
The new help page on hidden text and links is descriptive and comes with examples; well done. What I miss is a hint with regard to CSS menus and other content which stays hidden until the user performs a particular action. Google states “Text (such as excessive keywords) can be hidden in several ways, including […] Using CSS to hide text”. The same goes for links, by the way. I wish they would add something along the lines of “… Using CSS to hide text in a way that a user can’t make it visible by a common action like moving the mouse over a pointer to the hidden element, or clicking a text link or a descriptive widget or icon”. The hint at the bottom, “If you do find hidden text or links on your site, either remove them or, if they are relevant for your site’s visitors, make them easily viewable”, comes close to this but lacks an example.

Susan Moskwa from Google clarifies what one can hide with CSS, and what sorts of CSS hidden stuff is considered a violation of the guidelines, in the Google forum on June/11/2007:

If your intent in hiding text is to deceive the search engines, we frown on that; if your intent is purely to improve the visual user experience (e.g. by replacing some text with a fancier image of that same text), you don’t need to worry. Of course, as with many techniques, there are shades of gray between “this is clearly deceptive and wrong” and “this is perfectly acceptable”. Matt [Cutts] did say that hiding text moves you a step further towards the gray area. But if you’re running a perfectly legitimate site, you don’t need to worry about it. If, on the other hand, your site already exhibits a bunch of other semi-shady techniques, hidden text starts to look like one more item on that list. […] As the Guidelines say, focus on intent. If you’re using CSS techniques purely to improve your users’ experience and/or accessibility, you shouldn’t need to worry. One good way to keep it on the up-and-up (if you’re replacing text w/ images) is to make sure the text you’re hiding is being replaced by an image with the exact same text.
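Picking up the help page’s hint about finding hidden text on your own site, here is a minimal sketch of such a self-audit. It only catches inline styles (display:none, visibility:hidden), not rules buried in external stylesheets, and judging the intent behind each hit remains your job:

import re
from html.parser import HTMLParser

HIDING = re.compile(r'display\s*:\s*none|visibility\s*:\s*hidden', re.I)
VOID = {'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input',
        'link', 'meta', 'source', 'track', 'wbr'}  # elements without end tags

class HiddenTextFinder(HTMLParser):
    """Reports text sitting inside elements hidden via inline styles."""
    def __init__(self):
        super().__init__()
        self.depth = 0  # nesting depth inside a hidden subtree

    def handle_starttag(self, tag, attrs):
        if tag in VOID:
            return  # void elements carry no text and never close
        if self.depth or HIDING.search(dict(attrs).get('style') or ''):
            self.depth += 1

    def handle_endtag(self, tag):
        if tag not in VOID and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            print('Hidden text:', data.strip()[:80])

finder = HiddenTextFinder()
finder.feed(open('page.html').read())  # placeholder file name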

Don’t use cloaking or sneaky redirects
This sentence should be pinned, in bold red blinking uppercase letters, 5 pixels below the heading: “When examining […] your site to ensure your site adheres to our guidelines, consider the intent” (emphasis mine). There are so many perfectly legit ways to handle content presentation that it is impossible to map particular techniques to good versus bad intent, or vice versa.

I think this page leads to misinterpretations. The major point of confusion is that Google argues completely from a search engine’s perspective and doesn’t write for the targeted audience, that is, Webmasters and Web developers. Instead of all the talk about users vs. search engines, it should distinguish plain user agents (crawlers, text browsers, JavaScript disabled …) from enhanced user agents (JS/AJAX enabled, installed and activated plug-ins …). Don’t get me wrong, this page gives the right advice, but the good advice is somewhat obfuscated in phrases like “Rather, you should consider visitors to your site who are unable to view these elements as well”.

For example, “Serving a page of HTML text to search engines, while showing a page of images or Flash to users [is considered deceptive cloaking]” puts down a gazillion legit sites which serve the same content in different formats (and often under different URLs), depending on the current user agent’s ability to render particular stuff like Flash, and a bazillion perfectly legit AJAX-driven sites which provide crawlers and text browsers with a somewhat static structure of HTML pages, too.

“Serving different content to search engines than to users [is considered deceptive cloaking]” puts it better, because in reverse that reads: “Feel free to serve identical content under different URLs and in different formats to users and search engines. Just make sure that you accurately detect the capabilities of the user agent before you decide to turn a requested plain HTML page into a fancy conglomerate of flashing widgets with sound and other good vibrations, or vice versa”.
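To illustrate that reading (my interpretation, not Google’s wording): decide on the format by the capabilities the client itself advertises, never by sniffing for crawler user-agents. A minimal sketch, with a hypothetical “caps” cookie that client-side script would set:

def choose_representation(headers):
    """Return 'rich' or 'html' for the same underlying content."""
    # Parse the Cookie header; a missing header means no known capabilities.
    cookies = dict(
        pair.strip().split('=', 1)
        for pair in headers.get('Cookie', '').split(';')
        if '=' in pair
    )
    caps = cookies.get('caps', '').split('+')
    if 'js' in caps and 'flash' in caps:
        return 'rich'  # same content, fancy format
    return 'html'      # same content, plain HTML format

# Crawlers and text browsers send no capability cookie, so they get the
# identical content as plain HTML, which is the point: no cloaking involved.
assert choose_representation({'User-Agent': 'Googlebot/2.1'}) == 'html'
assert choose_representation({'Cookie': 'caps=js+flash'}) == 'rich'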

Don’t send automated queries to Google
This page doesn’t provide much more information than the paragraph on the main page, but there’s not that much to explain: don’t use WebPosition Gold™. Period.

Don’t load pages with irrelevant keywords
Explains why keyword stuffing is not a bright idea; nothing to note.

Don’t create multiple pages, subdomains, or domains with substantially duplicate content
This detail page is a must-read. It starts with a to-the-point definition, “Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar”, followed by a ton of good tips and valuable information. And fortunately it expresses that there’s no such thing as a general duplicate content penalty.

Don’t create pages that install viruses, trojans, or other badware
Describes Google’s service in partnership with StopBADware.org, highlighting the quickest procedure to get Google’s malware warning removed.

Avoid “doorway” pages created just for search engines, or other “cookie cutter” approaches such as affiliate programs with little or no original content
The info on doorway pages is just a paragraph on the “cloaking and sneaky redirects” page. I miss a few tips on how one can identify unintentional doorway pages created merely by bad design, without any deceptive intent. Also, I think a few sentences on thin SERP-like pages would be helpful in this context.

“Little or no original content” targets thin affiliate sites, doorway pages (again), auto-generated content, and scraped content. It becomes clear that Google does not love MFA sites.

If your site participates in an affiliate program, make sure that your site adds value. Provide unique and relevant content that gives users a reason to visit your site first
The link points to the “Little or no original content” page mentioned above.


Don’t participate in link schemes designed to increase your site’s ranking or PageRank
“Buying links in order to improve a site’s ranking is in violation of Google’s webmaster guidelines and can negatively impact a site’s ranking in search results. […] Google works hard to ensure that it fully discounts links intended to manipulate search engine results, such as link exchanges and purchased links.”

Basically that means: if you purchase a link, make dead sure it’s castrated, or Google will take away the ability to pass link love from the page (or even the site) that links out for green. Or don’t get caught or denounced by competitors (I doubt that’s a surefire tactic for the average Webmaster).

Note that in the second sentence quoted above Google states officially that link exchanges for the sole purpose of manipulating search engines are a waste of time and resources. That means reciprocal links of particular types nullify each other, and site links might have lost their power too. <speculation>Google may find it funny to increase the toolbar PageRank of pages involved in all sorts of link swap campaigns, but the real PageRank will remain untouched.</speculation>

There’s much confusion with regard to “paid link penalties”. To the best of my knowledge the link’s destination will not be penalized, but the paid link(s) will not (or no longer) increase its reputation, so if the link’s intention gets reported or discovered ex post, the destination’s rankings may suffer. Penalizing the link buyer would not make much sense, and Googlers are known as pragmatic folks, hence I doubt there is such a penalty. <speculation>Possibly Google has a flag applied to known link purchasers (sites as well as Webmasters), which, if it exists, might result in more scrupulous judgements of other optimization techniques.</speculation>

What I really like is that the Googlers in charge honestly tried to write for their audience, that is, Webmasters and Web developers, not (only) search geeks. Hence the news is: Google really cares. Since the revamp is a funded project, I guess the few paragraphs where the guidelines are still mysterious (for the great unwashed), or even potentially misleading, will get an update soon. I can’t wait for the next phase of this project.

Vanessa Fox is creating buzz at SMX today, so I’ll update this post when (if?) she blogs about the updates later on (update: Vanessa’s post). Perhaps Matt Cutts will comment on the updated quality guidelines at the SMX conference today; look for Barry’s writeup at Search Engine Land, and at SEO Roundtable as well as the Bruce Clay blog, for coverage of the SMX Penalty Box Summit. Marketing Pilgrim covered this session too. This post at Search Engine Journal provides related info, and more quotes from Matt. Just one SMX tidbit: according to Matt, they’re going to rename the re-inclusion request to something like a “reconsideration request”.




Google nofollow’s itself

Awesome. Nofollow insanity at its best. Check the source of Google’s Webmaster Blog. In its HEAD you’ll find an insane meta tag:
<meta name="ROBOTS" content="NOINDEX,NOFOLLOW" />

Well, that’s one of many examples. Read the support forums. Another case of Google nofollow’ing herself: Google fun

Matt thought that all teams understood the syntax and semantics of rel-nofollow. It seems to me that’s not the case. I really can’t blame the Googlers who apply rel-nofollow, or even nofollow/noindex meta tags, to everything they get their hands on. The thing is not understandable. It’s not usable. It’s misleading. It’s confusing. It should get buried asap.

Hat tip to John (JLH’s post).

Update 1: A friendly Googler just told me that a Blogger glitch (pertaining only to Google blogs) inserted the crawler-unfriendly meta element; it should be solved soon. I thought this bug was fixed months ago ... if page.isPrivate == true by mistake then insert “<meta content=’NOINDEX,NOFOLLOW’ name=’ROBOTS’ />” … (made up)

Update 2: The ‘noindex,nofollow’ robots meta tag is gone now, and the Webmaster Central Blog got a neat new logo:
Google Webmaster Central Blog - Official news on crawling and indexing sites for the Google index (I’d add ALT and TITLE text: alt="Google Webmaster Central Blog - Official news on crawling and indexing sites for the Google index" title="Official news on crawling and indexing sites for the Google index")




Help Google reveal the secret sauce!

Do you remember this Do’s & Don’ts page?

Google Information for Webmasters
Webmaster Dos and Don’ts
Do:

  • Create a site with content and design that are straightforward, appropriate and relevant for visitors to your site.
  • Feel free to exchange links with other sites that are compatible with your site’s content and users’ interests.
  • Be very careful about allowing an individual consultant or company to ‘optimize’ your web site. Chances are they will engage in some of our "Don’ts" and end up hurting your site.
  • Consider submitting your sites to our partner directories Yahoo! and DMOZ.

Don’t:

  • Cloak.
  • Write text or create links that can be seen by search engines but not by visitors to your site.
  • Participate in link exchanges for the sole purpose of increasing your ranking in search engines.
  • Send automated queries to Google in an attempt to monitor your site’s ranking.
  • Use programs that generate lots of generic doorway pages.

http://www.google.com/webmasters/dos.html five years ago (restored)

Those are Google’s Webmaster guidelines as of 2002, when the Webmasters section covered all topics on a dozen or so pages. In the meantime the document was translated into many languages and grew considerably. Today’s Webmaster Help Center is an authoritative resource for experienced search geeks who are able to gather the tidbits various Googlers spread across the Web, too.

That’s going to change. Ríona MacNamara from Google’s Webmaster Central team in Kirkland asks for ideas on How to revamp Google’s Webmaster Help Center:

We’re planning to restructure the Webmaster Tools Help Center to improve the way we organize and present help content. We want to make sure that our content is technically accurate, relevant, and up to date, and that it’s easy to navigate and find exactly what you’re looking for. Is the content broad enough in scope? Deep enough in detail? Does it have the right mix of instructional and conceptual info? […] Is the Help Center — well, helpful?

I hope that Google is willing to evolve the Webmaster Help Center into a useful resource for spare-time Webmasters, site owners, publishers, bloggers, and other non-geeks, along with in-depth information addressing search geeks. Assuming in its current shape it’s meant to help out non-search-geeks, I must state that it hosts some of the worst FAQ items ever. The contents are certainly helpful if the reader has a great deal of Google-specific knowledge, experience in reading Google-ish text, and knows what to take with a grain of salt, because Google cannot tell exactly how the cookie crumbles without giving away their secret sauce. Well, instead of reading rants, or bitching yourself, why not add your $0.02?

Click here to tell Google what you want and expect.

Please don’t get fooled by “Tools” in the thread title. The tools are nicely explained; what we want is the secret sauce dumped into the general help system ;)




German spammers banning all domains out there

If you receive an email in German from Google’s Search Quality team (donotreply@gmail.com) telling you your site was banned by Google for 30 days, please don’t worry: it’s a fake. Legit (similarly phrased) emails come from a google.com email address. If the hoax email comes with an attachment, don’t save or open the attached file (a zipped google_webmastertools.exe)!

Here is the email:
Entfernung Ihrer Webseite [domain] aus dem Google Index (“Removal of your website [domain] from the Google index”)
The email looks pretty authentic; its style and wording are somewhat Google-ish. I speak German, hence I’m sure that gazillions of innocent Webmasters and site owners buy it and panic. Unfortunately most filters let the zipped attachment (google_webmastertools.exe) pass through. I didn’t open it myself, and I bet it’s not a bright idea to try it.
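A first-pass filter along those lines, as a minimal sketch using Python’s standard library on a saved raw message; note that the From header is trivially forged, so this catches sloppy hoaxes like the one above, it doesn’t prove authenticity:

import email
from email.utils import parseaddr

def looks_like_hoax(raw_message):
    """Flag a wrong sender domain or an executable attachment."""
    msg = email.message_from_string(raw_message)
    _, sender = parseaddr(msg.get('From', ''))
    if not sender.lower().endswith('@google.com'):
        return True  # legit notifications came from donotreply@google.com
    for part in msg.walk():
        name = part.get_filename() or ''
        if name.lower().endswith(('.exe', '.zip', '.scr')):
            return True  # Google never attached executables
    return False

with open('suspect.eml') as f:  # placeholder file name
    print('Hoax?', looks_like_hoax(f.read()))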

Google told me that Stefanie from the real Search Quality team over in Dublin will soon post a warning on the German blog.

Here is an original penalty warning in German:
Entfernung Ihrer Webseite aus dem Google Index (“Removal of your website from the Google index”)
These emails are sent from donotreply@google.com without attachments.

Update 05/10/2007: Here is Google’s official statement (in German) and the English version by Vanessa. The attached .exe is a joke: it executes a bogus cmd.exe call along the lines of “clear complete harddisc c:\” (Hoax.BAT.Small.a).

Update 05/11/2007: Because these emails are easy to mistake for authentic ones from the Search Quality team, Google has temporarily stopped sending them while working on more secure communication mechanisms. This update reads as if Google has stopped sending penalty notification emails in all languages: “… as we’ve temporarily stopped sending emails about guidelines violations, you can safely assume that any email you receive isn’t from us. Note that we do provide information about some violations in webmaster tools.”.

Update 06/19/2007: German forums and blogs report another flood of these faked emails, and this post gets tons of visits from searches for quotes from the email cited above. Calm down, don’t panic: Google still doesn’t send out penalty notifications via email (announcement in German). So please ignore the spam, and refer to the diagnostics tab in your Webmaster Central account if you suspect a penalty.

Update 07/18/2007: Google released the message center, where site owners can poll for penalty notifications. They are still working on a safe solution for emails. Probably ‘-950/-30/-n penalties’ won’t get announced any time soon.




More anchor text analysis from Webmaster Central

If you didn’t spot my update posted a few hours ago, log in to Webmaster Central and view your anchor text stats. You’ll find way more phrases; play with the variations, which should allow you to track down sources via quoted search queries. Also, the word stats are back.
Have fun!




Ultimately: Watch out for Google’s URL terminator

I hate recycling news, but I just fell in love with this neat URL terminator. Unfortunately there’s no button to remove SPAM, so I still have to outrank competitors, but besides that ‘flaw’ it’s a perfect and user-friendly tool covering all my needs. Thanks!



