Archived posts from the 'Google' Category

Google Sitemaps

I plan to launch the Google Sitemaps Knowledge Base (BETA) soon. So if anybody has a neat Sitemaps related question which is of common interest, or an educational article I can publish exclusively, please submit it in the comments or drop me a message. Thanks in advance.

By the way, I’ve created a new RSS feed consolidating all Google Sitemaps stuff from the tutorial, FAQ, knowledge base (BETA), XML validation and whatever.


If your Web site was banned by Google

If your Web site was banned by Google for reasons like hidden text, invisible links, client-side instant redirects, doorway pages etc., chances are the ban is limited to 30 days or a few months only. When you search for your domain name and you get a result page stating “Google knows zilch about that shady site”, and you previously had some listings on Google’s SERPs, then:

Save all your server logs and extract each and every request by a Googlebot.

Shortly after banning a site, Google usually reduces its crawling frequency drastically. That is, Googlebot starts checking for suspicious stuff and no longer crawls for indexing purposes.
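Extracting Googlebot's requests from the logs can be sketched in a few lines. A minimal example for Apache combined-format logs; the sample lines are made up, and the regex assumes the combined format, so adapt it to your server's log layout:

```python
import re
from collections import Counter

# Matches the remote host, request path, and user agent in an
# Apache combined-format access log line.
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
)

def googlebot_requests(lines):
    """Yield (ip, path) for every request whose user agent claims to be Googlebot."""
    for line in lines:
        m = LOG_RE.match(line)
        if m and "Googlebot" in m.group(3):
            yield m.group(1), m.group(2)

# Made-up sample lines: one Googlebot hit, one regular visitor.
sample = [
    '66.249.66.1 - - [10/Nov/2005:12:00:00 +0000] "GET /doorway-1.html HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '10.0.0.5 - - [10/Nov/2005:12:00:01 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/4.0"',
]
hits = Counter(path for _ip, path in googlebot_requests(sample))
```

Note that a user agent string can be faked, so for a serious audit you'd also verify the IP addresses; for a first pass over the logs, matching the agent string is enough.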

Look at every page requested by Googlebot. Double-check it for hidden stuff and artificial linkage. Fix the on-page mistakes (a polite description for over-optimization). Delete the page if it is part of a thin-page series (large numbers of pages carrying small amounts of repetitive but keyword-optimized textual content, a.k.a. “doorway pages”). Delete all (thin) pages which do a client-side redirect to the home page or a profitable landing page. “Deletion” means physical removal, not redirection to a clean page. If your doorway pages don’t respond with an honest 404 when Googlebot revisits them, the ban will not be lifted. Consider canned site-search results, thin product pages with full navigation (e.g. only SKU, name, and image), and similar stuff shady too. If you think those pages are helpful for visitors though, then make sure search engine crawlers cannot fetch or index them.
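To double-check that the removed pages really answer with an honest 404 (or 410), a small checker helps. A sketch using only the standard library; `status_of` and the injectable `fetch` parameter are my own construction, and the URL list stands in for your removed doorway pages:

```python
from http.client import HTTPConnection
from urllib.parse import urlsplit

def status_of(url):
    """Issue a HEAD request for a URL and return the HTTP status code."""
    parts = urlsplit(url)
    conn = HTTPConnection(parts.netloc, timeout=10)
    conn.request("HEAD", parts.path or "/")
    status = conn.getresponse().status
    conn.close()
    return status

def still_alive(removed_urls, fetch=status_of):
    """Return the removed pages that do NOT answer with an honest 404/410."""
    return [url for url in removed_urls if fetch(url) not in (404, 410)]
```

Every URL this returns is a page that still answers with something other than a 404/410 and needs fixing before you bother with a reinclusion request.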

Hire a professional SEO for a final check and a second opinion as well. Removing questionable stuff is a good opportunity to implement effective optimization.

As soon as the crawling frequency goes back to the old cadence, and you’re sure your site is clean, file a reinclusion request. Write up honestly what you did to cheat Google, explain how you’ve fixed your stuff, and why it can’t happen again.

Keep in mind that there is no such thing as a second successful reinclusion request. That means if you cheat again, even unintentionally, your site is toast.

If your site was suspended for 30 days or so, it can reappear on the SERPs even without a reinclusion request. However, filing one should not hurt, and doing it before an estimated algorithmic reinstatement can speed up the process if the initial penalty was applied manually, since a manual penalty seems to require a human review to lift the ban.

Best of luck!


How to escape Google’s ‘Sandbox’

Matt Cutts’s recent Pubcon talk on the infamous ‘Sandbox’ not only cleared up the myths; the following discussion at Threadwatch, WebmasterWorld (paid), and many other hangouts revealed some gems, summed up by Andy Hagans: it’s all about trust.

The ‘Sandbox’ is not an automated aging delay, it’s not a penalty for optimized sites, and it’s not an inbound link counter over the time axis, just to name a few of the theories floating around. The ‘Sandbox’ is simply the probation period Google needs to gather TrustRank™ and to evaluate a site against its quality guidelines.

To escape the ‘Sandbox’ a new Web site needs trusted authority links, amplified and distributed by clever internal linkage, and a critical mass of trustworthy, unique, and original content. Enhancing usability and crawler friendliness helps too. IOW, back to the roots.


Google’s New Site Stats: more than a sitemaps byproduct

Google’s new sitemap stats come with useful stuff; full coverage of the updates here. You get crawl stats and detailed error reports, even popularity statistics like the top 5 search queries directing traffic to your site, real PageRank distribution, and more.

The most important thing, in my opinion, is that Google has created a toolset for Webmasters by listening to Webmasters’ needs. Most of the pretty neat new stats are answers to questions and requests from Webmasters, collected from direct feedback and the user groups. I still have wishes, but I like the prototyping approach, so I can’t wait for the next version of “knowledge is power”.


An Unofficial FAQ on Google Sitemaps

Yesterday I launched the unofficial Google Sitemaps FAQ. It’s not yet complete, but it may be helpful if you have open questions.

I’ve tried to answer questions which Google cannot or will not answer, or where Google would get murdered for any answer other than “42”. Even if the answer is a plain “no”, I’ve provided background information and solutions. I’m not a content thief, and I hate useless redundancies, so don’t miss out on the official FAQ and the sitemaps blog.

Enjoy, and submit interesting questions here. Your feedback is very much appreciated!


Reciprocal links are not penalized by Google

Recently, reciprocal linking as such has been accused of tanking Web sites’ placement in Google’s search results. Although it’s way too early for a serious post-Jagger analysis, the current hype about oh-so-bad reciprocal links is a myth IMHO.

What Google is after are artificial link schemes, and that includes massive reciprocal linkage appearing simultaneously. That’s not a new thing. What Google still honors is content-driven, natural, on-topic reciprocal linkage.

Simplified, Google has a huge database of the Web’s linkage data, where each and every link has a timestamp, plus IDs of the source and destination page and site. A pretty simple query reveals a reciprocal link campaign, and other systematic link patterns as well. Again, that’s not new. The Jagger update may have tanked more sites involved in artificial linkage because Google has assigned more resources to link analysis, but that does not mean that Google dislikes reciprocal linking per se.

Outgoing links to related pages do attract natural reciprocal links over time, even without an agreement. Those links still count as legitimate votes. Don’t push the panic button, think!


I want more Jaggers!

Jagger-1 was good to me, and what I’ve seen from Jagger-2 pleases me even more. I can’t wait for Jagger-3! Dear folks at Google, please continue the Jagger series and roll out a new Jagger weekly, I’d love to see my Google traffic doubling every week! In return I’ll double the time I’m spending on Google user support in your groups. Thanks in advance!


New Google Dupe Filters?

Folks at WebmasterWorld, ThreadWatch and other hang-outs discuss a new duplicate content filter from Google. This odd thing seems to wipe out the SERPs, producing way more collateral damage than any other filter known to SEOs.

From what I’ve read, all threads concentrate on on-page and on-site factors in trying to find a way out of Google’s trash can. I admit that on-page/site factors like near-duplicates produced with copy, paste, and modify operations, or excessive quoting, can trigger duplicate content filters. But I don’t buy that’s the whole story.

If a fair number of the vanished sites mentioned in the discussions are rather large, those sites probably are dedicated to popular themes. Popular themes are the subject of many Web sites, and the amount of unique information on popular topics isn’t infinite. That is, many Web sites provide the same piece of information. The wording may be different, but there are only so many ways to rewrite a press release. The core information is identical, so many pages get considered near-duplicates, and inserting longer quotes even produces duplicated text snippets or blocks.

Semantic block analysis of Web pages is not a new thing. What if Google just bought a few clusters of new machines and is now applying well-known filters to a broader set of data? This would perfectly explain why a year ago four very similar pages all ranked fine, then three of the four disappeared, and since yesterday all four are gone, because the page carrying the source bonus resides on a foreign Web site. To come to this conclusion, just expand the scope of the problem analysis to the whole Web. This makes sense, since Google says “Google’s mission is to organize the world’s information”.
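One classic way to spot near-duplicates at Web scale is shingling: break each page into overlapping word k-grams (“shingles”) and measure the overlap of the resulting sets (Jaccard resemblance). A toy illustration with made-up page texts; real systems hash and sample the shingles instead of comparing full sets:

```python
def shingles(text, k=4):
    """Break a text into its set of overlapping word k-grams ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Resemblance of two shingle sets: |A & B| / |A | B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Made-up pages: two near-duplicates (one word changed) and one unrelated page.
page_a = "the quick brown fox jumps over the lazy dog near the river"
page_b = "the quick brown fox jumps over the lazy dog near the bridge"
page_c = "completely different text about sitemaps and crawl statistics here"

sim_ab = jaccard(shingles(page_a), shingles(page_b))  # high: near-duplicates
sim_ac = jaccard(shingles(page_a), shingles(page_c))  # zero: unrelated
```

A rewritten press release keeps most of its shingles intact, which is why rewording alone doesn’t fool this kind of filter.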

Read more here: Thoughts on new Duplicate Content Issues with Google.


Google’s Master Plan

Alerted by Nick, I had the chance to take a look at Google’s current master plan. Niall Kennedy “took some shots of the Google master plan. There is a long set of whiteboards next to the entrance to one of the Google buildings. The master plan is like a wiki: there is an eraser and a set of pens at the end of the board for people to edit and contribute to the writing on the wall.”

Interesting to see that “Directory” is not yet checked. Does this indicate that Google has plans to build its own? Unchecked items like “Diet”, “Mortgages” and “Real Estate” make me wonder what kind of services for those traditionally spammy areas Google hides in its pipeline. The red dots and quotes crowding “Dating” may indicate that maps, talk, mail and search get a new consolidated user interface soon. The master plan also reveals that they’ve hired Vint Cerf to develop a world-wide dark fiber/WiFi next-generation web based on redesigned TCP/IP and HTTP protocols.

Is all that beyond belief? Perhaps, perhaps not, but food for thought at any rate, if the shots are for real and not part of a funny disinformation campaign. Go study the plan and speculate yourself.


Google’s Blog Search Released

Spotted by SEW and TW, Google is the first major search engine providing a real feed and blog search service.

Google’s new feed search service covers all kinds of XML feeds, not only blogs, but usually no news feeds. So what can you do to get your non-blog and non-news feeds included? As discussed here, you need to ping services like pingomatic, since Google doesn’t offer a ping service.
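The usual ping is a tiny XML-RPC call (`weblogUpdates.ping`) to the service’s endpoint. A sketch using Python’s standard library; the feed title and URL are placeholders, and the actual network call is left commented out so nothing is sent by accident:

```python
import xmlrpc.client

# Ping-O-Matic's XML-RPC endpoint and the classic weblogUpdates.ping call.
# The feed title and URL below are placeholders -- substitute your own.
ENDPOINT = "http://rpc.pingomatic.com/"
payload = xmlrpc.client.dumps(
    ("My Feed", "http://www.example.com/feed.xml"),
    methodname="weblogUpdates.ping",
)

# To actually send the ping (requires network access):
# server = xmlrpc.client.ServerProxy(ENDPOINT)
# result = server.weblogUpdates.ping("My Feed", "http://www.example.com/feed.xml")
```

Services like pingomatic then fan the notification out to the aggregators and search services that accept pings.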

‘Nuff said, I’m off to play with the new toy. Let’s see whether I can feed it with a nice amount of neat stuff I have in the works waiting for the launch. :)

[Update: This post appeared in Google’s blog search results 14 minutes after uploading - awesome!]

