How to escape Google’s ‘Sandbox’

Matt Cutts’s recent Pubcon talk on the infamous ‘Sandbox’ did more than debunk the myth. The discussion that followed at Threadwatch, WebmasterWorld (paid), and many other hangouts revealed some gems, summed up by Andy Hagans: it’s all about trust.

The ‘Sandbox’ is not an automated aging delay, it’s not a penalty for optimized sites, and it’s not an inbound-link counter plotted over time, to name just a few of the theories floating around. The ‘Sandbox’ is simply the probation period Google needs to gather TrustRank™ and to evaluate a site against its quality guidelines.

To escape the ‘Sandbox’, a new Web site needs trusted authority links, amplified and distributed by clever internal linkage, and a critical mass of trustworthy, unique, and original content. Enhancing usability and crawler friendliness helps too. In other words: back to the roots.


Google’s New Site Stats: more than a sitemaps byproduct

Google’s new sitemaps stats come with lots of useful stuff; full coverage of the updates here. You get crawl stats and detailed error reports, plus popularity statistics like the top five search queries directing traffic to your site, real PageRank distribution, and more.

The most important thing, in my opinion, is that Google has created a toolset for Webmasters by listening to Webmasters’ needs. Most of the pretty neat new stats are answers to questions and requests from Webmasters, collected from direct feedback and the user groups. I still have items on my wish list, but I like the prototyping approach, so I can’t wait for the next version. Knowledge is power.


An Unofficial FAQ on Google Sitemaps

Yesterday I launched the unofficial Google Sitemaps FAQ. It’s not yet complete, but it may be helpful if you have questions like the ones from my recent call for input.

I’ve tried to answer questions which Google cannot or will not answer, or where Google would get murdered for any answer other than “42”. Even if the answer is a plain “no”, I’ve provided background and solutions. I’m not a content thief, and I hate useless redundancy, so don’t miss the official FAQ and the Sitemaps blog.

Enjoy, and submit interesting questions here. Your feedback is very much appreciated!


Reciprocal links are not penalized by Google

Recently, reciprocal linking in general has been accused of tanking Web sites’ placement in Google’s search results. Although it’s way too early for a serious post-Jagger analysis, the current hype about oh-so-bad reciprocal links is a myth IMHO.

What Google is after are artificial link schemes, which include massive reciprocal linkage appearing simultaneously. That’s nothing new. What Google still honors is content-driven, natural, on-topic reciprocal linkage.

Simplified: Google has a huge database of the Web’s linkage data, where each and every link carries a timestamp plus IDs for its source and destination page and site. A pretty simple query reveals a reciprocal link campaign, and other systematic link patterns as well. Again, that’s not new. The Jagger update may have tanked more sites involved in artificial linkage because Google has assigned more resources to link analysis, but that does not mean Google dislikes reciprocal linking per se.
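
To illustrate how trivially such patterns stand out, here’s a minimal sketch of the idea only, not Google’s actual system; the table layout, column names, and one-week window are my own assumptions:

```python
import sqlite3

# Toy model of a link graph: one row per discovered link, with site IDs and a
# timestamp. The schema and the one-week window are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE links (src_site INTEGER, dst_site INTEGER, linked_at INTEGER)"
)

# Sites 1 and 2 swap links within minutes; site 3 links to 4 with no link back.
conn.executemany(
    "INSERT INTO links (src_site, dst_site, linked_at) VALUES (?, ?, ?)",
    [(1, 2, 1_000_000), (2, 1, 1_000_500), (3, 4, 1_000_000)],
)

# Flag reciprocal pairs whose two links appeared within a week of each other.
WEEK = 7 * 24 * 3600
suspects = conn.execute(
    """
    SELECT a.src_site, a.dst_site, ABS(a.linked_at - b.linked_at) AS gap
    FROM links a
    JOIN links b
      ON a.src_site = b.dst_site
     AND a.dst_site = b.src_site
     AND a.src_site < b.src_site      -- report each pair only once
    WHERE ABS(a.linked_at - b.linked_at) < ?
    """,
    (WEEK,),
).fetchall()

print(suspects)  # [(1, 2, 500)] -- the near-simultaneous reciprocal pair
```

A content-driven reciprocal link that grows back months later would fall outside such a window, which is the point of the post: the pattern, not the reciprocity, is what a simple query exposes.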

Outgoing links to related pages do attract natural reciprocal links over time, even without an agreement. Those links still count as legitimate votes. Don’t push the panic button, think!


Google Sitemap FAQ - Call for input

I’m working on an unofficial Google Sitemaps FAQ, trying to answer questions which Google cannot or will not cover in the official FAQ. So far I have longish articles on these topics:

Can I remove deleted pages in Google’s index via XML Sitemap?
Will a Google Sitemap increase my PageRank?
Can I escape the ‘sandbox effect’ with a Google Sitemap?
How long does it take to get indexed by Google?

I have some more ideas up my sleeve, but before I continue I’d like to collect a few interesting sitemap-related questions. I’m swamped with my backlog, so I’ll probably have to pause this project for at least a week or two. If you know of a popular sitemap-related question, or if you’d like to contribute, please submit your ideas here. Thank you in advance!


Google AdSense Login Problems

If your AdSense login page loops after the policy update, you need to change the URI in the address bar of your browser.

When you’ve entered your login info, a red “Loading…” message appears for a few seconds, then the login box is presented again. The URI has changed to …/login3. If you change the URI to …/login1, you get an older, not yet deleted version of the login page, which still works.


I want more Jaggers!

Jagger-1 was good to me, and what I’ve seen from Jagger-2 pleases me even more. I can’t wait for Jagger-3! Dear folks at Google, please continue the Jagger series and roll out a new Jagger weekly, I’d love to see my Google traffic doubling every week! In return I’ll double the time I’m spending on Google user support in your groups. Thanks in advance!


Smart Web Site Architects Provide Meaningful URLs

From a typical forum thread on user/search engine friendly Web site design:

Question: Should I provide meaningful URLs carrying keywords and navigational information?

Answer 1: Absolutely, if your information architecture and its technical implementation allow the use of keyword-rich, hyphenated URLs.

Answer 2: Bear in mind that URLs, once published, should never change; so first consider developing a suitable information architecture and a flexible Web site structure. You’ll learn that folders and URLs are the last thing to think about.

Question: WTF do you mean?

Answer: Here you go: it makes no sense to paint a house before the architect has finished the blueprints.
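
For the “keyword-rich hyphenated URLs” part, here is a minimal sketch (not from the forum thread; the slugify helper and its rules are hypothetical illustrations) of deriving a hyphenated URL segment from a page title, with the directory part dictated by the information architecture, not the other way around:

```python
import re
import unicodedata

def slugify(title: str) -> str:
    """Turn a page title into a lowercase, hyphenated URL segment."""
    # Fold accented characters to their ASCII equivalents.
    ascii_title = (
        unicodedata.normalize("NFKD", title)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    # Replace every run of non-alphanumeric characters with a single hyphen.
    return re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")

# The directory part mirrors the information architecture (category/sub-category),
# which is decided *before* any URLs are minted.
print("/seo/" + slugify("Smart Web Site Architects Provide Meaningful URLs") + "/")
# -> /seo/smart-web-site-architects-provide-meaningful-urls/
```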


MySQL’s ODBC Driver 3.51 drives me nuts

ODBC drivers can drive me crazy, especially when the ODBC driver is the last thing I look at, because I assumed this darn thing was a well-developed and well-tested open source piece.

A site I’m working on collects log data in a MySQL table, counting page views per landing page, referrer page, and search engine search term. The stats are nice but pretty much useless, because with such a structure it’s hard to create summaries and keyword analyses.

Luckily Progress OpenEdge was available, so I figured it should be possible to read the MySQL table via ODBC from the Web server and create all reports with Progress, which has great temp-table support and amazingly fast word indexing, and can handle billions of large records with ease.

Well, I downloaded, installed, and configured the MySQL ODBC Driver 3.51, and ran a successful connection test. So far so good, but then the nightmare began: with Progress I couldn’t create the ODBC dataserver instance, and as always the error messages were misleading.

To make a long story short, the current MySQL ODBC driver lacks so much functionality that this setup cannot work. The answer is buried in the PEG mailing list archive, which is not fully indexed by Google. Gus Bjorklund from Progress Software states: “The ODBC dataserver will not work due to a variety of functions not implemented in the MySQL ODBC driver … As people who have tried it have discovered, MySQL does not yet have a complete enough implementation of the SQL DML”.

Frustrating. Back to the stone age. Oops: transferring table dumps failed due to the sheer amount of data. Aaahhhrrrggg. Developing a Web service in PHP that sends selected data in handy batches makes my day.
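
For illustration only, here is a minimal sketch of that batching idea, in Python rather than PHP and with a made-up table layout (the HTTP endpoint is omitted), using keyset pagination so each request stays small and a broken transfer can resume where it stopped:

```python
import json
import sqlite3  # stands in for the MySQL log table in this sketch

BATCH_SIZE = 500

def fetch_batch(conn, after_id=0, limit=BATCH_SIZE):
    """Return one batch of log rows with id > after_id, oldest first."""
    rows = conn.execute(
        """
        SELECT id, landing_page, referrer, search_term, views
        FROM pageview_log
        WHERE id > ?
        ORDER BY id
        LIMIT ?
        """,
        (after_id, limit),
    ).fetchall()
    cols = ("id", "landing_page", "referrer", "search_term", "views")
    return [dict(zip(cols, row)) for row in rows]

def export_all(conn):
    """Yield one JSON document per batch until the table is exhausted."""
    after_id = 0
    while True:
        batch = fetch_batch(conn, after_id)
        if not batch:
            break
        yield json.dumps(batch)
        after_id = batch[-1]["id"]  # resume point: the last id we sent

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE pageview_log (id INTEGER PRIMARY KEY, landing_page TEXT,"
        " referrer TEXT, search_term TEXT, views INTEGER)"
    )
    conn.executemany(
        "INSERT INTO pageview_log (landing_page, referrer, search_term, views)"
        " VALUES (?, ?, ?, ?)",
        [("/faq/", "www.google.com", "sitemaps faq", 3),
         ("/blog/", "search.yahoo.com", "seo blog", 1)],
    )
    for document in export_all(conn):
        print(document)  # each document is one handy batch of rows
```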

However, does anybody have (or know of) a working ODBC driver for MySQL?


Duplicate Content Filters are Sensitive Plants

In their everlasting war on link and index spam, search engines produce way too much collateral damage. Hierarchically structured content in particular suffers from over-sensitive spam filters. The crux is that user-friendly pages need to duplicate information from upper levels. The old rule “what’s good for users will be honored by the engines” no longer applies.

In fact the problem is not the legitimate duplication of key information from other pages; the problem is that duplicate content filters are sensitive plants, unable to distinguish useful repetition from automatically generated artificial spider fodder. The engines won’t lower their spam thresholds, which means they will not fix this persistent bug in the near future, so Web site owners either have to live with decreasing search engine traffic or react. The question is: what can a Webmaster do to escape the dilemma without turning the site into a useless nightmare for visitors by eliminating all textual redundancy?

The major fault of Google’s newer dupe filters is that their block-level analysis often fails at categorizing page areas. Web page elements in and near the body area which contain duplicated key information from upper levels are treated as content blocks, not as part of the page template where they logically belong. As long as those text blocks reside in separate HTML block-level elements, it should be quite easy to rearrange them so that the duplicated text becomes part of the page template, which should be safe at least with somewhat intelligent dupe filters.

Unfortunately, very often the raw data aren’t normalized; for example, the duplicated text lives in a description field of a database’s products table. That’s a major design flaw, and it must be corrected in order to manipulate block-level elements properly, that is, to declare them as part of the template versus part of the page body.
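
As a minimal sketch of that correction (table and column names are invented for illustration, not taken from the article): move the duplicated category description out of the products table into a categories table, so the template layer can decide where, and how often, to render it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Denormalized original: the category description is repeated in every product
# row, so every product page carries the same duplicated text block in its body.
conn.execute(
    "CREATE TABLE products_old (sku TEXT PRIMARY KEY, name TEXT,"
    " category_name TEXT, category_description TEXT)"
)
conn.executemany(
    "INSERT INTO products_old VALUES (?, ?, ?, ?)",
    [("A-1", "Blue widget", "Widgets", "Widgets are great because ..."),
     ("A-2", "Red widget", "Widgets", "Widgets are great because ...")],
)

# Normalized layout: the description lives exactly once, in the categories table.
conn.executescript("""
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT UNIQUE, description TEXT);
    CREATE TABLE products (sku TEXT PRIMARY KEY, name TEXT,
                           category_id INTEGER REFERENCES categories(id));
""")

# Migrate: one row per distinct category, then re-point the products.
conn.execute("""
    INSERT INTO categories (name, description)
    SELECT DISTINCT category_name, category_description FROM products_old
""")
conn.execute("""
    INSERT INTO products (sku, name, category_id)
    SELECT p.sku, p.name, c.id
    FROM products_old p JOIN categories c ON c.name = p.category_name
""")

# The page template can now pull the description once and render it in the
# template region (e.g. a category header), keeping it out of the unique body copy.
print(conn.execute("SELECT name, description FROM categories").fetchall())
# -> [('Widgets', 'Widgets are great because ...')] -- stored once, not per product
```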

My article Feed Duplicate Content Filters Properly explains a method to revamp page templates of eCommerce sites on the block level. The principle outlined there can be applied to other hierarchical content structures too.
