How do Majestic and LinkScape get their raw data?

LinkScape data acquisition

Does your built-in bullshit detector cry in agony when you read announcements of link analysis tools claiming to have crawled Web pages in the trillions? Can a tiny SEO shop, or a remote search engine in its early stages running on donated equipment, build an index of that size? It took Google a decade to reach these figures, and Google’s webspam team alone outnumbers the staff of SEOmoz and Majestic, not to speak of infrastructure.

Well, it’s not as shady as you might think, although there’s some serious bragging and willy whacking involved.

First of all, both SEOmoz and Majestic do not own an indexed copy of the Web. They process markup just to extract hyperlinks. That means they parse Web resources, mostly HTML pages, to store linkage data. Once each link and its attributes (HREF and REL values, anchor text, …) are stored under a Web page’s URI, the markup gets discarded. That’s why you can’t search these indexes for keywords. There’s no full text index necessary to compute link graphs.
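For illustration, here is a toy sketch of that extract-and-discard step in Python's stdlib `html.parser` (my own illustration, not either vendor's actual pipeline): parse the markup, keep only the link data, throw the rest away.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects (href, rel) pairs from anchor elements; the markup itself
    is discarded once parsing is done, just as described above."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            if "href" in d:
                self.links.append((d["href"], d.get("rel", "")))

page = '<html><body><a href="http://example.com/" rel="nofollow">foo</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # [('http://example.com/', 'nofollow')]
```

Anchor text would be gathered the same way, by buffering character data while inside an A element; there is still no need to keep the page itself, hence no full text index.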

Majestic index size

The storage requirements for the Web’s link graph are way smaller than for a full text index that major search engines have to handle. In other words, it’s plausible.

Majestic clearly describes this process, and openly states that they index only links.

With SEOmoz that’s a completely different story. They obfuscate information about the technology behind LinkScape to a level that could be described as near-snake-oil. Of course one could argue that they might be totally clueless, but I don’t buy that. You can’t create a tool like LinkScape being a moron with an IQ slightly below an amoeba’s. As a matter of fact, I do know that LinkScape was developed by extremely bright folks, so we’re dealing with a misleading sales pitch:

Linkscape index size

Let’s throw in a comment at Sphinn, where an SEOmoz rep posted “Our bots, our crawl, our index”.

Of course that’s utter bullshit. SEOmoz does not have the resources to accomplish such a task. In other words, if –and that’s a big IF– they do work as described above, they’re operating something extremely sneaky that breaks Web standards and my understanding of fairness and honesty. Actually, that’s not what happens; but precisely because it is not, LinkScape and OpenSiteExplorer in their current shape must die (see below why).

They do insult your intelligence as well as mine, and that’s obviously not the right thing to do, but I assume they do it solely for marketing purposes. Not that they need to cover up their operation with a smokescreen like that. LinkScape could succeed with all facts on the table. I’d call it a neat SEO tool, if only it were legit.

So what’s wrong with SEOmoz’s statements above, and LinkScape at all?

Let’s start with “Crawled in the past 45 days: 700 billion links, 55 billion URLs, 63 million root domains”. That translates to “crawled … 55 billion Web pages, including 63 million root index pages, carrying 700 billion links”. 13 links per page on average is plausible. Crawling 55 billion URIs requires sending out HTTP GET requests to fetch 55 billion Web resources within 45 days; at an average page size of roughly 25 kilobytes, that’s roughly 30 terabytes per day. Plausible? Perhaps.

True? Not as is. Making up numbers like “crawled 700 billion links” suggests a comprehensive index of 700 billion URIs. I highly doubt that SEOmoz did ‘crawl’ 700 billion URIs.

If SEOmoz really crawled the Web, they’d have to respect Web standards like the Robots Exclusion Protocol (REP). You would find their crawler in your logs. An organization crawling the Web must

  • do that with a user agent that identifies itself as crawler, for example “Mozilla/5.0 (compatible; Seomozbot/1.0; +”,
  • fetch robots.txt at least daily,
  • provide a method to block their crawler with robots.txt,
  • respect indexer directives like “noindex” or “nofollow” both in META elements as well as in HTTP response headers.
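None of this is rocket science; Python’s standard library handles the robots.txt part out of the box. A sketch (the “SeomozBot” token is hypothetical, chosen only to show how a blockable user agent would behave):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# A compliant crawler parses the site's robots.txt before fetching anything.
rp.parse([
    "User-agent: SeomozBot",
    "Disallow: /",
])

# The named crawler is locked out; other user agents are unaffected.
print(rp.can_fetch("SeomozBot", "http://example.com/page.html"))  # False
print(rp.can_fetch("Googlebot", "http://example.com/page.html"))  # True
```

That is the whole point of a self-identifying user agent: one line in robots.txt, and the crawler stays out.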

SEOmoz obeys only <META NAME="SEOMOZ" CONTENT="NOINDEX" />, according to their sources page. And exactly this page reveals that they purchase their data from various services, including search engines. They do not crawl a single Web page.

Savvy SEOs should know that crawling, parsing, and indexing are different processes. Why does SEOmoz insist on the term “crawling”, taking all the flak they can get, when they obviously don’t crawl anything?

Two claims out of three in “Our bots, our crawl, our index” are blatant lies. If SEOmoz performs any crawling, in addition to processing bought data, without following and communicating the procedure outlined above, that would be sneaky. I really hope that’s not happening.

As a matter of fact, I’d like to see SEOmoz crawling. I’d be very, very happy if they would not purchase a single byte of 3rd party crawler results. Why? Because I could block them in robots.txt. If they don’t access my content, I don’t have to worry whether they obey my indexer directives (robots meta ‘tag’) or not.

As a side note, requiring a “SEOMOZ” robots META element to opt out of their link analysis is plain theft. Adding such code bloat to my pages takes a lot of time, and that’s expensive. Also, serving an additional line of code in each and every HEAD section adds up to a lot of wasted bandwidth –$$!– over time. Am I supposed to invest my hard-earned bucks just to keep my outgoing links from being revealed to my competitors? For that reason alone I should report SEOmoz to the FTC, requesting them to shut LinkScape down asap.

They don’t obey the X-Robots-Tag (“noindex”/“nofollow”/… in the HTTP header) for a reason. Working with purchased data from various sources, they can’t guarantee that they even get those headers. Also, why the fuck should I serve MSNbot, Slurp or Googlebot an HTTP header addressing SEOmoz? This could put my search engine visibility at risk.

If they’d crawl themselves, serving their user agent a “noindex” X-Robots-Tag and a 403 might be doable, at least when they pay for my efforts. With their current setup that’s technically impossible. They could switch to 80legs completely; that’d solve the problem, provided 80legs works 100% by the REP and crawls as “SEOmozBot” or so.
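The per-user-agent lockout itself would be trivial server-side logic; here is a sketch in Python (the “SEOmozBot” token is hypothetical, and any real setup would do this in the web server config rather than application code):

```python
def respond(user_agent, body):
    """Hypothetical gate: deny a named link crawler with a 403 plus
    X-Robots-Tag, without leaking SEOmoz-specific headers to
    Googlebot, Slurp, or anyone else."""
    if "SEOmozBot" in user_agent:  # assumed crawler UA token
        return 403, {"X-Robots-Tag": "noindex, nofollow"}, ""
    return 200, {}, body

status, headers, _ = respond("Mozilla/5.0 (compatible; SEOmozBot/1.0)", "<html/>")
print(status, headers["X-Robots-Tag"])  # 403 noindex, nofollow
```

Only the blocked crawler ever sees the SEOmoz-specific header, so search engine visibility is never at risk, which is exactly why the current meta-element opt-out is the wrong architecture.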

With MajesticSEO that’s not an issue, because I can block their crawler with:
User-agent: MJ12bot
Disallow: /

Yahoo’s Site Explorer also delivers too much data. I can’t block it without losing search engine traffic. Since it will probably die when Microsoft takes over, I don’t rant much about it. Google and Bing don’t reveal my linkage data to everyone.

I have an issue with SEOmoz’s LinkScape, and with OpenSiteExplorer as well. It’s serious enough that I say they have to close it if they’re not willing to change their architecture. And that has nothing to do with misleading sales pitches, or arrogant behavior, or sympathy (respectively, a possible lack thereof).

The competitive link analysis OpenSiteExplorer/LinkScape provides, without giving me a real chance to opt out, puts my business at risk. As much as I appreciate an opportunity to analyze my competitors, the other way round it’s downright evil. Hence just kill it.

Is my take too extreme? Please enlighten me in the comments.

Update: A follow-up post from Michael VanDeMar and its Sphinn discussion, the first LinkScape thread at Sphinn, and Sphinn comments to this pamphlet.


56 Comments to "How do Majestic and LinkScape get their raw data?"

  1. Jonas on 21 January, 2010  #link

    You are sooooo right! It was time somebody spoke this out. Damn, I wish I had more time…
    Thanks for this great post!

  2. Domenick on 21 January, 2010  #link

    Some of this shit was above my head, most of it I get where you’re coming from. “I should report SEOmoz to the FTC requesting them to shut LinkScape down asap.” That had me LMAO and crying cause I just pictured you saying it, priceless.

    I’m glad I came across you and SEOmofo, because you two go against the grain, but it’s not just to be different, good info as well and I get some laughs in the process. Keep going hard!

  3. Alan Bleiweiss on 21 January, 2010  #link

    The tragedy here is that most of the people in this industry, out of a reasonable respect for Rand, put SEOMOZ’s tools on a pedestal right along with Rand, where neither belong.

    They also think all the other craptastic “competitive analysis” tools are worthy of that pedestal as well. And the fact is, or all the time they spend using such tools, they don’t give a crap about the very issues you bring up, let alone have the willingness to consider the facts from a detached and objective perspective.

    Since the word went out this week about the “limited time” offer for LinkScape, I’ve already had clients asking me whether it was worth the investment, and in every situation I tell them to run away as fast as they can.

  4. Hobo on 21 January, 2010  #link

    MOST of that shit was above my head but the rest of it was ace lol

  5. Melissa - SEO Aware on 21 January, 2010  #link

    Wow! and Ouch! I am looking forward to reading what comes next! Amazing breakdown of info, Sebastian!

  6. jeff selig on 21 January, 2010  #link

    I love SEOmoz ever since my subscription to Mad Magazine expired, SEOmoz has been filling the void ;-) ROFL

  7. Alan Bleiweiss on 21 January, 2010  #link

    to clarify my last comment- I was referring to the new Open Site Explorer in terms of this week’s special offer.

  8. MikeTek on 21 January, 2010  #link

    “Making up numbers like “crawled 700 billion links” suggests a comprehensive index of 700 billion URIs. I highly doubt that SEOmoz did ‘crawl’ 700 billion URIs.”

    Where do they claim to have crawled 700 billion URIs? It’s 700 billion links existing on 55 billion URLs.

    Linkscape is what it is. Are competitors potentially going to see your inbound links? Yes.

    But blocking the Linkscape crawler in Robots.txt won’t get you far - the links are detected on those other websites that link to you. You’d have to get everyone who links to you to add that rule to their Robots.txt files. Going to happen?

    I say if you can use it for a proven competitive advantage it’s valuable.

  9. Sebastian on 21 January, 2010  #link

    The point is: once my competitors (and my IBL sources as well) are aware of the risks involved, they’d most probably block LinkScape in their robots.txt, too.

  10. Samuel Lavoie on 21 January, 2010  #link

    Good to point that out Sebastian, wasn’t really giving lot of value to all those numbers as well…
    I agree with you on the pedestal for some people in our industry, unfortunately that’s what fame is *often* all about. not?

    The final destination of these tools is null, It’s a game :\

  11. mohsin on 21 January, 2010  #link

    Brilliant as usual. For the past 2 days I was reading about OSE and LinkScape; although being a little experienced in SEO, I didn’t care about their claims about crawling, but “The Business Risks” is really a true factor, and one of my friends has in fact already used this tool to get a list of backlinks of his competitors! And I strongly believe that SEOMoz has no right to impose on us their META TAG directive, because this will grant a thousand others the license to put our businesses at stake and our lives at hell!

  12. William Alvarez on 21 January, 2010  #link

    I can’t understand your ranting against SEOmoz, I see some rude language here in different lines. Anyway, they offer a neat tool that allows us to make better decisions, no SEO tool is 100% accurate or perfect, they all have pros and cons and every one uses them in different ways as per the campaigns goals and websites’ back-link profiles. Use it just to get a glimpse and to compared any website to direct competitors efforts. Also to track your own link acquisitions through time.

    It’s true they don’t have a copy of the entire index, but at least they did something good and are influencing the industry in a positive way.

    Any better tool than Linkscape and SEOmajestic you can mention?

  13. Branko on 21 January, 2010  #link

    “Is my take too extreme? Please enlighten me in the comments.”

    It is not extreme, its just that it isn’t new. the fact that they bought the index (partially)? That was known from the beginning. The fact that they don’t provide a satisfying way of blocking their bots (or the fact that they didn’t want to reveal their bots user agent)? Check. The fact that they make hyped statements to push Linkscape? Check.

    Michael VanDeMar was saying all of these things 459 days ago, since the day 1 of Linkscape, on that Sphinn thread and discussions around it.

    I don’t get the renewed excitement.

  14. Lord Manley on 21 January, 2010  #link

    Just to be clear, this refers to the US linkscape robot and not the very similar LinkScape® tool which LBi developed back in 2006.

    I guess we weren’t the only ones who thought it was a cool name.

  15. Bartjan - BajaCa on 22 January, 2010  #link

    So why haven’t you blocked MajesticSEO in your robots.txt? ;)

  16. Frank on 22 January, 2010  #link

    This is not extreme. I read the first time this information about SEOmoz and I’m very surprised. From my perspective, it is important to respect Web standards.

  17. Ian M on 22 January, 2010  #link

    The rumour I’d heard is that SEOMoz just buys Majestic SEO’s data - like SearchDNA does.

  18. Jack on 22 January, 2010  #link

    I dont have a problem with anyone searching for my backlinks. It is a good thing, because I am doing the same to find some new sources where I can get links from.

    But you are right, the numbers are a little too high ;)

  19. Sebastian on 22 January, 2010  #link

    Thanks for your comments, folks!

    Judging from the last somewhat heated debate (at Sphinn and various other places) on this issue, I’d like to leave a few general remarks, before I answer individual comments.

    I’m not attacking SEOmoz. In fact, I do like Rand and his crew. So lets keep the discussion calm, and focused on the issues I’ve discussed.

    LinkScape, OpenSiteExplorer, as well as Majestic’s link analysis, are all great and useful SEO tools. They come with risks, that’s just the nature of the beast. What you can learn about your competitor having such an instrument, your competitor can learn about you, too. Therefore such a tool must provide a rock solid method to opt out. Period.

    As stated in my pamphlet, the issues I’ve with LinkScape could be solved by SEOmoz. I hope they’ll work on the flaws. Until then, IMHO LinkScape is not kosher from an ethical as well as from a technical point of view.

  20. Sebastian on 22 January, 2010  #link

    MikeTek, that’s exactly what I said.

    However, some folks read the promotional statement “crawled 700 billion links” –as intended by SEOmoz– as “crawled 700 billion Web pages”, because that’s what a crawler usually does. A crawler fetches stuff and delivers it to an indexer. A crawler doesn’t crawl links, it crawls URIs (URLs if you prefer this term). We use clearly defined terms like “crawling” and “indexing” for a reason: to avoid misunderstandings. Therefore the bragging on OpenSiteExplorer’s root index page qualifies as a misleading sales pitch.

  21. Sebastian on 22 January, 2010  #link

    William Alvarez, I’m not ranting against SEOmoz. I didn’t discuss whether SEOmoz has influenced the industry in a positive way or not. Unfortunately, it seems you didn’t read the complete pamphlet, or you didn’t get the points (otherwise I’d say your reply smells somewhat fanboy-ish). ;-)

    The question is not whether one or all of these tools provide results as expected or not. I’m not promoting any other link analysis tool; so sorry: no recommendation from here. Stick with MajesticSEO or OpenSiteExplorer/LinkScape.

  22. Sebastian on 22 January, 2010  #link

    Branko, the last discussion, started by Michael, has ended in a flame war. None of the problems he raised were solved since then. A year later SEOmoz does it again, with the launch of OpenSiteExplorer. That’s a good reason to finish the one year old discussion, in a calm, professional, and sober way.

  23. Sebastian on 22 January, 2010  #link

    Bartjan, I’m not blocking Majestic’s crawler in my blog’s robots.txt, because I don’t care. Here I’m not selling anything except pamphlets.

  24. Sebastian on 22 January, 2010  #link

    Ian, I cannot confirm this rumor. I doubt that SEOmoz purchases crawler results from Majestic, but I honestly don’t know.

  25. Sebastian on 22 January, 2010  #link

    Jack, the numbers aren’t too high. They just lack the right labels.

  26. Andy Walpole on 22 January, 2010  #link

    Hmm… I’m not overly impressed with the article.

    Okay, so maybe SEOmoz needs to clarify how they gained their data but trying to puff them up into a big bad wolf is over the top.

    Some of the arguments here are so trivial as to be pointless. Placing a line of SEOmoz data into your CMS template takes seconds, not “$$!”

    I hate all this “I don’t want my competitors looking at my backlinks” argument that was presented by a number of different people when Linkscape first went public. It’s like come on, it’s the internet: it’s very nature has left websites open to public inspection and I don’t see why backlinks should be any different to on-page factors. Ultimately, what a competitor can do to you, you can do to a competitor.

  27. Marios Alexandrou on 22 January, 2010  #link

    While it’s true that competitors can see what I see using these tools, that doesn’t worry me. It’s what you do with the information that matters and that’s what sets you apart from competitors. It’s like investing. Everyone has access to the same financial reports (let’s ignore insiders for the time being), but two different investors with the same information are going to act differently. One is going to win, the other isn’t. And by the time the loser looks at what the winner did to try and copy them, the winner has moved on to something else.

  28. Sebastian on 22 January, 2010  #link

    Andy, if you would have ever dealt with legacy code and shitloads of static pages on aged sites, you would know better. Site-wide code changes can become very, very expensive.

  29. Steven van Vessum on 22 January, 2010  #link

    Good article Sebastian! It’s good to get a different perspective on things for a change. You’ve got valid arguments. I’m curious where this discussion will go :) You’ve got another RSS feed subscriber anyways!

  30. Jason on 22 January, 2010  #link

    I kind of see where you are coming from, but respectfully, I think a lot of the complaints in this post are splitting hairs and it’s probably whiny to accuse SEOMoz of being unethical here.

    Stating you crawled 700 billion links does not suggest you crawled 700 billion URIs - some people might make that inference, but that is on the reader and not the statement itself. Granted, it is a bit of marketing-speak, but as far as marketing-speak goes, it’s pretty tame, whether or not SEOMoz’s tools technically “crawl” the web or not.

    As for the issue of opting out, why should they have to give anyone that option? You don’t own the data they provide, its publicly available, and all they do is aggregate and parse it. As much as any of us might not like the fact that SEOMoz is providing that data, the fact they do provide doesn’t make them unethical or evil

  31. Brant on 22 January, 2010  #link

    Amazing article, finally someone who is honest and tells it like it is. The best way to figure out if their claims are possible is to simply ask google their thoughts.

    Though it seems pretty obvious that it’s not possible. I’ve never seen their bots in my logs.

  32. Ryan on 22 January, 2010  #link

    @jason above - why should we allow any bot access? We allow google yahoo and msn to eat up our bandwidth because they provide value - referred traffic. What is the value proposition of letting linkscape steal your bandwidth? So our competitors can spy on our back links?

  33. Will Reinhardt on 22 January, 2010  #link

    It feels like the tool would lose an incredible amount of its draw the moment you’re unable to spy on any of your top competitors. I can see how SEOMoz could argue that obfuscating the ability to opt-out of the index is an important business decision for them. That doesn’t make it right, but I’m sure that’s an internal discussion they’ve had.

    I think the overall point that they need to clarify is how being included in their index is a good thing for your site. We allow — and actively desire — Google to crawl our pages because of the symbiotic nature they have with the internet as a whole. They increase our targeted traffic while building their advertising revenue and nobody really seems to mind because everybody benefits.

    If I run a modest online store but I’ve never heard of SEOmoz or Linkscape, I’m at a clear disadvantage to a competitor that does know about it and is an active user. This feels parasitic, and my store is the one feeling the pinch, and I have no idea why. Even if I know the basics of SEO and have followed some best practice principles on my site, I’m losing sales to a competitor who’s using Linkscape.

    As a store owner who’s not a premium Linkscape member, what’s in it for me? Why should I participate?

  34. Jason on 22 January, 2010  #link

    @Ryan - Linkscape isn’t stealing your bandwidth, they are using info from other crawlers to source their index:

    SEOmoz obeys only <META NAME=”SEOMOZ” CONTENT=”NOINDEX” />, according to their sources page. And exactly this page reveals that they purchase their data from various services, including search engines. They do not crawl a single Web page.

  35. […] How do Majestic and LinkScape get their raw data?, […]

  36. […] a friend of mine, Sebastian, wrote a post titled, “How do Majestic and LinkScape get their raw data?“. Basically it is a renewed rant about SEOmoz and their deceptions surrounding the Linkscape […]

  37. Michael VanDeMar on 22 January, 2010  #link

    @Jason - according to Rand Fishkin, their CEO, they are absolutely not buying that data from someone else.

  38. Ben Maden on 22 January, 2010  #link

    Interesting, I will be a bit more vigilant now about SEOmoz’s sales pitches though the absolute numbers you cite aren’t really critically important to me as a customer of theirs. The set of tools are extremely practical and useful so I am still an SEOmoz fan.

    Clearly with the right thinking there are always opportunities to go beyond pure competitive analysis. LinkScape is out of date by the time it is published even more so if it processing and stitching together is required from various sources.

    This coupled with the moves by the search engines to constantly index and get info into SERPS faster means this lag will grow… until LinkScape becomes a serious Bot that you (and I*) will be able to block :)

    * Competitive analysis is great but if I can do it and then hide my work I’d love to have my cake and eat it too.

  39. Alan Bleiweiss on 22 January, 2010  #link

    My bottom line takeaway on this is that SEOMoz is clearly in the business of manipulating. not just data, but industry people, for pure profit. They’re doing so in beyond deceptive ways - not just as a standard part of their history. But with the manipulation of the closed Sphinn thread long after it closed, and if nothing else shouts out to us that we need to hold them accountable, that one action does.

    And anyone who believes in their data or their mozRank needs to wake up and recognize they’re being sold a bill of goods. They don’t deserve a nickle of income.

  40. wiep on 23 January, 2010  #link

    @ryan - blocking mjbot or any other bot in your robots.txt doesn’t block competitors from taking a look at your link profile, it only prevents your site from showing up in other websites’ link profiles…

  41. IncrediBILL on 24 January, 2010  #link

    Sebastian, I think you’re embarrassing yourself here.

    The content in LinkScape and OpenSiteExplorer may be initially seeded from some other source but it’s clearly crawling from their IPs because all of the sites I protect have a unique crawler ID embedded in the results and the crawls originate from which is DotBot

    Likewise, I have also investigated MagesticSEO and it’s the real deal, with embedded crawler IDs clearly linking back to actual distributed MJ12BOT crawlers.

    You all should be more worried about Google and focus your efforts on how to unseat them as the dominant search engine before it’s too late instead of worrying about such trivialities.

  42. Sebastian on 25 January, 2010  #link

    Bill, as for Majestic we agree, I wrote just that.

    As for crawling on behalf of SEOmoz, that’s a completely different story. If (I don’t doubt it) SEOmoz does stealth crawls with a user agent called “dotbot”, that’s no better than scraping with regard to the Robots Exclusion Protocol. They’re not “clearly crawling from their IPs”, because the site doesn’t tell you they are an SEOmoz outlet. They just offer a 14 GB index of the Web. Everybody can download it for free.

    When SEOmoz crawls, they must do it under their own flag, and that includes offering a method to block their crawler (not some mercenary crawler, or Slurp, Googlebot, …) in robots.txt. The “seomoz” meta element is clearly not an acceptable procedure to opt out of LinkScape.

    Crawling and processing data everybody can buy at search engines and elsewhere is not the same. Mixing both in the way it looks SEOmoz does, is a concept that I can’t call ethical.

  43. […] How do Majestic and LinkScape get their raw data? ( Sehr spannend: durchsuchen die großen SEO Tools wirkliche das ganze Web? […]

  44. Darryl on 26 January, 2010  #link

    Trying to hide what is out there in the public domain for all to see anyway if they look hard enough and have the right tools seems pretty futile. If they didn’t exist then sure the information wouldn’t be so easy available to Tom, Dick or Harry but it would still be out there. Ranting about tools that simply process your freely available link data seems pointless, no doubt there are many in-house solutions doing similar and by removing the public ones you’re just raising the bar a little but they will still exist, you should get used to that!

  45. […] How do Majestic and LinkScape get their raw data? – Sebastian’s Pamphlets […]

  46. […] How do Majestic and LinkScape get their raw data? […]

  47. Mart on 26 January, 2010  #link

    One question left for me. Who is actually delivering the crawl data? Is it like cuil? Or yahoo/bing/google? Or a bunch of small and sneaky stealth buggers?

  48. Younus on 27 January, 2010  #link

    Majestic and LinkScape both definitely used some technical tool to get their raw data..

    [Nominated for this decade’s most intelligent comment.]

  49. rinkjustice on 28 January, 2010  #link

    I’m glad there are the brave few like yourself who will publicly decry Emperor Rand Fishkin and his “new clothes”.


  50. […] How do Majestic and LinkScape get their raw data? […]

  51. Léo, on 30 January, 2010  #link

    ok, possibly not the complete picture but, the SEOMoz Search Status plugin has had 685.000 downloads, and I just noticed that in their privacy policy it says that it’s going to send info back to Alexa :

    Maybe it also send a duplicate to SEOMoz, or maybe they have a deal to use Alexa’s data.

  52. Andy Beard on 19 February, 2010  #link

    Search Status is from Quirk, who are fun people to have a beer with

    Just because they use SEOmoz data doesn’t mean it influences crawl, it could only aid URL discovery.

  53. Get yourself a smart robots.txt on 25 February, 2010  #link

    […] and other Web robots are the plague of today’s InterWebs. Some bots like search engine crawlers behave (IOW […]

  54. Increase Traffic on 26 July, 2010  #link

    This post is just another GREAT example of what I always tell some beginners I mentor, NEVER take everything you learn literally. SEOmoz and ANY other “guru” should be heard, taken into consideration, analyzed, dissected, questioned and ultimately YOU as an internet marketer should implement your own strategies and come to your own conclusions. Not everything works exactly the same for everyone. And WE ALL know that there are lots of companies that will inflate their numbers in order to appear more attractive, established and important.

    Great read!

  55. SEOmoz LDA Tool – Just 3 Points on 7 September, 2010  #link

    […] An interesting situation for instance were claims about the source of their data for Linkscape. Sebastian covered it and did Michael (actually quite a […]

  56. Walter on 7 November, 2010  #link

    Great observation. I currently use SEOmoz because they give you API for $79/month I don’t think its such a bad deal. However, I was comparing their data to Majestic at work and I realized that Majestic is able to give a lot more results, especially when it came to unique domains, they are also more up to date then SEOmoz. So, I might consider switching to Majestic, especially reading that interview at seobook, Majestic actually uses their own servers to crawl millions of web sites daily and acquire those statistics.
