Google’s Adam Lasnik has published a post today on Dealing Deftly with Duplicate Content, laying out the official Google stance on the issue.
In this post, he addresses the following key issues:
- What is duplicate content?
- What isn’t duplicate content?
- Why does Google care about duplicate content?
- What does Google do about it?
- How can Webmasters proactively address duplicate content issues?
However, there are some issues that Adam’s post doesn’t address (to be fair, it’s not really Adam’s job to point them out). Here are a couple of key ones:
- When you have pages on your site that are duplicates (or near duplicates) of one another, crawlers spend time crawling them instead of other pages on your site. This can result in fewer of your pages being indexed.
- Since Google chooses only one of the duplicate pages to list, the other versions go unlisted. However, any PageRank passed to those unlisted pages still flows to them. That PageRank is effectively wasted on pages that never get indexed, instead of being consolidated on the pages that are.
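To make the second point concrete, here is a minimal sketch of the dilution effect. It is purely illustrative, not Google’s actual algorithm: it assumes, for simplicity, that the link equity pointing at an article splits evenly across however many duplicate URLs serve it.

```python
# Illustrative sketch (NOT Google's actual ranking math): how link equity
# pointing at one piece of content gets diluted when that content lives
# at several duplicate URLs instead of one canonical URL.

def equity_per_url(total_inbound_equity, num_duplicate_urls):
    """Assume inbound link equity splits evenly across duplicate URLs
    (a simplifying assumption for illustration only)."""
    return total_inbound_equity / num_duplicate_urls

total = 100.0  # hypothetical units of link equity aimed at one article

# Three URLs serving the same article: each receives only a share,
# and the shares sent to the two unlisted copies are wasted.
split = equity_per_url(total, 3)

# One canonical URL: the listed page receives everything.
consolidated = equity_per_url(total, 1)

print(split, consolidated)
```

Under these toy numbers, each duplicate URL receives only a third of the equity the single canonical URL would have received, which is the sense in which PageRank is “wasted” on unindexed copies.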
There are many other issues with duplicate content, but these are among the biggest. Duplicate content problems can be hard to resolve, particularly when they stem from poor site architecture or from the way your content management system is implemented. But if web traffic is a key part of your business, it’s well worth the effort.