I visited the 5 Star Affiliate Programs Forum and saw the post I quote below, apparently written in the thick of the duplicate-content controversy. The post seems to have made a lot of “converts” and, in fact, still remains the “authority” on the subject even now.

I had no choice but to write a counter-post to clarify the issues so that people are not continually misinformed. For the same reason, I deem it necessary to put both the OP’s post and my reply on this blog.

Happy Reading!

Hello everyone,

I have been seeing in a lot of threads a misunderstanding of duplicate content. I am writing this to eliminate the misunderstanding so that fear of it does not affect your online marketing.

I guess the first thing is to cover what most people assume it is, and then show you the reality of it.

I will give you actual numbers and links so that you can check it out yourself. Then you can see for yourself with proof that there are a lot of people out there spreading “Rumors” instead of facts.

Let’s get started. Most people think that syndicating articles or distributing them to multiple places will get the articles removed from the index of Google or Yahoo. If this were true, then 99% of articles, news releases, and all RSS feeds would be removed from the indexes.

MSN, CNN, ABC, NBC, Google News, Yahoo News would all be violating their own rules.

They use the same news stories and syndicate the news from the contributing reporter/author. And if they were breaking the rules, do you think the search engines’ algos would catch 40,000 occurrences of the same story? I am sure they would, and if these sites kept breaking the rules they would be penalized. In fact, they would long ago have been removed from the indexes.

Article directories use the same methods: they post “duplicate” articles to their directories from the same authors. And you can find hundreds of the same “duplicate” articles indexed in the search engines at any time.

Here is a test for you, and I will not show you one of my articles so that I cannot be accused of manipulating the results. Go to Free Articles and pick an article, any article (in fact, do it with several articles), and copy the title of the article.

Then put that title in quotes, like this: “Your Cold Could Be Something More”, and paste it into Google and/or Yahoo. In Yahoo, here is the search link with the results: “Your Cold Could Be Something More” – Yahoo! Search Results

Take a look. It shows up 1,400 times in Yahoo. Let’s check Google. Here is the search link for Google: “Your Cold Could Be Something More” – Google Search, and it shows up over 400 times in Google.
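The manual test described above is easy to script. Here is a minimal sketch that builds the exact-phrase search URLs; the `q` and `p` query parameters are the engines’ standard search parameters, and the function name is my own, for illustration only:

```python
from urllib.parse import quote_plus

def quoted_search_url(title: str, engine: str = "google") -> str:
    """Build an exact-phrase (quoted) search URL for an article title."""
    bases = {
        "google": "https://www.google.com/search?q=",
        "yahoo": "https://search.yahoo.com/search?p=",
    }
    # Wrap the title in double quotes so the engine matches the exact phrase,
    # then percent-encode it for use in a URL.
    return bases[engine] + quote_plus(f'"{title}"')

print(quoted_search_url("Your Cold Could Be Something More"))
# https://www.google.com/search?q=%22Your+Cold+Could+Be+Something+More%22
```

Opening each generated URL in a browser reproduces the test for any title you pick.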

Now, this article was released in 2005. Here is the publish date: Oct 22nd, 2005. If the search engines have had 23 months to filter out “duplicate content,” I am sure that at least Google or Yahoo would have removed these copies from their indexes by now. But they have not.

This is just one article. Try several yourself to see. There are millions of articles out there published in multiple directories, and the directories are fine: they have not been punished, even though by the popular definition of duplicate content they are repeat offenders.

Now we will go over what “duplicate content” is and why the filter is in place.

Duplicate content is when you have an exact copy of a site: page for page, file name for file name, image for image, code for code. An exact replica of a page or site. The filter was put in place for dynamic page spawners, duplicate websites, and doorway pages designed by black-hat SEOs and spammers trying to manipulate the natural search results.

Again, I have tested this too. Take an exact copy of a site and try to push the identical site to the top of the search engines with links. One will be removed from the index, and not just one page but the entire site. And yes, I have run this test six times with the same result every time.

Each article directory, press release site, and even the pages that host RSS feeds have different code, images, file names, JavaScript, and a multitude of other differences that stop them from being hit by the duplicate content filter.

There is more on a page than your article when a directory publishes it. Search engines read all of the code on the page, not just your article’s text. You do not have to worry about duplicate content with articles or press releases.

Hope this clears some things up! And remember, Hope is not a method… Nor is it a strategy. Study, test and stop buying all of the lies on the net. It is mostly common sense.

Thank you for your explanation on duplicate content.

I respectfully differ on a number of points.

Right from the outset, may I state that a lot of faulty deductions have been made about “duplicate content” that were never stated by Google and were never Google’s intention; hence the confusion.

Next: on what authority do I speak? I have done in-depth research on this particular topic, as shown by the post about it on my blog, whose link I have not included here to avoid being labeled self-serving. On request, however, I am willing to post the link for details on this topic.

Now, any content that is significantly the same (it does not necessarily have to be 100% identical), whether on the same site or across sites, is duplicate content. The issue, however, is that duplicate content is not necessarily penalized by the search engines.
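Google has never published how it measures whether two pieces of content are “significantly the same.” For illustration only, a standard textbook way to quantify textual overlap is word shingling with Jaccard similarity; this is a sketch of that idea, not Google’s actual method:

```python
def shingles(text: str, k: int = 3) -> set:
    """k-word shingles: all overlapping runs of k consecutive words."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "duplicate content is when you have an exact copy of a site"
reworded = "duplicate content is when you have a near copy of a site"
print(round(jaccard(original, original), 2))  # identical text scores 1.0
print(jaccard(original, reworded) < 1.0)      # a reworded copy scores lower
```

Under a scheme like this, a lightly reworded copy still scores high, which matches the intuition that content need not be 100% identical to count as a duplicate.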

About the only reason Google states for a penalty is “duplicate content with malicious intent or deceptive in origin.” Apart from this, the only other case, which is actually not a penalty, is where only one preferred version (as determined by Google) of a duplicated or replicated web page on the same site is indexed while the others are not (provided the duplicate is not deceptive in origin and did not arise with malicious intent).

Thus the illustration made by the OP arrives at the right conclusion, that duplicate content arising from duplicate article submissions is not removed from Google’s or other search engines’ indexes, but, in my view, for the wrong reasons. It is not because the code, images, file names, JavaScript, etc. on one site’s pages differ from those on other sites, but simply because it is not Google’s policy to penalize such duplicate content.

One must, however, be careful here: presence in Google’s index does not mean ranking highly in it. Again, apart from the primary index, there is also the supplemental index.

Even though this is also not directly stated by Google, it is unlikely that content which is significantly the same as another piece will feature on the first page of Google. Distinguish this, however, from “spun” articles or the application of “article leverage,” which may have made the articles significantly different even if they bear the same title.

Regarding content on the same site, it is likewise not because of the site’s code, images, file names, JavaScript, etc. that duplicate content goes unindexed, but purely because of Google’s stated policy. Even if the code, images, file names, JavaScript, etc. on different pages of the same site differ, as happens across different websites, the duplicate content will still not be indexed as long as it is on the same domain.
