Monday, May 20, 2024

Are Google's Content Spiders Being Duped?

Google is reported to have experimented with an algorithmic tweak that creates a more aggressive duplicate content filter, much to the chagrin of webmasters who operate in-depth, content-rich websites. The fear is that duplicate content is being penalized not only across sites but also within them.

Many feel Google’s revamped spiders are unable to discern true duplicate content from highly detailed subject matter spread across closely related pages within a single website, and thereby inadvertently penalize legitimate sites by dropping those pages from search results.

As voiced in this thread at WebMasterWorld and echoed by a string of other webmasters who have noticed a steady decline in SERP placement over the past year, the so-called Google “miscalculation” has caused some support pages to be dropped altogether.

For example, suppose a site provides information on an array of related products that fall under the same larger category. Within that category there are various types of similar (but not identical) products, each with its own distinguishing attributes. Webmasters are increasingly concerned that Google is tagging those individual product pages as duplicate content.

“Caveman” provides the hypothetical example:

“We help a client with a scientific site about insects. There are many types of bees. And then there are regional differences in those types of bees, and different kinds of bees within each type and regional variation (worker, queen, etc.).

Now, if you research bees, and want to search on a certain type of bee – and in particular a worker bee from the species that does its work in a certain region of the world, then you’d like to find the page on that specific bee. Well, you used to be able to find that page, near the top of the SERP’s, when searching for it. Then in mid Dec, you could find it, but only somewhere in the lower part of the top 20 results.
Now, G is not showing any pages on bees from that site.”

Caveman intimates that the problem persisted (and perhaps worsened) over the following 10 months.

Four pages’ worth of responses, including a range of examples and theories, reveal that little is known about why this is occurring, or whether it is indeed occurring exactly the way Caveman describes.

Pontifex plays devil’s advocate with this explanation:

“If you have some ‘bees’ which have (e.g.) 200 words of information and only 15 words (color, function, region) that differ, your 3 subpages per bee are highly redundant and your site might not be structured well.

That means your previous top position in Google was undeserved, because you have highly redundant pages around one topic, which could be on one page, not four. The three pages around that one topic (long tail, as you said) gave three backlinks to the main page, I presume. That boosted the main page up, artificially though.

From what you describe, I am even more sure that the dupe filter especially applies to duplicate pages interlinking, fighting the effect that people try to spin off more pages from one content source, creating ‘on-site linkfarms’.”
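
To make Pontifex’s arithmetic concrete, here is a minimal Python sketch of one common near-duplicate technique, word-shingle Jaccard similarity. This is only an illustration of the general idea being discussed, not a description of Google’s actual (unpublished) filter; the shingle size and similarity threshold below are arbitrary values chosen for the example.

```python
# Minimal sketch of near-duplicate detection via word-shingle Jaccard
# similarity. Illustrative only: NOT Google's actual filter. The shingle
# size (3) and the 0.8 threshold are arbitrary, hypothetical choices.

def shingles(text, size=3):
    """Return the set of overlapping word n-grams (shingles) in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def looks_duplicate(page_a, page_b, threshold=0.8):
    """Flag two pages whose shingle overlap meets or exceeds the threshold."""
    return jaccard(shingles(page_a), shingles(page_b)) >= threshold

# Simulate Pontifex's scenario: two ~200-word pages sharing all but ~15 words.
common = [f"word{i}" for i in range(185)]        # stand-in for the shared copy
page_a = " ".join(common + [f"worker{i}" for i in range(15)])
page_b = " ".join(common + [f"queen{i}" for i in range(15)])

print(f"similarity: {jaccard(shingles(page_a), shingles(page_b)):.2f}")  # ~0.86
print(looks_duplicate(page_a, page_b))           # True: flagged as near-duplicate
```

Two pages sharing roughly 185 of 200 words score around 0.86 here, comfortably above the illustrative threshold, which is exactly the kind of overlap a filter like the one Pontifex describes would catch.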

Other explanations toy with the idea that websites with a large number of pages are being targeted as well, since sites with only a few pages don’t seem to be affected.

Have you experienced similar issues with Google’s apparent algorithm tweak? Discuss in WebProWorld.
