For web publishers who have worked hard to produce original content, it can be very frustrating to discover that others have piggy-backed their way to success on their efforts.
Scraping – the practice of copying content from one site and reusing it on another – can take a serious toll on businesses, slowing down servers and eating up valuable bandwidth. It is often unscrupulous rivals who carry out the scraping, leaving the original site lagging behind in Google’s rankings.
Now a new tool, the Google Scraper Report, promises some help to those affected by scraping. The tool itself does nothing to fix the situation, but it allows users whose content has been scraped to submit their own URL along with the URL of the site that has outranked them with the copied content. That information is passed on to Google to evaluate, and this simple process may yield a useful set of data on offending websites, allowing further action to be taken.
As yet, the tool does nothing more than collate information on which sites have been reported as scrapers. Offending websites will not be removed from search results, although it is possible the data will be used to improve the ranking system, promoting original sites above scraper sites. There is also a potential drawback: the tool could allow a site to be falsely reported as a scraper when this is not the case. Even so, the new tool suggests that Google is taking steps to meet a rising demand for higher-quality search results, in keeping with announcements from competitors Bing and Yahoo about plans for significant improvements to their own search engines.