
Google: Controlling Crawling and Indexing

djbaxter
Guest

Automated website crawlers are powerful tools that help crawl and index content on the web. As a webmaster, you may wish to guide them toward your useful content and away from irrelevant content. The methods described in these documents are the de facto web-wide standards for controlling the crawling and indexing of web-based content: the robots.txt file controls crawling, while the robots meta tag and the X-Robots-Tag HTTP header control indexing. The robots.txt standard predates Google and is the accepted method of controlling crawling of a website.
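As a quick illustration of the three mechanisms (the domain and paths here are hypothetical, not from the original document), a robots.txt file sits at the site root and controls crawling, while the indexing directives travel with each page or response:

    # https://example.com/robots.txt -- crawl rules for all crawlers
    User-agent: *
    Disallow: /private/    # keep crawlers out of /private/
    Allow: /               # everything else may be crawled

    <!-- robots meta tag in a page's <head>: page may be crawled but not indexed -->
    <meta name="robots" content="noindex">

    # Equivalent X-Robots-Tag HTTP response header (useful for non-HTML files such as PDFs)
    X-Robots-Tag: noindex

Note that robots.txt only controls crawling: a URL blocked there can still end up in the index if other pages link to it, so a noindex directive must be served via the meta tag or the HTTP header on a page crawlers are allowed to fetch.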

This document describes the current usage of the robots.txt crawler-control directives, as well as the indexing directives, as they are used at Google. These directives are generally supported by all major web crawlers and search engines.
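To see how a compliant crawler applies these rules in practice, here is a minimal Python sketch using the standard library's urllib.robotparser; the domain and user-agent string are placeholders, not anything specified by the original document:

    from urllib import robotparser

    # Fetch and parse the site's robots.txt (hypothetical domain).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # A well-behaved crawler checks every URL against the rules before fetching it.
    for url in ("https://example.com/", "https://example.com/private/page.html"):
        if rp.can_fetch("ExampleBot", url):
            print("allowed:", url)
        else:
            print("blocked:", url)

With the robots.txt shown earlier, the first URL would be allowed and the second blocked for any user-agent.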

Specific topics:
  1. Getting started
  2. Robots.txt specification
  3. Robots meta tag and X-Robots-Tag specification

