The robot exclusion standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website that is otherwise publicly viewable. Robots are often used by search engines to categorize and archive websites, or by webmasters to proofread source code. The standard complements Sitemaps, a robot inclusion standard for websites.
A robots.txt file on a website will function as a request that specified robots ignore specified files or directories in their search. This might be, for example, out of a preference for privacy from search engine results, or the belief that the content of the selected directories might be misleading or irrelevant to the categorization of the site as a whole, or out of a desire that an application only operate on certain data.
For websites with multiple sub-domains, each sub-domain must have its own robots.txt file. If example.com had a robots.txt file but a.example.com did not, the rules for example.com would not apply to a.example.com.
The protocol, however, is purely advisory. It relies on the cooperation of the web robot, so marking an area of a site out of bounds with robots.txt does not guarantee privacy. Some website administrators have tried to use the robots.txt file to make private parts of a website invisible to the rest of the world, but the file is necessarily publicly available and its content is easily checked by anyone with a web browser.
The parts of the site that should not be accessed are specified in a file called robots.txt in the top-level directory of the website. The robots.txt patterns are matched by simple substring comparison against the beginning of the URL path, so care should be taken to make sure that patterns matching directories have the final ‘/’ character appended; otherwise all files with names starting with that substring will match, rather than just those in the directory intended.
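For instance, assuming a hypothetical /help directory:
Disallow: /help  # matches /help.html as well as /help/index.html
Disallow: /help/ # matches only files under /help/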
This example allows all robots to visit all files because the wildcard “*” specifies all robots:
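User-agent: *
Disallow: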
This example keeps all robots out:
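User-agent: *
Disallow: /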
The next example tells all crawlers not to enter four directories of a website:
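# The directory names below are placeholders
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/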
Example that tells a specific crawler not to enter one specific directory:
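# Both the robot name and the directory are placeholders
User-agent: BadBot
Disallow: /private/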
Example that tells all crawlers not to enter one specific file:
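# The file path is a placeholder
User-agent: *
Disallow: /directory/file.html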
Note that all other files in the specified directory will be processed.
Example demonstrating how comments can be used:
# Comments appear after the “#” symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out
The Sitemap parameter is supported by major crawlers (including Google, Yahoo, MSN, and Ask). It specifies the location of the site’s list of URLs. The Sitemap parameter is independent of the User-agent parameter, so it can be placed anywhere in the file.
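For example, using a placeholder URL:
Sitemap: http://www.example.com/sitemap.xml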
An explanation of how to author sitemap files can be found at sitemaps.org.
Several major crawlers support a Crawl-delay parameter, set to the number of seconds to wait between successive requests to the same server:
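User-agent: *
Crawl-delay: 10 # an illustrative value: wait 10 seconds between requests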
Some major crawlers support an Allow directive which can counteract a previous Disallow directive. This is useful when you disallow an entire directory but still want some HTML documents in that directory crawled and indexed. For example:
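# The directory and file names are placeholders; Allow is listed before Disallow
# so crawlers that apply the first matching rule still fetch the file
Allow: /directory1/myfile.html
Disallow: /directory1/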
An Extended Standard for Robot Exclusion has been proposed, which adds several new directives, such as Visit-time and Request-rate. For example:
Request-rate: 1/5 # maximum rate is one page every 5 seconds
Visit-time: 0600-0845 # only visit between 6:00 AM and 8:45 AM UT (GMT)
The first version of the Robot Exclusion standard does not mention anything about the “*” character in the Disallow: statement. Modern crawlers like Googlebot and Slurp recognize strings containing “*”, while MSNbot and Teoma interpret it in different ways.
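For illustration, a wildcard pattern might look like the following (the path is a placeholder, and crawlers that do not support “*” may treat it literally):
User-agent: *
Disallow: /private*/ # intended to match /private/, /private1/, /private-files/, and so on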