Robots.txt is a text file that webmasters create to instruct web robots (typically search engine crawlers) how to crawl pages on their website. It is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users. The REP also includes directives like meta robots, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links. The robots.txt file must be placed in the root directory of the website (for example, https://example.com/robots.txt) for user agents to find it. With robots.txt, site owners can control crawler access to certain areas of the site and specify the location of sitemaps. Meta robots and x-robots are meta directives that control indexing behavior at the individual page level.
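As a sketch of how these rules work in practice, the snippet below parses a hypothetical robots.txt (the domain example.com and the /private/ path are illustrative assumptions, not from the original text) with Python's standard-library `urllib.robotparser` and checks whether a crawler may fetch two URLs:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; on a real site this file would live
# at the root, e.g. https://example.com/robots.txt.
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /

Sitemap: https://example.com/sitemap.xml
"""

parser = RobotFileParser()
# parse() accepts the file's lines, as a crawler would after fetching it.
parser.parse(robots_txt.splitlines())

# The "*" user agent matches the wildcard group above.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True
```

Well-behaved crawlers perform exactly this kind of check before requesting a page; note that robots.txt is advisory, so it restricts only robots that choose to honor it.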