Happy Birthday, Robots.txt! The file best known for telling search engine crawlers which pages to crawl and which to ignore is celebrating its 20th anniversary.
The Robots.txt concept was conceived in 1994 by Martijn Koster, who wanted a way to tell search engines like Lycos and AltaVista which pages on his website to crawl, and which to ignore. Robots.txt was quickly adopted by most search engines, and is still commonly used today.
How can I use Robots.txt today?
When search engines like Google scan your website, they first check whether a Robots.txt file is present. Through this file, you can use the Disallow directive to tell a search engine which pages you want it to ignore.
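For example, a minimal robots.txt placed at the root of your site (e.g. example.com/robots.txt) might look like the following, where /private/ is just a hypothetical path for illustration:

    User-agent: *
    Disallow: /private/

Here User-agent: * addresses all crawlers, and each Disallow line names a path they should skip. A lone Disallow: / would block the entire site, while an empty Disallow: allows everything.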
Why would I want search engines to ignore content on my website?
Simply put, if you don’t want certain pages to be indexed and appear in search engine results, you should use a Robots.txt file. On the other hand, if you have a smaller website with a limited number of pages, be careful about which pages you tell crawlers to ignore: disallowing too many pages could reduce your chances of being found for relevant terms.
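As a hypothetical illustration, a small site is usually better served by disallowing one specific area, such as a checkout flow:

    User-agent: *
    Disallow: /checkout/

than by a sweeping rule like Disallow: /, which asks crawlers to skip every page on the site.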
For more information on Robots.txt, visit https://www.robotstxt.org/.