The robots.txt file is used to tell search engines to ignore certain pages when they crawl the web looking for new websites, pages, and blog posts. Having a robots.txt file is only necessary when you have pages you want to block from crawling. Don’t create a robots.txt file unless you have URLs that you do not want Google and other search engines to crawl.
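For reference, a basic robots.txt file is just a list of user-agent rules and Disallow directives. This is a minimal sketch; the paths and domain below are placeholders, not a recommendation for your own site:

```
# Block all crawlers from a private directory (example path only)
User-agent: *
Disallow: /private/

# Block a specific crawler from one page
User-agent: Googlebot
Disallow: /drafts/old-post.html

# Optionally point crawlers to your sitemap
Sitemap: https://www.example.com/sitemap.xml
```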
Recently, the robots.txt Tester in Google Webmaster Tools received an update that lets you:
- See the errors that are preventing Google from crawling pages on your website
- Edit your robots.txt file
- Test whether specific URLs are blocked
- View older versions of your file
Even if you think your robots.txt file is working properly, double-checking it for errors or warnings can save you time and trouble in the long run. “Some of these issues can be subtle and easy to miss,” Google’s John Mueller wrote. “While you’re at it, also double-check how the important pages of your site render with Googlebot, and if you’re accidentally blocking any JS or CSS files from crawling.”
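If you want to spot-check blocked URLs outside of the tester, Python’s standard library can parse a live robots.txt file. This is a rough sketch under placeholder assumptions: example.com and the sample URLs stand in for your own site and pages.

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the live robots.txt file (example.com is a placeholder)
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# Check whether Googlebot may fetch a few sample URLs, including a CSS
# file, since blocked assets can affect how pages render for Googlebot
urls_to_check = [
    "https://www.example.com/blog/latest-post/",
    "https://www.example.com/assets/main.css",
]
for url in urls_to_check:
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url}: {'allowed' if allowed else 'BLOCKED'}")
```

A quick script like this doesn’t replace the tester, but it can confirm whether a Disallow rule actually matches the URLs you care about.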