
Robots.txt Generator

Create a robots.txt file to control search engine crawlers

Configure Robots.txt

Basic Settings

Common Directories to Block

Enter each path on a new line (e.g., /cgi-bin/)

Full URL to your sitemap file

Time to wait between requests (0-60 seconds). Leave empty for no delay.

Generated robots.txt
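For example, a file generated for a typical configuration might look like this (the paths and sitemap URL below are illustrative):

User-agent: *
Disallow: /cgi-bin/
Disallow: /admin/
Disallow: /tmp/
Crawl-delay: 10
Sitemap: https://example.com/sitemap.xml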

How to Use Your robots.txt File

Step 1: Copy or Download

Copy the generated robots.txt content or download it as a file using the buttons above.

Step 2: Upload to Root Directory

Place the robots.txt file in the root directory of your website. It should be accessible at https://yourdomain.com/robots.txt
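Crawlers request the file only from the root of your domain, so a robots.txt placed in a subdirectory is silently ignored:

https://example.com/robots.txt (read by crawlers)
https://example.com/files/robots.txt (ignored by crawlers)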

Step 3: Test Your File

Use the robots.txt report in Google Search Console (the replacement for the retired robots.txt Tester tool) to verify that your file can be fetched and doesn't contain any errors.

Step 4: Monitor Crawl Stats

Check your website's crawl statistics in Google Search Console to ensure search engines are respecting your robots.txt rules.

About Robots.txt Generator

Our Robots.txt Generator helps you create a properly formatted robots.txt file for your website. The robots.txt file is a standard that websites use to tell web crawlers and search engine bots which areas of a site should or should not be crawled. Note that it controls crawling, not indexing: a URL blocked in robots.txt can still appear in search results if other sites link to it.

Use this tool to control search engine access to your site, keep crawlers out of low-value areas, manage crawl budget, and improve your site's SEO performance. The generator creates a valid robots.txt file that follows the Robots Exclusion Protocol (RFC 9309).

Robots.txt Syntax Guide

User-agent: *

Applies rules to all web crawlers. You can specify individual bots like "Googlebot" or "Bingbot" instead of using *.
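For example, you can give one crawler its own rules while every other crawler falls back to the * group (the paths here are illustrative):

User-agent: Googlebot
Disallow: /no-google/

User-agent: *
Disallow: /private/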

Disallow: /path/

Tells crawlers not to access the specified path. Use "Disallow: /" to block all content, or leave empty ("Disallow:") to allow everything.
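The two extremes look like this:

# Block the entire site for all crawlers
User-agent: *
Disallow: /

# Allow the entire site for all crawlers
User-agent: *
Disallow: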

Allow: /path/

Explicitly allows access to a path. Useful for allowing access to a subdirectory within a blocked directory.
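For example, to block a directory while keeping one of its subdirectories crawlable (the paths here are illustrative):

User-agent: *
Disallow: /private/
Allow: /private/public/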

Crawl-delay: 10

Asks crawlers to wait the specified number of seconds between requests. Support varies: Bing honors the directive, but Googlebot ignores it entirely.
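For example, to ask Bing's crawler to pause ten seconds between requests:

User-agent: bingbot
Crawl-delay: 10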

Sitemap: https://example.com/sitemap.xml

Points crawlers to your XML sitemap location. Helps search engines discover and index your content more efficiently.
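The directive takes an absolute URL and can be repeated, one line per sitemap. Sitemap lines are independent of User-agent groups and can appear anywhere in the file (the URLs below are illustrative):

Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-news.xml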

Robots.txt Best Practices:

• Always place robots.txt in your root directory

• Name the file robots.txt in all lowercase, and remember that paths are case-sensitive

• Test your robots.txt with Google Search Console

• Don't block CSS and JavaScript files needed for rendering

• Use the Sitemap directive to help search engines find your content

• Block duplicate content and low-value pages

• Don't use robots.txt for sensitive content (use authentication)

• Be specific with your disallow rules to avoid blocking too much (see the example after this list)

• Regularly review and update your robots.txt as your site evolves

• Include multiple sitemap URLs if you have multiple sitemaps
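Putting several of these practices together, a complete robots.txt for a typical site might look like this (all paths and URLs are illustrative):

# Keep crawlers out of low-value areas; CSS and JavaScript stay crawlable
User-agent: *
Disallow: /admin/
Disallow: /cart/
Disallow: /search/
Allow: /search/help/

# One Sitemap line per sitemap
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-blog.xml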