`robots.txt` Generator
Easily create or update your `robots.txt` file to control search engine crawler access to your website.
How to Use the `robots.txt` Generator
The `robots.txt` file is a set of instructions for web crawlers (like Googlebot) that tells them which parts of your site they can or cannot crawl. A properly configured `robots.txt` is essential for good SEO.
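For reference, a minimal `robots.txt` has the following shape; the path and sitemap URL below are placeholders, not values the generator requires:

```
# Rules for all crawlers
User-agent: *
Disallow: /admin/

# Location of the XML sitemap
Sitemap: https://yourwebsite.com/sitemap.xml
```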
- 1. Set Global Directives:
  - Default User-agent: Usually `*` (for all bots). You can specify a particular bot like `Googlebot` if needed.
  - Crawl-delay (optional): Specify a delay in seconds between consecutive requests from a crawler. Note: Googlebot does not support this, but other crawlers might.
  - Sitemap URL (optional): Provide the full URL to your XML sitemap, e.g., `https://yourwebsite.com/sitemap.xml`.
- 2. Add Custom Directives:
  - User-agent: Specify which bot this rule applies to (e.g., `*` for all, or `Bingbot`).
  - Directive: Choose "Disallow" to prevent crawling a path, or "Allow" to explicitly permit a path inside a disallowed section.
  - Path: Enter the path you want to control (e.g., `/admin/`, `/private/page.html`). Use `/` for the root directory. Wildcards (`*`) can be used for patterns.
  - Click "Add Directive" to add the rule to your list. You can remove a directive by clicking its remove icon. A sample of the directives this produces is shown after this list.
- 3. Generate `robots.txt`: Click the "Generate `robots.txt`" button to see the final content.
- 4. Copy or Download: You can either copy the generated text to your clipboard or download it as a `robots.txt` file.
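As an illustration, global settings like those in step 1 combined with a few custom directives from step 2 would produce output along these lines. The paths, wildcard pattern, and URLs are placeholders, and the exact formatting of this generator's output may differ slightly:

```
User-agent: *
Crawl-delay: 10
# Block the admin area, but allow one page inside it
Disallow: /admin/
Allow: /admin/help.html
# Wildcard pattern: block URLs containing a session parameter
Disallow: /*?sessionid=

User-agent: Bingbot
Disallow: /private/

Sitemap: https://yourwebsite.com/sitemap.xml
```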
Important Notes:
- `robots.txt` is a suggestion, not a mandate. Malicious bots may ignore it.
- Disallowing a page in `robots.txt` does not guarantee it won't be indexed; if other sites link to the page, search engines may still index its URL. To prevent indexing, use a `noindex` meta tag and leave the page crawlable so the tag can be seen.
- Place your `robots.txt` file in the root directory of your website (e.g., `https://yourwebsite.com/robots.txt`).
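To make the `noindex` point concrete, the standard way to keep a crawlable page out of the index is the robots meta tag below; the same signal can also be sent as an `X-Robots-Tag: noindex` HTTP response header:

```
<!-- Place in the <head> of a page that should be crawlable but not indexed -->
<meta name="robots" content="noindex">
```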