Robots.txt Generator
Configure crawler access for search engines and AI bots, then download your robots.txt file.
Quick Presets
Crawler Access Control
User-agent: * (All crawlers)
Default rule for all unspecified bots
Search Engines
Googlebot
Google's primary web crawler
Bingbot
Microsoft Bing's web crawler
YandexBot
Yandex search engine crawler
DuckDuckBot
DuckDuckGo's web crawler
Baiduspider
Baidu search engine crawler
Slurp
Yahoo's web crawler
AI Crawlers
GPTBot
OpenAI's web crawler for training data
ClaudeBot
Anthropic's web crawler
Google-Extended
Google's AI training crawler
CCBot
Common Crawl bot used for AI training
PerplexityBot
Perplexity AI's web crawler
Bytespider
ByteDance/TikTok's web crawler
Amazonbot
Amazon's web crawler for Alexa
Social & Other
Applebot
Apple's web crawler for Siri and Spotlight
Twitterbot
Twitter/X link preview crawler
Facebot
Facebook's link preview crawler
LinkedInBot
LinkedIn's link preview crawler
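A typical policy built from these toggles allows traditional search engines while opting out of AI training crawlers. The specific bot choices below are illustrative, not a recommendation:

```
# Allow traditional search engines
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

# Block AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Default rule for all other bots
User-agent: *
Disallow: /admin/
```

Note that a crawler uses the most specific matching User-agent group and ignores the `*` group entirely, so each named bot's group must be complete on its own.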
Additional Settings
Adding a sitemap URL helps search engines discover all your pages.
Crawl-delay sets the minimum number of seconds between requests. Google ignores this directive; Bing and Yandex respect it.
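With both settings filled in, the generator appends directives like the following (the sitemap URL and the 10-second delay are placeholders):

```
Sitemap: https://example.com/sitemap.xml

User-agent: *
Crawl-delay: 10
```

The Sitemap directive is standalone and can appear anywhere in the file; Crawl-delay belongs inside a User-agent group.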
Generated robots.txt
User-agent: *
Allow: /
Disallow: /admin/
Disallow: /private/
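A quick sketch for sanity-checking a generated file with Python's standard-library parser. The bot name `MyBot` and the paths are illustrative; note that `urllib.robotparser` applies the first matching rule in file order rather than the longest match, so the Disallow lines are listed before `Allow: /` here:

```python
from urllib import robotparser

# Sample rules mirroring the generated output above. Disallow lines come
# first because urllib.robotparser returns the first rule that matches.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /private/
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())  # parse lines directly, no network fetch

# "MyBot" is a hypothetical crawler name; it falls through to the * group.
print(rp.can_fetch("MyBot", "https://example.com/"))        # True
print(rp.can_fetch("MyBot", "https://example.com/admin/"))  # False
```

This is useful as a pre-deployment check that the downloaded file actually blocks and allows the paths you intended.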