A robots.txt file tells search engine crawlers which URLs they may access on your site. The file must live in the root directory of the website and be reachable at https://example.com/robots.txt.
In robots.txt you can allow or disallow specific URLs or paths, and a well-behaved bot reads these rules before crawling.
A typical robots.txt looks like this:
example.com/robots.txt
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /test/
Sitemap: https://example.com/sitemap_index.xml
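To see how a crawler would interpret the rules above, you can parse them with Python's standard `urllib.robotparser` module. This is a minimal sketch: the rules are loaded directly as text (rather than fetched over HTTP), and example.com is the placeholder domain from the sample file.

```python
from urllib.robotparser import RobotFileParser

# The rules from the example robots.txt above (example.com is a placeholder)
rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /test/
Sitemap: https://example.com/sitemap_index.xml
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Disallowed paths are blocked for every user agent matched by "*"
print(parser.can_fetch("*", "https://example.com/wp-admin/options.php"))  # False
print(parser.can_fetch("*", "https://example.com/blog/hello-world"))      # True
```

In production, a crawler would call `parser.set_url("https://example.com/robots.txt")` followed by `parser.read()` to fetch the live file instead of parsing inline text.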