# This robots.txt file controls crawling of URLs under
# https://www.himalayanglacier.com.
#
# Googlebot is disallowed from crawling anything under "/nogooglebot/".
# All other crawlers may crawl the entire site, except the "/register/"
# path, which is disallowed for everyone.

User-agent: Googlebot
Disallow: /nogooglebot/

User-agent: *
Allow: /
Disallow: /register/

Sitemap: https://www.himalayanglacier.com/sitemap_index.xml
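
# Note: Disallow and Allow values are URL paths relative to the site
# root and are matched as prefixes, so "Disallow: /register/" blocks
# /register/ and everything beneath it. A full URL is not a valid value
# for these fields and would never match a requested path.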