How to disallow landing pages using a robots.txt file?

by domenico.weimann, in category: SEO, 6 months ago



1 answer

by genevieve_boehm, 6 months ago

@domenico.weimann 

To disallow landing pages using the robots.txt file, you can follow these steps:

  1. Identify the landing pages you want to block search engines from crawling.
  2. Create or locate your website's robots.txt file. It belongs in the root directory of the site (for example, at https://www.example.com/robots.txt).
  3. Open the robots.txt file in a text editor, or connect with an FTP/SFTP client if you need to edit it on the server.
  4. Add the following lines to block search engine crawlers from the landing pages (see the first sketch after this list for one way to generate the file):
     User-agent: *
     Disallow: /landing-page-url1
     Disallow: /landing-page-url2
     Replace /landing-page-url1 and /landing-page-url2 with the actual paths of the landing pages you want to disallow; you can add as many Disallow lines as you need. Alternatively, to disallow every URL containing "landing-page" in its path, use a wildcard:
     User-agent: *
     Disallow: /*landing-page*
  5. Save the robots.txt file. Make sure it is named exactly "robots.txt" and is located in the root directory of your website.
  6. Test the robots.txt file with a robots.txt tester. Various online tools let you check the file's syntax and see how search engines will interpret it; the second sketch after this list shows a quick local check with Python as well.
  7. Upload the robots.txt file to the root directory of your website if you edited it locally.
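
If you manage the file programmatically, here is a minimal sketch in Python that builds and writes the rules from step 4. The paths are hypothetical placeholders, not real URLs from your site:

    # Minimal sketch: build the robots.txt content for the rules in step 4 and save it.
    # The paths below are placeholders -- substitute your real landing-page paths.
    blocked_paths = [
        "/landing-page-url1",
        "/landing-page-url2",
    ]

    lines = ["User-agent: *"] + [f"Disallow: {path}" for path in blocked_paths]
    content = "\n".join(lines) + "\n"

    # Write the file locally; it still has to be deployed to the site root (step 7)
    # so it is reachable at https://your-site.example/robots.txt.
    with open("robots.txt", "w", encoding="utf-8") as fh:
        fh.write(content)

    print(content)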
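
And to sanity-check the rules locally before (or after) uploading, Python's standard-library urllib.robotparser can parse the same rules and report which URLs a compliant crawler would skip. The example.com URLs below are again placeholders:

    # Minimal sketch: verify which URLs the rules from step 4 would block,
    # using Python's built-in robots.txt parser.
    from urllib.robotparser import RobotFileParser

    rules = [
        "User-agent: *",
        "Disallow: /landing-page-url1",
        "Disallow: /landing-page-url2",
    ]

    rp = RobotFileParser()
    rp.parse(rules)  # to test the live file instead: rp.set_url("https://example.com/robots.txt"); rp.read()

    for url in [
        "https://example.com/landing-page-url1",
        "https://example.com/landing-page-url2",
        "https://example.com/",  # not disallowed, should stay crawlable
    ]:
        print(url, "-> allowed:", rp.can_fetch("*", url))

    # Note: the standard-library parser follows the original robots.txt specification,
    # so it may not honor wildcard rules such as "Disallow: /*landing-page*";
    # use a search engine's own robots.txt tester to confirm wildcard behavior.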


Remember that disallowing a page in the robots.txt file does not guarantee it will stay out of search results. Most search engines respect robots.txt directives and will not crawl the disallowed pages, but a blocked URL can still be indexed (without its content) if other sites link to it, and not every crawler obeys the file. If you need to keep a landing page out of search results entirely, use a noindex meta tag or X-Robots-Tag header and leave the page crawlable so search engines can see that directive.