@domenico.weimann
To disallow landing pages using the robots.txt file, you can follow these steps:
- Identify the landing pages you want to block search engine crawlers from accessing.
- Locate or create your site's robots.txt file. It must sit in the root directory of your website (for example, https://example.com/robots.txt).
- Open the robots.txt file in a text editor, or in an FTP client or your host's file manager if you're editing it on the server.
- Add the following lines to disallow search engine crawlers from accessing the landing pages:
User-agent: *
Disallow: /landing-page-url1
Disallow: /landing-page-url2
Replace /landing-page-url1 and /landing-page-url2 with the actual paths of the landing pages you want to block (paths are relative to your domain, so use something like /promo/offer rather than a full URL). You can add as many Disallow lines as you need.
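If you want to sanity-check these rules before uploading anything, you can run them through Python's standard-library robots.txt parser. This is just a sketch: the example.com domain and the two paths are placeholders standing in for your real landing pages.

```python
from urllib import robotparser

# The rules from above; the paths are placeholders for this example.
rules = """\
User-agent: *
Disallow: /landing-page-url1
Disallow: /landing-page-url2
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Disallowed paths should come back False, everything else True.
print(parser.can_fetch("*", "https://example.com/landing-page-url1"))  # False
print(parser.can_fetch("*", "https://example.com/pricing"))            # True
```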
Alternatively, if you want to disallow all landing pages, you can use a wildcard like this:
User-agent: *
Disallow: /*landing-page*
This will disallow any URL containing "landing-page" in its path. Note that the * wildcard is an extension to the original robots.txt standard; major crawlers such as Googlebot and Bingbot honor it, but some smaller crawlers only treat Disallow values as plain path prefixes.
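To get a feel for which paths a wildcard pattern like this covers, here is a rough illustration using shell-style matching from Python's standard library. This only approximates how wildcard-aware crawlers expand *, and the sample paths are made up:

```python
from fnmatch import fnmatchcase

# Rough illustration only: robots.txt wildcards are not identical to
# shell globbing, but the idea is the same -- "*" matches any run of
# characters. The sample paths below are made up.
pattern = "/*landing-page*"

for path in ("/landing-page-url1",
             "/promo/landing-page-spring",
             "/blog/how-we-built-it"):
    print(path, "blocked" if fnmatchcase(path, pattern) else "allowed")
```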
- Save the robots.txt file. Make sure it is named exactly "robots.txt" and is located in the root directory of your website.
- Test the robots.txt file with a robots.txt tester. Various online tools let you check the file's syntax and see how search engines interpret it, and you can also script a quick check yourself (see the sketch after this list).
- If you edited a local copy, upload the updated robots.txt file to the root directory of your website.
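As a lightweight complement to the online testers, the same standard-library parser can be pointed at the live file once it's uploaded. Again, the domain and sample URLs below are placeholders, and keep in mind that this parser treats Disallow values as plain path prefixes, so Google-style * wildcards may not be evaluated exactly the way Googlebot evaluates them.

```python
from urllib import robotparser

# Quick check against the deployed file; replace the domain and the
# sample URLs with your own.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

for url in ("https://example.com/landing-page-url1",
            "https://example.com/"):
    print(url, "crawlable" if rp.can_fetch("Googlebot", url) else "blocked")
```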
Remember that disallowing a page in robots.txt blocks crawling, not indexing, and only crawlers that choose to respect the file will obey it. Google, for instance, can still list a disallowed URL in search results (without its content) if other pages link to it, and poorly behaved bots may ignore robots.txt entirely.