We’re seeing errors flagged in Search Console because pages are indexed even though they’re blocked in our robots.txt file. A better way to keep them out of the index would be support for adding a meta noindex tag to individual pages and/or directories, with both follow and nofollow available as options.

Support for this would be great. Robots.txt is not a reliable solution for indexation management, because it only blocks crawling; Google can still index blocked URLs, especially ones that have been indexed in the past.

We've resolved this issue with a script that checks the URL against a series of rules and injects or updates the robots meta element for the page, but a native solution would of course be better, since it wouldn't depend on JavaScript execution.
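For anyone who needs a stopgap in the meantime, here is a minimal sketch of that kind of rule-based meta robots injector. The URL patterns and directive values below are illustrative placeholders, not our production ruleset:

```typescript
// Minimal sketch: inject or update <meta name="robots"> based on URL rules.
// The patterns and directives here are hypothetical examples.
type RobotsRule = { pattern: RegExp; directives: string };

const rules: RobotsRule[] = [
  { pattern: /^\/internal-search\//, directives: "noindex, follow" },
  { pattern: /^\/checkout\//, directives: "noindex, nofollow" },
];

function applyRobotsDirectives(): void {
  const match = rules.find((r) => r.pattern.test(window.location.pathname));
  if (!match) return; // no rule matched; leave the page untouched

  // Reuse an existing <meta name="robots"> tag if present, otherwise create one.
  let meta = document.querySelector<HTMLMetaElement>('meta[name="robots"]');
  if (!meta) {
    meta = document.createElement("meta");
    meta.name = "robots";
    document.head.appendChild(meta);
  }
  meta.content = match.directives;
}

applyRobotsDirectives();
```

As noted above, this only helps if Googlebot actually renders the page's JavaScript, which is why a server-rendered tag is the better solution.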



Hi @LeslieH, thanks for sharing your idea with us. Currently it is possible to set meta robots tags for each category page from the control environment.

What other pages would you like to have that flexibility for? We have also noticed some conflicts between robots.txt and canonical tags on certain pages and are actively working to improve that. You may also find this idea interesting:

I'll set this idea to 'open' to collect more votes and feedback.

Updated idea status: New → Open