
SEO Improvement: Linking

Related products:CC Analytics & Reporting
  • November 4, 2021
  • 3 replies
  • 67 views

Suvi Lehtovaara

We’ve had an SEO audit done on our community. There are a number of improvements we’ll present here as ideas. This one is about linking.

Google has been shown to ignore robots.txt on more than one occasion, so simply blocking these URLs in robots.txt is not a surefire solution. A more robust approach relies on rel="nofollow" and robots meta tags to guide crawling and indexation behaviour. We therefore propose a few ideas that would help steer crawlers towards the most relevant content. Ideally these would be integrated as toggleable functions, so that users can customise crawl behaviour based on their needs.
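To illustrate the distinction: a noindex directive only works if it is served on the page itself (robots meta tag or X-Robots-Tag header) and the page remains crawlable. A minimal sketch in Python, assuming nothing about inSided's own implementation, of extracting the directives a crawler would actually see:

```python
from html.parser import HTMLParser

# Minimal sketch (not inSided code): extract the robots meta directives
# a crawler would see on a rendered page. Google only honours a noindex
# served this way if the page itself can be fetched and parsed.
class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives = [
                d.strip().lower() for d in a.get("content", "").split(",")
            ]

parser = RobotsMetaParser()
parser.feed('<head><meta name="robots" content="noindex, nofollow"></head>')
```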

  • Sorting URLs - Allow a rel="nofollow" attribute on sorting <a href> links, and add a meta robots noindex/nofollow directive to pages where a ?sort query is present. This will stop crawlers from going over multiple iterations of the same pages.
  • Post IDs - Allow a rel="nofollow" attribute on links featuring ?postid queries, and add a meta robots noindex/nofollow directive to pages where a ?postid query is present.
  • Member pages - Allow a rel="nofollow" attribute on URLs referring to them.
  • Search pages - Allow a meta robots noindex/nofollow directive on search pages.
  • Pagination - Add a "jump to last page" button as part of pagination.
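The query-parameter rules above could be sketched roughly as follows. The parameter names (`sort`, `postid`) come from the idea itself; this is an illustration of the proposed behaviour, not an existing platform feature:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical sketch of the toggleable behaviour proposed above: URLs
# carrying a ?sort or ?postid query get a noindex/nofollow robots meta
# directive, while canonical pages stay indexable.
PARAMETERISED_DUPLICATES = {"sort", "postid"}  # assumed query keys

def robots_directive(url: str) -> str:
    """Return the robots meta content a page at this URL should serve."""
    query_keys = set(parse_qs(urlsplit(url).query))
    if query_keys & PARAMETERISED_DUPLICATES:
        return "noindex, nofollow"
    return "index, follow"
```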

3 replies

dilekgencer
  • Gainsight Employee ⭐️
  • November 29, 2021

Hi @Suvi Lehtovaara, thanks for your idea. We will set up a meeting with the developers to review ideas around SEO.
I'll keep this idea set to 'Open' to collect more votes and feedback.


dilekgencer
  • Gainsight Employee ⭐️
  • November 29, 2021
Updated idea status: New → Open

  • Contributor ⭐️⭐️
  • January 25, 2023

Here’s additional information to this idea. We wanted to share a use case and potential solution.

The way indexation and canonicalisation are arranged on inSided's pages isn't entirely SEO friendly. robots.txt, the X-Robots-Tag header, and the robots meta element are often treated as identical to one another, but there is actually a fundamental difference between them.
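The difference, in short: robots.txt answers "may this URL be fetched?", while a noindex directive (robots meta tag or X-Robots-Tag header) answers "may this page be indexed?", and the latter is only seen if fetching is allowed in the first place. A small sketch using Python's standard-library robots.txt parser (the hostname is illustrative):

```python
from urllib import robotparser

# A URL disallowed in robots.txt may not be fetched, so any noindex
# directive on that page is never read by the crawler. The URL can
# therefore still be indexed purely from external links pointing at it.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /members/",
])

blocked = rp.can_fetch("*", "https://community.example.com/members/jane")
allowed = rp.can_fetch("*", "https://community.example.com/topic/1")
```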

Sometimes we have pages or sections that contain duplicate or verbatim copies of content (e.g. /members) that carry links valuable for PageRank, but that, generally speaking, we don't want indexed.

A potential solution could be a control panel that lets us specify which robots status each segment should get, e.g.:

  • Robots status for /members pages: [ Index, Follow ], [ Noindex, Follow ], [ Noindex, Nofollow ].
  • Robots status for search pages: [ Index, Follow ], [ Noindex, Follow ], [ Noindex, Nofollow ].

  • Add a control panel that allows us to apply rel="nofollow" to internal links.
  • Add a rel="ugc" qualifier to links in user-generated content.
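A rough sketch of what such a per-segment control panel setting could render, assuming the segment paths and option names from the idea above; none of this is an existing inSided setting:

```python
# Hypothetical per-segment robots settings, as they might be chosen in
# the requested control panel, rendered into each page's meta tag.
SEGMENT_ROBOTS = {
    "/members": "noindex, follow",   # keep link equity, drop pages from the index
    "/search": "noindex, nofollow",
}

def robots_meta_tag(path: str) -> str:
    """Return the robots meta tag a page at this path should serve."""
    for prefix, directive in SEGMENT_ROBOTS.items():
        if path.startswith(prefix):
            return f'<meta name="robots" content="{directive}">'
    return '<meta name="robots" content="index, follow">'
```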

Sources:

https://www.searchenginejournal.com/google-pages-blocked-robots-txt-will-get-indexed-theyre-linked/255911/ 
https://developers.google.com/search/docs/crawling-indexing/block-indexing