Has anyone developed an ongoing maintenance plan for their Gainsight instance they’d be willing to share? We went through our SFDC->NXT transition this Spring and it just reiterated how quickly old reports/dashboards/rules, etc. can build up in our instance. Now that we’ve ‘started fresh’ we’re trying to figure out the best way to keep our instance clean by setting up a more formal weekly/monthly/quarterly/annual maintenance schedule. Even if you don’t have this established today, what would you want to include?

Oftentimes it’s difficult to determine how and why a rule, report, JO program, or field was created. Rather than reviewing every Gainsight feature systematically, we try to audit our processes whenever we need to make revisions, based on a set of QA standards our admin team has agreed on. Whenever we have a planned revision that touches, for example, our Stakeholder Alignment process, we evaluate all of its components, inputs, and outputs via Object Analyzer and Data Management > Dependencies to see when and how the data is moving, and then we clean up or delete fields, rebuild or deactivate rules, adjust schedules, etc. This breaks up the monolithic task of reviewing all of one’s rules, and it ensures that the people working on a process consider it holistically whenever they need to make updates. It also helps us cut down on temporary rule and field creation, and gives us a better chance of noticing systemic problems before they impact our customers.


We do have a retirement process for some of our assets in Gainsight.

For example, with rules: when a new rule replaces an old one, the old version isn’t deleted immediately. Instead, it gets a prefix added to its name that includes the planned deletion date, it’s deactivated, and it’s moved into a “Deactivated” folder.

We then review that folder quarterly and delete items we no longer need. We take similar actions for reports, and we also clean up assets created or cloned for testing or support purposes. You could also build a dashboard on the GS Asset Usage object to get an idea of what is and isn’t in use, and re-evaluate areas that haven’t been touched in a certain amount of time.
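If you wanted to script part of that quarterly sweep, here’s a rough sketch of the idea against a CSV export of the “Deactivated” folder. The file name, the column name, and the “YYYY-MM-DD” name prefix are all assumptions, so adjust to whatever naming convention you actually use:

```python
# Rough sketch (not a Gainsight feature): flag deactivated assets whose
# embedded deletion date has already passed.
# The CSV, the "asset_name" column, and the date prefix format are assumptions.
import pandas as pd

assets = pd.read_csv("deactivated_assets.csv")  # hypothetical export of the folder

# Pull the deletion date out of names like "2025-03-31 - Old Churn Risk Rule".
assets["delete_on"] = pd.to_datetime(
    assets["asset_name"].str.extract(r"^(\d{4}-\d{2}-\d{2})")[0], errors="coerce"
)

overdue = assets[assets["delete_on"] <= pd.Timestamp.today().normalize()]
print(f"{len(overdue)} assets are past their scheduled deletion date:")
print(overdue[["asset_name", "delete_on"]].to_string(index=False))
```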

 

Specific fields and business processes can be a little more complicated. You could report on closure/completion rates for rule-generated CTAs and look for indications they’re no longer fit for purpose (e.g., a high percentage are overdue by a wide margin, playbooks aren’t being completed, etc.).
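If it helps, here’s a rough sketch of scoring that from a CTA export. The file, the column names (source_rule, status, due_date, closed_date), and the thresholds are all hypothetical examples:

```python
# Rough sketch: flag rule-generated CTA types that may no longer be fit for purpose.
# Assumes a hypothetical CTA export with columns: source_rule, status, due_date, closed_date.
import pandas as pd

ctas = pd.read_csv("rule_generated_ctas.csv", parse_dates=["due_date", "closed_date"])

today = pd.Timestamp.today().normalize()
ctas["overdue_days"] = (ctas["closed_date"].fillna(today) - ctas["due_date"]).dt.days

summary = ctas.groupby("source_rule").agg(
    total_ctas=("status", "size"),
    closed_pct=("status", lambda s: (s == "Closed").mean() * 100),
    median_days_overdue=("overdue_days", "median"),
)

# Example thresholds: under half closed, or routinely slipping a month past due.
flagged = summary[(summary["closed_pct"] < 50) | (summary["median_days_overdue"] > 30)]
print(flagged.sort_values("closed_pct"))
```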

For fields, you could go to Data Management and check whether a field has no dependencies, which may indicate it isn’t in use. Fields that are ‘in use’ but no longer have much utility are harder to spot, but if you’re concerned about field bloat you could review custom fields every X months, if you have time for that, and ideally you’ll also have documented why those fields were created in the first place.
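A small sketch of what that periodic review could look like, assuming you keep a field inventory and can assemble the dependency info into CSVs (all file and column names here are made up):

```python
# Rough sketch: surface custom fields with zero recorded dependencies as review
# candidates (not automatic deletions). Both CSVs and their columns are hypothetical
# exports/inventories you'd maintain yourself.
import pandas as pd

fields = pd.read_csv("custom_fields.csv")             # object, field_name, created_date, purpose
dependencies = pd.read_csv("field_dependencies.csv")  # object, field_name, dependent_asset

dep_counts = (
    dependencies.groupby(["object", "field_name"])
    .size()
    .rename("dependency_count")
    .reset_index()
)

review = fields.merge(dep_counts, on=["object", "field_name"], how="left")
review["dependency_count"] = review["dependency_count"].fillna(0).astype(int)

candidates = review[review["dependency_count"] == 0]
print(candidates[["object", "field_name", "created_date", "purpose"]].to_string(index=False))
```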

 

For JO programs, you could look at declining response rates, a lack of meaningful data captured from Surveys, things like that.
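One way to make that concrete, assuming you can export sends and responses per program (the file and column names are hypothetical):

```python
# Rough sketch: track survey response rates by JO program per quarter to spot decline.
# Assumes a hypothetical export with columns: program, send_date, responded (0 or 1).
import pandas as pd

sends = pd.read_csv("jo_program_sends.csv", parse_dates=["send_date"])
sends["quarter"] = sends["send_date"].dt.to_period("Q")

response_rate = (
    sends.groupby(["program", "quarter"])["responded"]
    .mean()
    .mul(100)
    .round(1)
    .unstack("quarter")
)
print(response_rate)  # programs trending downward are candidates for rework or retirement
```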

 

Some of the challenge is resourcing - if you’re building a lot, do you have time to review everything periodically?

Some of the challenge is criteria - what constitutes a ‘stale’ asset?

 

I think it starts with how you build and document in the first place, similar to what @sdoty is saying - if you have a good build and documentation process in place, your clean-up becomes much easier. You’ll know if the new thing you built replaces something else, or if the change you made means something else is now redundant and can be decommissioned.

