Scoring our "Support" Category

  • 25 March 2015
  • 8 replies

We use "Support" as a scoring field and are still thinking through how to score our accounts.  Things we think about are: is it good or bad if an account has no cases? Are a lot of cases always a bad thing? Without perceiving customer tone, how do you automate this scoring? Has anyone else looked into this before?  If so, any good recommendations?


Hey Zak, that's a struggle for us as well.  Another thought for you: are you doing any kind of post-support survey?  Like you said, support tickets are not bad in themselves; they can actually show interaction with or involvement in the product.  But the real question is whether customers are getting what they need from support.  We are using a post-support survey that we link into Gainsight, which helps us gather further data on satisfaction with the support received, which we can then work into a score for the account.
Hey Chris, thanks for sharing that - awesome approach to that scoring.  For the customer scoring, I assume you get a numeric rating and written feedback?
Exactly... the key is making it super short and easy.  We really have four questions, each asking for a rating of Exceptional, Good, Average, Poor, or Very Poor.  The questions cover overall satisfaction, speed of response, knowledge of the support rep, and friendliness of support.  Of course there is an open text area as well for comments.
That totally makes sense.  Do you tie scoring to that?

I.e., Very Poor = 1 = 25%, Poor = 2 = 50%, etc.

The written feedback sounds like quality feedback to add to notes or override the automated scoring (assuming that's what you use).
We have not yet tied scoring to it, but that is the plan. This way we have a health measure for support that will by default be Green, but will then be adjusted based on the scores received (which will fold into the overall health score).
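The rating-to-score mapping discussed above could be sketched roughly as follows. Since the survey has five rating levels but the 25%/50% steps mentioned earlier only cover four, the percentages below are illustrative assumptions, not the posters' actual values:

```python
# Hypothetical mapping of survey rating labels to a 0-100 score.
# The five-level scale is from the thread; the percentages are assumed.
RATING_SCORES = {
    "Very Poor": 20,
    "Poor": 40,
    "Average": 60,
    "Good": 80,
    "Exceptional": 100,
}

def survey_score(ratings):
    """Average the numeric scores across one survey's four rating questions."""
    return sum(RATING_SCORES[r] for r in ratings) / len(ratings)

# One survey response: overall, speed, knowledge, friendliness.
print(survey_score(["Good", "Exceptional", "Good", "Average"]))  # 80.0
```

An account-level measure could then average these per-survey scores, defaulting to a Green value when no surveys have come in yet.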
Badge +2
Hi Zak, 

A few thoughts on this as well. A high number of cases/tickets isn't necessarily a bad thing; however, you might want to pay attention to ticket age. If a problem ticket is open for longer than the expected time frame, a warning signal might need to be sent off. Also, a sharp increase or decrease in the number of tickets might be something to pay attention to as well.
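Both signals Emily describes - ticket age and sudden volume changes - are easy to automate. This is a minimal sketch; the 14-day SLA and the 2x spike ratio are assumptions to tune for your own support process:

```python
from datetime import date

MAX_OPEN_DAYS = 14   # assumed SLA: expected time frame for resolving a ticket
SPIKE_RATIO = 2.0    # assumed: flag if volume doubles or halves month over month

def aged_tickets(open_tickets, today):
    """Return tickets open longer than the expected time frame."""
    return [t for t in open_tickets
            if (today - t["opened"]).days > MAX_OPEN_DAYS]

def volume_anomaly(this_month, last_month):
    """Flag a sharp increase or decrease in ticket counts."""
    if last_month == 0:
        return this_month > 0
    ratio = this_month / last_month
    return ratio >= SPIKE_RATIO or ratio <= 1 / SPIKE_RATIO

today = date(2015, 3, 25)
tickets = [{"id": 1, "opened": date(2015, 3, 1)},
           {"id": 2, "opened": date(2015, 3, 20)}]
print([t["id"] for t in aged_tickets(tickets, today)])  # [1]
print(volume_anomaly(12, 5))                            # True
```

Either flag could then lower the support health measure or trigger a call-to-action for the CSM.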
Hey Emily,

Good call on that; we have discussed this as well.  When Chris mentioned they use a survey system, I checked and realized we do as well.  I'm thinking a combination of case age and survey (numeric) feedback could create the automatic scoring, and then based on the written feedback you could adjust it manually if needed.

Thanks for the help!
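The combination described above - an automatic score from case age and survey numbers, with a manual adjustment after reading written feedback - might look something like this. The 60/40 weights and the override mechanism are assumptions for illustration, not a Gainsight default:

```python
# Hypothetical blend of survey health and case-age health into one support score.
def support_score(avg_survey_pct, pct_cases_within_sla, override=None):
    """Weighted blend of two 0-100 inputs; a CSM override replaces it entirely."""
    if override is not None:   # manual adjustment after reading written feedback
        return override
    return 0.6 * avg_survey_pct + 0.4 * pct_cases_within_sla

print(support_score(80, 50))               # 68.0 (automatic)
print(support_score(80, 50, override=40))  # 40   (manual adjustment wins)
```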
At present, we are using a support case count over the past two months.   We score differently based on that total: we give better scores for case engagement/creation, but once the case activity reaches, say, 6 for a two-month period, we start reducing the score or place a lower (negative) support score.

We are also looking to have an average case CSAT score per account and tie that into the support measure.
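The case-count approach above can be sketched in a few lines. The threshold of 6 cases per two months comes from the reply; the specific score values are illustrative assumptions:

```python
CASE_THRESHOLD = 6  # cases per two-month window, per the approach above

def case_count_score(cases_last_two_months):
    """Score case engagement: some cases are good, too many drag the score down."""
    if cases_last_two_months == 0:
        return 50   # assumed neutral: no cases gives no signal either way
    if cases_last_two_months < CASE_THRESHOLD:
        return 75   # assumed: healthy engagement/creation
    return 25       # assumed: heavy case load, lower (negative) support score

print(case_count_score(0))  # 50
print(case_count_score(3))  # 75
print(case_count_score(8))  # 25
```

Averaging case CSAT per account, as mentioned, could then be blended with this count-based score to form the final support measure.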