Hey fellow CS Champs, question for ya. We are struggling with proving our adoption rate. We feel every use case is different, so we need to keep a million scores to see what adoption really is, and even then we only have it per customer. Sound familiar? My question to you: how do you do it? Do you have one percentage that you give when you're asked, "What is your adoption rate?" And how do you calculate it? The next step: what is a good percentage? We are struggling with this. It would be absolutely awesome to hear your best practices! Thanks so much in advance! Martine
Hi Martine,


The route that we've taken (and continue to improve on) is to have an analyst do extensive regression testing on how customer behaviors correlate with renewal rate. This gives us an objectively derived list of healthy customer behaviors that the whole management team can get behind. We can then look for those behaviors in customers to give them a health score that roughly summarizes those varied inputs, and also hand the list to our team of Onboarding CSMs and say, "Here's your checklist to move a customer out of the 'Adoption' stage -- the customer must complete at least X of these Y actions."
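A minimal sketch of what that "X out of Y actions" gate could look like in code. The behavior names and the threshold below are hypothetical placeholders, not Seth's actual criteria:

```python
# Gate a customer out of the "Adoption" stage once they have completed at least
# X of the Y behaviors your analysis flagged as healthy. Names and the threshold
# here are made up -- substitute whatever your own regression work surfaces.

HEALTHY_BEHAVIORS = [
    "logged_in_weekly",
    "invited_teammates",
    "created_first_report",
    "connected_integration",
    "attended_training",
]
REQUIRED_COUNT = 3  # the "X" in "X out of Y"


def ready_to_exit_adoption(customer_flags: dict) -> bool:
    """customer_flags maps behavior name -> True/False for one customer."""
    completed = sum(bool(customer_flags.get(b)) for b in HEALTHY_BEHAVIORS)
    return completed >= REQUIRED_COUNT


print(ready_to_exit_adoption({
    "logged_in_weekly": True,
    "invited_teammates": True,
    "created_first_report": False,
    "connected_integration": True,
    "attended_training": False,
}))  # True -- 3 of the 5 behaviors are present
```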





I think the key point is that product usage alone isn't good enough. You need to know what *kind* of usage or which customer characteristics are most impactful, and that's simply not something we could have done effectively without the help of a statistics expert.
Seth's comment is spot on. Every Gainsight client has a unique business model and customer mix, and it comes down to what the drivers are in your business. Doing a careful look back to understand the reasons why customers didn't renew will let you set the appropriate triggers and health scoring to prevent next quarter's churn.
We believe in 2 possible approaches here:





1) Make a simple setup to start as soon as possible, using knowledge you already have about which customer behaviors matter. Keep it simple. It doesn't need to be precise. Over time (once a quarter, maybe), keep tweaking until you reach the desired accuracy, preferably mapping which product behaviors should impact the score the most.
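A minimal sketch of approach (1), assuming made-up metric names and weights; the point is just a hand-weighted score that is easy to tweak each quarter:

```python
# Simple, hand-tuned adoption score: a weighted average of a few usage metrics
# you already believe matter, each normalized to 0..1. Names and weights are
# illustrative placeholders.

WEIGHTS = {
    "weekly_active_ratio": 0.40,  # active users / licensed users
    "core_feature_usage": 0.35,   # share of key features touched
    "data_freshness": 0.25,       # how recently the customer's data was synced
}


def adoption_score(metrics: dict) -> float:
    """Return a 0..100 score from normalized (0..1) metrics; missing metrics count as 0."""
    total = sum(weight * min(max(metrics.get(name, 0.0), 0.0), 1.0)
                for name, weight in WEIGHTS.items())
    return round(100 * total, 1)


print(adoption_score({"weekly_active_ratio": 0.6,
                      "core_feature_usage": 0.5,
                      "data_freshness": 0.9}))  # 64.0
```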





2) Get deep into statistical research on which customer behaviors influence renewals the most. Do regression testing if you have enough historical data. If the historical runs reach a good level of accuracy, re-create your Adoption score using this newly acquired knowledge.
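A minimal sketch of approach (2), assuming a hypothetical historical table with one row per account and a 0/1 renewed outcome (file and column names are made up):

```python
# Fit a simple logistic regression on historical accounts to see which behaviors
# are most strongly associated with renewal.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("historical_accounts.csv")       # hypothetical export
features = ["weekly_active_ratio", "use_cases_adopted", "tickets_per_seat"]

X = StandardScaler().fit_transform(df[features])  # standardize so coefficients are comparable
y = df["renewed"]                                 # 1 = renewed, 0 = churned

model = LogisticRegression().fit(X, y)

# Larger coefficient magnitude = stronger association with renewal (for standardized inputs).
for name, coef in sorted(zip(features, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:22s} {coef:+.2f}")
```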





I would say (1) is best to start with, and when you have fewer resources, while (2) works best when you already have knowledge on the topic and more resources.





Here we started with (1) and after one year we are now moving to (2).





Cheers,


Bruno
We use two key product usage metrics to measure adoption:


1) user retention %: active users / invited users


2) use case adoption: # of use cases (in our product) that the customer is using





We have risk scorecards for those two metrics (a rough code sketch of both follows the list):


1) user retention %: 55% or more is green; 40-55% is yellow; under 40% is red


2) use case adoption: 4 or more is green; 3 is yellow; 2 or less is red
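A small sketch of those two scorecards as code, using the thresholds above. Purely illustrative; your own cut-offs should come out of your own data:

```python
# Map the two adoption metrics to the R/Y/G scorecard colors described above.

def retention_color(active_users: int, invited_users: int) -> str:
    pct = active_users / invited_users if invited_users else 0.0
    if pct >= 0.55:
        return "green"
    if pct >= 0.40:
        return "yellow"
    return "red"


def use_case_color(use_cases: int) -> str:
    if use_cases >= 4:
        return "green"
    if use_cases == 3:
        return "yellow"
    return "red"


print(retention_color(active_users=45, invited_users=100))  # yellow (45%)
print(use_case_color(5))                                     # green
```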





We have a data analyst in Business Operations who studies historical adoption/upsell/renewal data. He is able to prove with the data that customers with at least 55% user retention and at least 4 use cases are far more likely to expand (upsell) and renew. That is how we know the thresholds for R/Y/G, and also how we know what is "good" vs "bad."
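One possible way to run that kind of look-back yourself, assuming a hypothetical account-history table with a 0/1 "renewed" column and retention stored as a fraction (this is not Ryan's analyst's actual code):

```python
# Compare renewal rates for accounts that clear both thresholds vs. those that don't.
import pandas as pd

df = pd.read_csv("account_history.csv")  # hypothetical: one row per account at renewal time

healthy = (df["user_retention_pct"] >= 0.55) & (df["use_cases_adopted"] >= 4)
print("Renewal rate, meets both thresholds: ", df.loc[healthy, "renewed"].mean())
print("Renewal rate, misses either threshold:", df.loc[~healthy, "renewed"].mean())
```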





We have recently launched an Implementation Services team and are working through identifying TTFV (time to first value) metrics -- verifying, again with adoption/upsell/renewal data, which metrics/milestones are the most valuable, and also when (in the customer lifecycle) those milestones need to be achieved in order to increase the likelihood of upsell and renewal.
Hi Ryan,





Your approach is really interesting. Could you share what methodology or algorithm your data analyst followed to analyze the historical adoption? We're working on this right now, and anything you can share would help us a bit :)





Best,


Bruno
FWIW, here's the data-scientist-speak that came from our analyst, Dan McDade, about the methodology he used to identify the customer characteristics that most impacted churn. Don't ask me what it means, but maybe you'd have someone in your organization to whom it makes perfect sense :-)





This analysis focused on identifying the relationship between retention and the following attributes of the customer lifecycle:


1. Contract/Commercials Features


2. Product Usage - Total (measured in several different ways)


3. Product Usage - Depth (measured in several different ways)


4. Customer Support Volume, Ticket Time to Close, Ticket Time to Response


5. Usage of Specific Product Features





The dataset was aggregated per attribute grouping, and then we trained two predictive models: random forests and a stochastic gradient descent model using an elastic net (15% L1 and 85% L2) regularization penalty. These two models were selected to analyze the predictive power and identify the most important attributes. The important features were identified based on their explained variance and on the magnitude of the penalized coefficient values. The predictive quality of the models was optimized for accuracy = (TP + TN) / population and analyzed by looking at recall = TP / (TP + FN), precision = TP / (TP + FP), F1 score = 2 * precision * recall / (precision + recall), and ROC AUC values based on the TP, TN, FP, FN confusion matrix. The models with higher accuracy and the features with the highest importance were then selected separately and combined into a single feature set to identify the overall important features and their predictive power.
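For anyone who wants to try something similar, here is a rough scikit-learn sketch of that pipeline: a random forest plus an elastic-net-penalized SGD classifier (15% L1 / 85% L2), scored on accuracy, recall, precision, F1 and ROC AUC. The file and column names are hypothetical, and this is not Dan's actual code:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("customer_lifecycle_features.csv")  # hypothetical: one row per customer, numeric features
y = df.pop("retained")                               # 1 = renewed, 0 = churned
X = StandardScaler().fit_transform(df)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    # loss="log_loss" on scikit-learn >= 1.1 (it was loss="log" on older versions)
    "sgd_elasticnet": SGDClassifier(loss="log_loss", penalty="elasticnet",
                                    l1_ratio=0.15, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    prob = model.predict_proba(X_te)[:, 1]
    print(name,
          "acc=%.2f" % accuracy_score(y_te, pred),
          "rec=%.2f" % recall_score(y_te, pred),
          "prec=%.2f" % precision_score(y_te, pred),
          "f1=%.2f" % f1_score(y_te, pred),
          "auc=%.2f" % roc_auc_score(y_te, prob))

# Feature importance: explained variance from the forest, coefficient magnitude
# from the penalized linear model.
rf_top = sorted(zip(df.columns, models["random_forest"].feature_importances_),
                key=lambda t: -t[1])[:5]
sgd_top = sorted(zip(df.columns, abs(models["sgd_elasticnet"].coef_[0])),
                 key=lambda t: -t[1])[:5]
print("Forest top features:", rf_top)
print("SGD top features:   ", sgd_top)
```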





A linear model (stochastic gradient descent with elastic net penalization) will be the primary output for the final analysis to quantify the impact specific actions have on retention. The reason for this is that the model is clearly understandable, so the CSMs have a simple and handy toolkit to identify potential steps to increase retention.
Hi Bruno,


I work with Ryan and he asked me to get into the details. We would probably also be characterized as transitioning from (1) to (2) in your post above, so no silver bullet, but I can tell you what we did and where we've struggled. This is specifically about product usage (the metrics Ryan mentioned above plus a couple of hypotheses that we didn't keep), but it could be expanded to anything numerical.





Our process is to divide our customers into cohorts of upsell, renew, customer (haven't had an upsell or renewal, but still a customer), and churned and then analyze the behavior of each group. The tricky part is defining the timing for behavior that mattered (since looking at usage one day before a customer doesn't renew is probably not that telling). To do this, we choose a "relevant date" for each cohort (e.g. the day the upsell opportunity closed), set that particular date for each customer as day 0, and look back 180 days for each customer at 30-day intervals (this period and frequency worked for us, YMMV).
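A possible pandas sketch of that windowing step, assuming hypothetical daily-usage and cohort tables rather than our actual schema:

```python
# Anchor each account on its cohort's "relevant date" (day 0), then aggregate a
# usage metric over the 180 days before that date in 30-day buckets.
import pandas as pd

usage = pd.read_csv("daily_usage.csv", parse_dates=["date"])          # account_id, date, active_users
cohorts = pd.read_csv("cohorts.csv", parse_dates=["relevant_date"])   # account_id, cohort, relevant_date

df = usage.merge(cohorts, on="account_id")
df["days_before"] = (df["relevant_date"] - df["date"]).dt.days

# Keep the 180 days leading up to day 0 and bucket into 30-day intervals.
df = df[(df["days_before"] >= 0) & (df["days_before"] < 180)].copy()
df["bucket"] = (df["days_before"] // 30) * 30   # 0, 30, 60, ..., 150 days before day 0

summary = (df.groupby(["cohort", "bucket"])["active_users"]
             .mean()
             .unstack("bucket"))
print(summary)
```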





Looking at the box-plots for each cohort/metric/time bucket confirmed there was some consistency and that the approach was at least directionally correct. If you're comfortable with all of the above and confident that your cohorts are cohesive enough to proceed (there is probably a formal measure of this; we were in "eyeball it" and "good enough" mode), then the answers should jump out. Certain measures will show a clear distinction between "good" and "bad" customers. Others will be pretty muddy (technical term).
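Continuing the sketch above, the eyeballing step could look something like this; it assumes the `df` frame built in the previous snippet, with seaborn used purely as a convenience:

```python
# One box-plot per cohort per 30-day bucket: if the "good" and "bad" cohorts
# separate visually, the metric is a candidate; if the boxes overlap heavily,
# it's one of the muddy ones.
import matplotlib.pyplot as plt
import seaborn as sns

sns.boxplot(data=df, x="bucket", y="active_users", hue="cohort")
plt.xlabel("days before relevant date (30-day bucket)")
plt.ylabel("active users")
plt.title("Cohort separation check")
plt.show()
```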





Kick the muddy ones out and set thresholds for the others that put green firmly in the "good" group as far back in the timeframe as possible (some metrics have had an upward trajectory... and we want whole numbers... so this is a mix of art and science).





That's about it: lather/rinse/repeat on a regular basis with new hypotheses and to tweak thresholds. I'm often tempted to layer in some more robust statistics or machine learning, and I have tried both supervised and unsupervised approaches for my own edification, but the balance of a little robustness with a lot of clarity for the team has come out on top thus far.
