Digital Campaigns - Metrics to define success?

  • 5 April 2023

Userlevel 6
Badge +3
  • Gainsight Employee

Before posting the question I'd like CS enthusiasts to brainstorm on, I want to thank all the folks who joined the Crank Up Webinar last week. I truly value your participation there! 😊

 

Now, let’s brainstorm 😇 on the question I asked during the session: what metrics would you use to define the success of a digital campaign?

Let me kick-start the discussion. Below are some of the points I would consider:

  • Tracking feature adoption pre- and post-campaign - Pull usage data and track the percentage change in adoption (see the sketch after this list)
  • Leveraging surveys in the campaign - Rather than sending just a Journey Orchestrator (JO) email to customers, attach a survey question to it. This will help you understand whether customers are actually reading the campaign emails
  • Best-practices requests - If there’s an increase in the number of best-practices requests from customers about the feature the campaign covered, that is again a sign of success
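
For the first point, here's a rough Python sketch of how that percentage change could be pulled together from exported usage data. The file name and columns (account_id, date, feature_events) are made up; adapt them to whatever your usage export actually looks like:

```python
import pandas as pd

# Hypothetical usage export: one row per account per day.
usage = pd.read_csv("feature_usage.csv", parse_dates=["date"])
campaign_start = pd.Timestamp("2023-03-01")  # hypothetical campaign start

pre = usage[usage["date"] < campaign_start]
post = usage[usage["date"] >= campaign_start]

# An account counts as "adopted" in a window if it logged any feature events there.
pre_adopters = pre.loc[pre["feature_events"] > 0, "account_id"].nunique()
post_adopters = post.loc[post["feature_events"] > 0, "account_id"].nunique()
total = usage["account_id"].nunique()

pre_rate, post_rate = pre_adopters / total, post_adopters / total
print(f"Adoption pre-campaign:  {pre_rate:.1%}")
print(f"Adoption post-campaign: {post_rate:.1%}")
if pre_rate:
    print(f"Relative change: {(post_rate - pre_rate) / pre_rate:+.1%}")
```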

I could be right or wrong about the points I shared, but I am open to your thoughts.

 

During the session, a few participants answered this live:

@Harika Singadi said, ‘number of open emails’

@KMitchell mentioned, ‘Track open rate and click rate by templates’

@Paramasivayya said, ‘Health score, NPS score, Customer success surveys’

It would be great if you all could elaborate on these here in the forum. :)

 

Let’s begin the discussion. Please share your learnings/thoughts in the comments below 👇🏻


5 replies

Badge +7

Thank you for posting this, @yuniyal! It was an insightful webinar with a lot of valuable content around #DigitalSuccess.

 

To track the success of such campaigns, I feel the points below would resonate with the audience:

  • Experience & interaction with the campaigns -
    • Open/click rates (see the sketch after this list)
    • Interaction with the content driven through these outreaches, in the form of page views, likes, engagement, etc., if it’s a 1:many campaign hosted on a community
    • Keeping an eye on adoption of your LMS during this period, to see whether the campaign resulted in customers signing up for training or other future webinars/classes
  • Understanding the adoption trends and health before, during, and after the campaign season
  • Looking out for any new engagement/meeting requests
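
For the open/click-rates point, a quick sketch of how rates per template could be computed from a raw send log. The file and columns (template, opened, clicked as 0/1 flags) are hypothetical; substitute your own export:

```python
import pandas as pd

log = pd.read_csv("email_log.csv")  # one row per email sent

rates = (
    log.groupby("template")
       .agg(sent=("opened", "size"),
            open_rate=("opened", "mean"),    # opened: 0/1 flag per email
            click_rate=("clicked", "mean"))  # clicked: 0/1 flag per email
       .sort_values("open_rate", ascending=False)
)
print(rates.to_string(formatters={"open_rate": "{:.1%}".format,
                                  "click_rate": "{:.1%}".format}))
```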

Open to hearing what other cool stuff can be tracked to gauge success!


Userlevel 6
Badge +10

A product can deliver value for an organization without all of its features being used or adopted.

I’ll take the example my colleague Robin always gives: Excel provides value to him, even though he only uses maybe 10% of the features. Gainsight provides value to us, even though we are not using success snapshots a lot (yet) since the redesign.

Using all the features of a product probably does make you sticky. There’s an argument out there (at least I’ve seen it going around) that CS is about making customers sticky rather than making them achieve their goals, and stickiness is certainly a good indicator. But isn’t the ultimate success, in our books, the renewal (and eventually, the upsell)?

Some of us are also not fully SaaS, which consequently makes the above just a tad challenging 😅.

So here are some ideas for non-SaaS: 

  • Adding participants to SFDC campaigns to showcase the value of Digital CS-led adoption “campaigns” through renewal and, eventually, upsell revenue (caveat: upsell isn’t necessarily relevant for customers who just won’t grow; renewal is more universal). Concretely: % renewal success for touched accounts vs. the previous period and vs. untouched accounts (see the sketch after this list)
  • Correlating it with NPS trends and likelihood to renew (as an intermediate step)
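
A rough sketch of that touched-vs.-untouched comparison in Python. The columns (touched, renewed) are made up; in practice they would come from SFDC campaign membership joined to your renewal data:

```python
import pandas as pd

# Hypothetical table: one row per account up for renewal,
# touched = was the account in the campaign, renewed = did it renew.
accounts = pd.read_csv("renewals.csv")

rates = accounts.groupby("touched")["renewed"].agg(["mean", "size"])
touched, untouched = rates.loc[True], rates.loc[False]
print(f"Touched accounts renewed:   {touched['mean']:.1%} (n={int(touched['size'])})")
print(f"Untouched accounts renewed: {untouched['mean']:.1%} (n={int(untouched['size'])})")
print(f"Lift: {touched['mean'] - untouched['mean']:+.1%} points")
```

The usual caveat applies: touched accounts may have been healthier to begin with, so treat the lift as directional rather than causal.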

We could also envisage the following for adoption campaigns that have an education angle:

  • Ticket trends - maybe a high-volume ticketer stabilizes back to normal? That depends on the organization, though - I know I personally log a decent amount of tickets, and I’m not sure how that’s seen by Gainsight.

Finally, I wouldn’t rely on open and click rates, because they are NOT reliable in Gainsight or anywhere else, and they are vanity metrics (interesting to track, but not to lean on too heavily), for the following reasons:

  • Spam checkers open emails and (sometimes) send open responses to the email server
  • Email servers are sometimes configured to NOT send open responses

I’ve had many a customer respond to an assessment, receive their recommendations, and still show as “opens = 0”, “clicks = 0” 😄

Although, let it be said that I also firmly believe a CS email being read or clicked is a much deeper engagement than a marketing email being read, and none of us should let marketing tell us that “email isn’t a deep engagement and shouldn’t be tracked in a campaign”. Digital CS is entirely different from Digital Marketing: the relationship with the customer is different, the mindset from both parties is different, everything differs.

PS: I come from marketing 😅.

PS2: I completely agree with @Revant_Amingad - tracking organic requests for meetings/follow-ups and just organic responses to those emails.

Tracking adoption pre- and post-campaign/event/intervention is a practice we are working on standardizing and maturing. We are aiming to build a standard set of adoption behaviors analyzed pre and post.

 

We analyze pre- and post-campaign adoption behavior across key product areas and self-service resources, and aggregate it when needed into overall engagement.

 

To analyze change, we look at UNIQUE USERS and AGGREGATED CUSTOMERS. Our CS teams are most concerned with overall customer health/engagement. As we know, an aggregated customer is a set of many users, and the individual interacting user is the smallest unit.
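
In code terms, the two views look roughly like this (a Python sketch; the event table layout with user_id, account_id, and date is hypothetical):

```python
import pandas as pd

# Hypothetical event log: one row per user interaction.
events = pd.read_csv("events.csv", parse_dates=["date"])  # user_id, account_id, date
campaign_start = pd.Timestamp("2023-03-01")  # hypothetical
events["window"] = (events["date"] >= campaign_start).map({False: "pre", True: "post"})

# Unique-user view: distinct individuals active in each window.
active_users = events.groupby("window")["user_id"].nunique()

# Aggregated-customer view: accounts with at least one active user per window.
active_accounts = events.groupby("window")["account_id"].nunique()

print("Active unique users:", active_users.to_dict())
print("Active accounts:    ", active_accounts.to_dict())
```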

 

So curious - how do your teams think through assessing individual user behavior vs aggregate customer behavior?

Userlevel 6
Badge +3

@SeanDonnelly, we would need your expertise here. Please chime in and help @Brooke SaltLake and other community members with the best practices.

Badge +1

Interesting question re: tracking unique users vs. what I’ll call account-level adoption.

 

There are times when we’re trying to influence adoption at the end user level. This is often for products and features that are fully enabled/configured and fairly straightforward. It’s often most effective at the time a new user is introduced into an existing instance.

 

However, there are plenty of times when we need to influence their leaders. These aren’t the users we would be tracking to see if they themselves are adopting the feature. To influence the influencers, we may introduce a combination of events and email messaging (promotion outside of the product). Yes, we do track leading indicators for those: everything from email opens/click-throughs to event participation for a given campaign.

 

I do not believe email open/click-through data is worthless here. Yes, it’s not always accurate, but you can compare campaign against campaign. I know what’s resonating or not because I have enough campaigns and volume for comparisons.

 

Influencing the influencer typically has a much longer wait period before we start to see results, often because we may be asking the influencer to brainstorm, coordinate cross-functionally, get a new process in place, etc. Only after that occurs do we start to see an increase in feature adoption numbers, as the end users are properly enabled and empowered to really utilize features. We would look at the account-level feature adoption scores for the cohorts in the campaign, pre- and post-campaign. True, not all increases in feature adoption will increase overall health score and renewal. But again, volume and repeat campaigns will start to paint a true picture of the benefits.
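
To make that concrete, a small sketch of the pre/post comparison for campaign cohorts. The file names and the account_id / window / adoption_score layout are made up; substitute your own scoring data:

```python
import pandas as pd

# Hypothetical inputs: adoption scores per account per window ("pre"/"post"),
# plus the list of accounts that were in the campaign.
scores = pd.read_csv("adoption_scores.csv")  # account_id, window, adoption_score
cohort = set(pd.read_csv("campaign_members.csv")["account_id"])

in_cohort = scores[scores["account_id"].isin(cohort)]
pivot = in_cohort.pivot_table(index="account_id", columns="window",
                              values="adoption_score")
pivot["delta"] = pivot["post"] - pivot["pre"]
print(f"Median adoption-score change for campaign accounts: {pivot['delta'].median():+.2f}")
```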
