Thanks for sharing your insights and lessons learned along the way.
I ran an “annual” survey for a month at the end of 2022, but missed it last year due to being on leave.
The survey consisted of about 10 CSAT-style rating scales: validating the mission/vision of the community, evaluating respondents’ overall experience, and rating their satisfaction with individual community features.
If I recall correctly, I used a Google Form and just added it as a month-long event on the calendar, featured the event on the home page, and sent an email or two. I also incentivized participation by offering points for completing it.
It was helpful for getting data I could use to 1.) show the value/success of the community to stakeholders, and 2.) combat straw man arguments or irrational worries (e.g. “the answers on the community are not good” was disproven by a very high CSAT score for quality and timeliness).
This type of feedback is something I need to do more of, so I’m looking forward to learning from other replies in this thread.
Thanks @DannyPancratz, that’s a very good hint. I can imagine that at the same time we could extend the survey to collect data about users and build some kind of user profiles, especially for users who just lurk in the background. It is important to know your audience.
Do you remember whether you got “enough” responses, and were you happy with the turnout?
We got responses from 11% of the users who were active during the 30 days we ran the survey, and a fairly representative sample of the types of users (customers vs. partners, superusers vs. less active, new users vs. longtime users). Superusers were over-indexed a bit, but that’s to be expected.
I was very happy with the sample size.
Oh wow, that sounds really good!
Surveying is an interesting topic! :)
A bit like you say, @revote - I think it’s really about finding a balance between getting a decent rate of engagement (and valuable insight) and minimising annoyance. In general, for what we might call ‘experience’ surveys (that aren’t focused on a particular page but on the overall experience and value of the community), I tend to follow these principles:
- A pop-up survey that is minimally intrusive. I’ve found that pop-up surveys are needed to get a decent amount of engagement, and it helps massively if the pop-up is elegant and doesn’t get in the way. I like a small, minimally annoying pop-up on the bottom-right of the screen. It shouldn’t stop someone from doing what they’re doing at that moment, but should be noticeable. And should be easy to minimise and come back to.
- A maximum of 5 questions. 3 or less is even better. For these kinds of surveys there are the well-known best practice questions to gauge value (‘did you find your answer’ etc...) and I usually like to have one true experience metric. I prefer a CSAT-style question over NPS, as NPS is a tricky metric if you’re trying to gauge the community experience. I’ve dabbled in CES, but I think that’s only really useful if used across the organisation. Actually, that goes for all of these metrics - ideally we’re using a metric that is used across the organisation so it resonates as a data point. An open text feedback question is always good too.
- Run the survey for a defined period. I don’t feel the need to run a survey like this all the time. A few weeks, a couple times a year, is enough data for these kinds of questions for me. This also minimises the annoyance for regular members (settings to ‘not show for 6 months’ aren’t always reliable).
You might remember that we had a survey like this running here a few months ago. We’ll do another one soon. We would have done it already, I think, but we’re exploring new tooling options.
One other approach I like is hyper-focused micro-surveys that are placed on a single page to get a very specific insight. Those can be incredibly useful when making UX decisions.
Thanks @Kenneth R for your thoughts.
Yeah, we also like and use the CSAT question. It is the same across our platforms, so it is easy to compare.
As for the maximum of 5 questions: over the years we used a survey with up to 10 or so questions. First there was the CSAT question, and then we asked: “We have a few more questions, would you like to answer them as well?” If no, the survey ended; if yes, we displayed the rest of the questions.
With those questions we gathered data for community resolution rate / contact deflection.
It worked pretty nicely, because the user could choose to answer only the CSAT question or the rest of the questions as well.
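To illustrate the idea, here’s a minimal sketch of how answers from a branched survey like that could be rolled up into a resolution rate / deflection estimate. The field names and the deflection definition are just assumptions for the example, not our actual setup:

```python
# Minimal sketch: estimating resolution rate and contact deflection from
# branched survey answers. Field names and formulas are illustrative only.
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    csat: int                          # 1-5 CSAT rating (always asked)
    answered_follow_up: bool           # user opted into the extra questions
    issue_resolved: bool | None        # "Did the community resolve your issue?"
    would_have_contacted: bool | None  # "Would you have contacted support otherwise?"

def summarise(responses: list[SurveyResponse]) -> dict[str, float]:
    follow_ups = [r for r in responses if r.answered_follow_up]
    resolved = [r for r in follow_ups if r.issue_resolved]
    deflected = [r for r in resolved if r.would_have_contacted]
    return {
        "avg_csat": sum(r.csat for r in responses) / len(responses),
        "resolution_rate": len(resolved) / len(follow_ups),
        # Deflection: resolved cases that would otherwise have become a support contact.
        "deflection_rate": len(deflected) / len(follow_ups),
    }

# Example with a couple of made-up responses:
print(summarise([
    SurveyResponse(5, True, True, True),
    SurveyResponse(4, True, True, False),
    SurveyResponse(3, False, None, None),
]))
```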
We haven’t really seen a major difference in the answers, but at least we’re not making it sound like we always assume the user has a problem; they could just be looking for experiences.
This is an excellent point. You have to think not only about what to ask but also how to ask it. Small nuances make a big difference.
In 2023 I conducted a larger user survey which had 30-something questions, including versions of both “Did you get your matter solved” and “Did you find what you were looking for”. In my thesis I made some key findings.
With these findings and the experiences @DannyPancratz had with the annual survey, it seems that an “external survey” would be beneficial from time to time.
We ran a user survey for a certain community project back in the day. Thank you @Suvi Lehtovaara, and thank you also @DannyPancratz & @Kenneth R for pointing this out - I have to add it to my ToDo list again. And as said, I should repeat it annually or maybe twice per year.
With that user survey we can build, for example, user profiles or user personas, which help us plan things like new content.
We’re not currently surveying beyond the usual deflection pop up, however I want to insert more superduper easy surveys where they make sense. For instance in the best answer email I want to have a quick thumbs up / thumbs down to the “did this actually solve your issue?” question.
At the same time I’m also thinking about how I can survey each topic based on the (brilliant) intent matrix our good @Kenneth R came up with years ago. In Control, we (our moderators) label each of our topics with one of five “intents”:
- Problem / troubleshooting
- Advice / education / how to
- Interest / pre-sales
- Feedback / ideas
- Account / orders
This is all part of the tagging and bagging we do on all the content, so I’m sure there’s a smart way to get some great insights based on those labels, and some willing community members :D
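For example, even something as simple as averaging a per-topic survey score by intent label could already be telling. Here’s a rough sketch of that idea (the vote data and fields below are hypothetical, not anything we’ve actually built yet):

```python
# Sketch: averaging a per-topic survey score by intent label.
# The labels match our five intents; the vote data is hypothetical.
from collections import defaultdict

INTENTS = [
    "Problem / troubleshooting",
    "Advice / education / how to",
    "Interest / pre-sales",
    "Feedback / ideas",
    "Account / orders",
]

# Each record: (intent label of the topic, thumbs-up=1 / thumbs-down=0 from the survey)
votes = [
    ("Problem / troubleshooting", 1),
    ("Problem / troubleshooting", 0),
    ("Advice / education / how to", 1),
]

totals: dict[str, list[int]] = defaultdict(list)
for intent, vote in votes:
    totals[intent].append(vote)

for intent in INTENTS:
    scores = totals.get(intent, [])
    if scores:
        print(f"{intent}: {sum(scores) / len(scores):.0%} positive ({len(scores)} votes)")
    else:
        print(f"{intent}: no survey responses yet")
```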
The one thing I really want to avoid triggering is survey fatigue. I personally hate being bombarded with a ton of things I have to think about, then take a decision on, then click a button somewhere, when I just want to find an answer, information, or read some insightful words from a community leader. It all starts with the cookie blurb…..
> We’re not currently surveying beyond the usual deflection pop up, however I want to insert more superduper easy surveys where they make sense. For instance in the best answer email I want to have a quick thumbs up / thumbs down to the “did this actually solve your issue?” question.
Really nice idea! Thanks.
> In Control, we (our moderators) label each of our topics with one of five “intents”:
> - Problem / troubleshooting
> - Advice / education / how to
> - Interest / pre-sales
> - Feedback / ideas
> - Account / orders
>
> This is all part of the tagging and bagging we do on all the content, so I’m sure there’s a smart way to get some great insights based on those labels, and some willing community members :D
Do you mean that you read the labeled topics regularly, or do you have other uses for this labeling? Do you build some statistics or similar?
> The one thing I really want to avoid triggering is survey fatigue. I personally hate being bombarded with a ton of things I have to think about, then take a decision on, then click a button somewhere, when I just want to find an answer, information, or read some insightful words from a community leader. It all starts with the cookie blurb…
Yeah, I totally agree. And this is the reason I adjust the trigger time, to avoid fatigue as much as possible.
Cookie consent, chat bot pop-up, newsletter promotion pop-up, user experience survey pop-up, and so on :D