
Navigating ChatGPT responses in your communities

  • February 3, 2025
  • 8 replies
  • 178 views

Hello community!

We have recently seen an uptick in members using ChatGPT responses to answer other members’ questions. Often multiple “members” respond with almost identical answers, which makes it harder for our Champions’ answers to these questions to get visibility. How are you all managing the use of AI-generated responses? Have you updated your T&C? Sent warnings or suspended members? I would love to hear how other communities are navigating this.

8 replies

Stephen.Trumble
  • Author
  • Contributor ⭐️
  • 2 replies
  • February 3, 2025

@Julian Can you help get some eyes and conversation on this?


DannyPancratz
  • VIP ⭐️⭐️⭐️⭐️⭐️
  • 942 replies
  • February 3, 2025

I faced this with one or two users a few months back and just talked to them directly about it; that stopped the behavior. But I know I still have a lot of work to do on this.

On my list: 

  1. Update T&C
  2. Incentivize curation / flagging of AI content (points and badges) and empower our super users
  3. Edit any flagged posts with a callout noting they have been flagged as suspected AI content (so readers should double-check, etc.)
  4. Address issues immediately with the user to curb the behavior

But we also won’t outright ban the behavior, because we’re developing our own chatbot trained on our documentation. (It seems hypocritical to outlaw AI use when we also provide it; plus, someone who is skilled at prompting can be a helpful contributor.)

My hypothesis is that when we integrate our own model as the first reply (the beta phase of our chatbot), it will reduce the behavior because it takes away the easy answer members can get via AI. The AI bot/agent will get all the low-hanging fruit, leaving the more nuanced and complex discussions for humans to chat about.

Finally, when we get to that point, I will encourage users to be transparent about their use of AI. Already, a percentage of our questions get the community equivalent of a “let me google that for you” reply: someone sharing a previous post with the answer. I want the AI version of that to be “I asked the {company provided bot} and it said this ____. From my understanding, that should work, but you may want to consider ____”
 

Also this recent post may be helpful for analogous ways to monitor for AI content: 

 


Chris Hackett
  • Helper ⭐️⭐️
  • 76 replies
  • February 3, 2025

Hi @Stephen.Trumble - Yes, we are currently dealing with this as well. We have started by allowing members to use AI to form responses. So far the use has been limited, and experts have pointed out when responses are not helpful. I haven’t had to ban any users for it yet. Below are the terms we have updated for AI use. I’m curious as well what others are doing.

Using AI to assist with creating community replies

While using AI to assist with community questions is currently permitted, members are required to note it at the beginning of their reply, with something like “This reply has been created with the help of AI.” As with any reply, members need to make sure it is applicable to the question.


Kenneth R
  • Gainsight Community Manager
  • 424 replies
  • February 4, 2025

This is a great and timely question.  We had a similar conversation a little while back (though a little more in relation to the risk of spam):

 


revote
  • VIP ⭐️⭐️⭐️⭐️⭐️
  • 783 replies
  • February 4, 2025
Kenneth R wrote:

This is a great and timely question.  We had a similar conversation a little while back (though a little more in relation to the risk of spam):

 

This is something I started to think about when I saw this topic. Thanks for sharing @Kenneth R.

 

Stephen.Trumble wrote:

We have recently seen an uptick in members using ChatGPT responses to answer other members’ questions.

Are you sure they are not bots?


Chris Hackett
  • Helper ⭐️⭐️
  • 76 replies
  • February 5, 2025

@revote - At least the ones in our community are not bots. Our external registration process probably takes care of most of those. These are real people, as I have had conversations with most of them. Unless they are really good bots wasting time on our little community 🤣


revote
  • VIP ⭐️⭐️⭐️⭐️⭐️
  • 783 replies
  • February 5, 2025
Chris Hackett wrote:

@revote - At least the ones in our community are not bots. Our external registration process probably takes care of most of those. These are real people, as I have had conversations with most of them. Unless they are really good bots wasting time on our little community 🤣

Are you a bot? No.

😀


Blastoise186
  • Helper ⭐️⭐️⭐️
  • 535 replies
  • February 5, 2025

Hiya!

Just a heads up, my answers come from more of a Super User/Forum Volunteer perspective as that’s the role I play. :)

For me, I tend to find AI-generated posts more annoying than helpful, and my preference is to just get the Forum Moderators to throw banhammers at the users who use such tools, unless the AI content actually adds value. I don’t recall seeing anything AI-specific in our T&C, but we tend to treat it as spam, which is covered, since almost all the AI stuff I’ve seen to date is basically spam or some other form of junk anyway.

However

Given that Energy-related stuff is annoyingly complicated and sometimes I have to dig pretty deep into the puzzle solving, I do sometimes use AI to help me with the research I do while writing posts or comments. If I need the help, I’ll normally just feed the question (or some other prompt) into Gemini, let it crunch through it, and use whatever it comes back with as a starting point for further research, but then do everything myself from there. I only copy/paste output directly back into the Forum very sparingly, for cases where the answer it wrote is basically better than I could do myself (and I’ve double-checked it!), and I will always then tag it as having had AI assistance. This stuff only makes up a tiny proportion of my content, though, as I find that half the time the AI messes up and goes off track anyway.

IIRC the only thing I’ve posted that has been majority AI-written by Gemini so far was the instructions for a specific gas meter test. I couldn’t find the existing guide on the Forum and was feeling a bit too lazy to type the whole thing out myself for the 1,000th time, so I just had Gemini do it (there’s only a small number of ways to write out the instructions for that test anyway!) and made sure the output matched what I know of that test. I also made sure to add a note to say that I’d used an AI tool to help write it out.

The biggest AI usage I tend to see, though, is the most annoying stuff: spam. We see a fair number of cases where some random user suddenly posts a weird comment that gravedigs an old thread with stuff that isn’t necessarily spam but definitely doesn’t help much. More and more, it triggers my AI detection tools whenever I do my checks...

For me, I’d say allowing your trusted users/super users/forum volunteers to use AI to help with their research on tricky questions is totally fine, because that’s no worse than using Google to do research like we always did before all these fancy tools existed. I just prefer it when folks write out their own responses without getting a hivemind-powered chatbot to do it for them. But I’d still suggest asking them to add a note saying they used AI to help with the research, and to set the expectation that readers should double-check the answer first.

As for everyone else? That’s trickier. I’m more likely to be OK with a particular piece of content if the member who posted it has clearly taken the time to double-check everything, added their own thoughts, and the post as a whole adds useful value to the Forum. If, however, it’s just an AI spambot jamming up the Forum, then you can be sure that SkyNet Blastoise here will whack it with the BanHammer of Doom 3000 before you can say AI. OK, maybe not that fast, but you get the idea - I’ve got a good eye for that stuff!

