Could it be that the post was answered by a human who doesn’t speak Finnish and used ChatGPT to translate/reword their answer? I have certainly done that before.
Personally I think the only content that should be removed is content that is actual spam or something nefarious.
I understand if someone asks for help and uses AI to translate the question. But this post isn't a question; someone just replied to a very old topic and gave instructions on how to solve the topic's problem.
Generally speaking, I agree that only actual spam or nefarious content should be removed.
But search engines recognize AI-generated content and don't rank it highly. They may even hide AI content entirely? One individual post isn't a problem, but what about the future - what if most posts are made using AI, and AI only?
I think we need a strategy for content created with AI in our community.
Hey @revote - that’s a super interesting question - it’s one we ponder ourselves! I’ve been fighting spammers in communities for many years, and have seen all kinds of approaches. Some spam is obvious and some is subtle - like the ones where a relatively believable account is created that first creates a few genuine-feeling posts, and then suddenly it sneaks in a spammy link into the next post. Or they start sending spammy direct messages to other members.
But then there are times where we simply aren’t sure. We face this in our community here occasionally. An account that feels just a little bit strange (e.g. something like the avatar / username / email address that seem just slightly off). Posted content that isn’t obviously spam, but doesn’t feel entirely right either. Nowadays that often means that it feels like it’s from GPT, of course.
Our approach is usually to give members the benefit of the doubt, and we won’t take action against the account until we’re confident that it’s the right thing to do. Before doing that, steps we might take include checking the IP or seeing if we can identify the person as a customer or on LinkedIn.
If it turns out to be a real person with no nefarious intention, but who is posting a lot of content that is very obviously AI-generated, then to your point, that raises an interesting question about what our policy or strategy should be. At this point I'd probably just open up a DM and have a conversation with them. Build a relationship and try to understand what their situation is and what they're trying to achieve. And at some point, I think we may indeed need to have a formal policy and something in our terms.
Thanks @Kenneth R for your thoughts and hints.
At this point we can investigate the member and build a relationship, because the number of these kinds of posts is small. We'll see when this becomes an actual problem and we have to make decisions "in a hurry". As you said, we have to be sure.
The situation would be much easier if the post were spam, a scam, nefarious, or violated our house rules or local laws. But what if the post is fine in every way and was just created purely with GPT?
We found this pretty neat idea:
Yes that is a smart idea! @Ditte is smart. :)
We get A LOT of this type of content in our community and honestly - I just kick it out and usually ban the user. The times we let it go, the second or third post is almost always spam. Our community is a bit faster moving with less eyes on it at times, so we’re stricter than not.
I will say that we’ve unfortunately wildly messed up our spam detector because of AI spam and I’ve got to find out if there is some way to reset it so we can re-teach it.
What would really help is that idea above and also being able to sort of quarantine or watch list these suspicious users like I shared in this idea here:
Oh no, sorry to hear that this is already an actual and constant problem for you.
I totally understand that you kick them out immediately.
I wonder, are there any cases where you have kicked out a "normal" user and the user asked you what's going on?
Yeah, that quarantine/watch list idea got my vote.
@revote So far, not yet. If I’ve got any doubts, I’ll also check their email address in our system to see if they’re an active customer, check their Ehawk score, and then make an educated decision. If anyone ever does come to us and banning them was a mistake, we’ll unban and apologize profusely - but I suspect they’ll understand!
Can I ask what an Ehawk score is?
Of course! Here’s a helpful page on Ehawk: https://www.ehawk.net/features/risk-score.php
The tl;dr is that it’s a score that basically measures any known risks associated with a user including the domain, known malicious behavior, problem IP addresses, geographical location, etc.
Ehawk rolls all that information up into one score, with a negative score indicating a problem (like a stunning -41 I came across the other day). It's not a perfect system at all, especially since you could have a perfectly well-meaning user working from a questionable location with a problem IP, but it's a tool meant to help make educated decisions.
Our trust and safety team proactively pulls this score for every single new customer that signs up and therefore I’ve got access to a database through an internally built tool. I actually just spun up an extra conversation with them looking into running all of our community users through this tool proactively so we can take action before people are a problem.
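To make that concrete, here's a rough sketch of how a score like that could feed a triage decision. The `Member` fields, the thresholds, and the helper names are hypothetical stand-ins (not Ehawk's actual API); the point is just the "flag for a human to review rather than auto-ban" idea:

```python
# Minimal triage sketch. Scores follow the "negative = known risk signals"
# convention described above; the thresholds and fields are illustrative
# assumptions, not Ehawk's real API or values.

from dataclasses import dataclass


@dataclass
class Member:
    username: str
    email: str
    risk_score: int           # negative values indicate known risk signals
    is_active_customer: bool  # e.g. email matched against your CRM


def triage(member: Member) -> str:
    """Suggest an action for a moderator; nothing here auto-bans."""
    if member.risk_score >= 0:
        return "ok"                  # no known risk signals
    if member.is_active_customer:
        return "watchlist"           # benefit of the doubt, but keep an eye on them
    if member.risk_score <= -30:
        return "review_for_ban"      # strongly negative: escalate to a human
    return "watchlist"


if __name__ == "__main__":
    for m in [
        Member("genuine_fan", "fan@example.com", 5, True),
        Member("feels_like_gpt", "oddname123@example.com", -12, False),
        Member("likely_bad_actor", "spam@example.com", -41, False),
    ]:
        print(f"{m.username}: {triage(m)}")
```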
Really nice tool, @jillian.bejtlich! Yeah, checking the IP is problematic, not least because IPs are usually dynamic, but as you said - the tool helps make educated decisions.
Thanks for sharing the tip!