Hey everyone, I wanted to start a conversation about something that’s been on my mind lately. I think we should consider implementing a new community guideline that prohibits content created by artificial intelligence tools.
I’ve noticed more and more posts that seem to be generated by AI rather than written by actual community members. This feels like it goes against the spirit of what our forum is supposed to be about - real people sharing genuine experiences and knowledge.
What do you all think about this? Should we have a rule that requires all content to be originally written by human users? I’m curious to hear different perspectives on this topic and whether others have noticed the same trend I have.
Been here three years and I think a complete ban goes too far. The real problem isn’t AI content - it’s people not disclosing when they use it and pretending it’s their personal experience. I’ve seen helpful AI-assisted posts that were upfront about it, especially for technical stuff or research summaries. But when someone uses AI to give relationship or medical advice while acting like it’s from their own life? That’s the issue. Instead of banning everything, let’s require disclosure. Make users clearly state when they used AI tools. We keep authentic human discussion while allowing AI as a research tool - as long as it’s properly labeled.
We’re attacking the wrong problem. I’ve watched teams at work ban tools instead of fixing broken processes.
AI posts aren’t the issue - low-quality content and engagement farming are. I’ve seen channels flooded with generic responses that add nothing. Some are AI-generated, some are just lazy humans copy-pasting.
Focus on contribution quality instead. When someone shares a debugging solution that saves me 3 hours, I don’t care if they used AI to format it. But generic “have you tried restarting” replies to complex problems? Useless, whether a human or an AI wrote them.
Skip the detection game. Raise the participation bar - require detailed examples, encourage follow-ups, reward real discussion. Quality contributors stay, content farmers leave for easier targets.
I’ve moderated forums with 50k+ engineers. Communities that thrive care about meaningful contributions, not policing tools.
Honestly don’t see the big deal here. If the content’s useful and follows forum rules, who cares if it’s AI or human? We’ve got upvote/downvote to filter out bad posts. Seems like extra work for mods when they’re already swamped.
This reminds me of Wikipedia’s early days - everyone freaked out that an openly editable encyclopedia could never be reliable, but solid sourcing guidelines fixed that. I’ve moderated forums for five years, and blanket AI bans are a nightmare to enforce. How do you prove something’s AI-generated vs. just badly written? We’d waste more time playing detective than actually moderating. The real problem isn’t occasional AI help - it’s spam flooding. Posting limits or karma requirements would handle the volume without trying to catch every AI post.