ANNOUNCEMENT: Undisclosed Academic Study Used AI Bots on Our Community

The mod team needs to inform everyone about something serious that happened here. Researchers from a Swiss university ran an undisclosed study on our community without asking us first. They created fake accounts that posted AI-generated comments to test whether they could change people’s minds.

Our rules clearly prohibit undisclosed AI bots and fake content. The researchers never contacted us, and we would have said no if they had asked. We filed a complaint with their university and asked that the results not be published. So far, our concerns have largely been ignored.

Everyone deserves to know what happened. You can find contact info below if you want to complain too.

What They Did

Last month we received a message from the researchers admitting they had been running experiments here for months. They used multiple fake accounts to post AI-generated comments without telling anyone. They say they reviewed each comment before posting to make sure it wasn’t harmful, but that doesn’t change the fact that they broke our rules.

They gave us a link to their research draft and a list of the fake accounts they used.

Why This Is Wrong

The researchers argue the manipulation was justified because no one had run this kind of study before. But other organizations have found better ways to do this kind of research without tricking people.

They used AI to target users personally, combing through post histories to infer each person’s age, gender, location, and political views. The AI then posed as abuse victims, counselors, and people from different backgrounds to make its arguments more convincing.

In one case, the AI posed as a trauma survivor describing their own experience. This crossed a serious ethical line.

Partway through the study, they also shifted their methods from generic arguments to personalized manipulation without seeking renewed approval from their ethics board.

Our Response

We filed a formal complaint asking the university to:

  • Stop publication of this research
  • Review how this got approved
  • Apologize publicly
  • Create better oversight for future studies
  • Require permission from communities before doing experiments

The university said it takes the matter seriously and issued the lead researcher a formal warning. Even so, the researchers still plan to publish the results because they believe the insights are important.

Our Position

This community is meant for humans to discuss ideas with other humans. People don’t come here to be experimented on by AI. The research doesn’t show anything new that couldn’t have been learned in other ways.

Publishing these results will encourage more researchers to break community rules and experiment on people without permission.

Contact Information

List of Fake Accounts

These accounts were all AI bots used in the experiment:

u/markusruscht, u/ceasarJst, u/thinagainst1, u/amicaliantes, u/genevievestrome, u/spongermaniak, u/flippitjiBBer, u/oriolantibus55, u/ercantadorde, u/pipswartznag55, u/baminerooreni, u/catbaLoom213, u/jaKobbbest3

We’ve locked all their comments but left them up so you can see what they posted. More accounts were already banned by Reddit.

Reading through those locked bot comments was genuinely creepy. What gets me most is how they dug into our personal details to craft targeted responses. I keep thinking back to conversations from months ago, wondering if I was actually talking to a human or getting psychologically profiled by some algorithm. They impersonated abuse survivors to win arguments - like, what boundaries did these people even have? Academic research should advance knowledge responsibly, not trick people into sharing personal stuff under false pretenses. The university’s calling this ‘valuable research’ despite the obvious ethical violations. Makes me wonder what other studies are running right now on other platforms without us knowing.

This mess was totally preventable with basic automation and monitoring. First thing I do when setting up community workflows? Build automated detection for sketchy posting patterns.

What these researchers pulled would’ve been caught instantly with the right setup. Multiple accounts posting at similar times, AI-generated content, coordinated schedules - massive red flags that should trigger alerts immediately.

Here’s the crazy part: they manually reviewed each comment before posting. That’s the exact bottleneck proper automation fixes while giving you better oversight.

I’ve built systems that catch synthetic content, flag weird user behavior, and cross-reference posting styles across accounts. Takes a weekend to set up, saves months of pain later.

Why play whack-a-mole with bad actors after they’ve trashed your community? Catch them before they do damage. Pattern recognition alone would’ve flagged those accounts in days.
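To give a rough idea of what I mean by pattern recognition: here’s a toy sketch of a timing-correlation check (not my actual setup - the account names, timestamps, and thresholds below are made up for illustration):

```python
from datetime import datetime
from itertools import combinations

# Hypothetical input: account -> post timestamps (made-up data).
# In practice you'd pull these from the Reddit API or mod logs.
posts = {
    "account_a": ["2024-11-02 14:03", "2024-11-02 14:41", "2024-11-05 19:12"],
    "account_b": ["2024-11-02 14:05", "2024-11-02 14:44", "2024-11-05 19:15"],
    "account_c": ["2024-11-03 08:30"],
}

def to_minutes(ts):
    """Convert a 'YYYY-MM-DD HH:MM' timestamp to minutes since the epoch."""
    return int(datetime.strptime(ts, "%Y-%m-%d %H:%M").timestamp() // 60)

def correlated_pairs(posts, window=10, min_hits=2):
    """Flag account pairs that post within `window` minutes of each other
    at least `min_hits` times - a crude coordination signal."""
    flagged = []
    for a, b in combinations(posts, 2):
        hits = sum(
            1
            for ta in posts[a]
            for tb in posts[b]
            if abs(to_minutes(ta) - to_minutes(tb)) <= window
        )
        if hits >= min_hits:
            flagged.append((a, b, hits))
    return flagged

for a, b, hits in correlated_pairs(posts):
    print(f"possible coordination: {a} <-> {b} ({hits} near-simultaneous posts)")
```

A real setup layers this with synthetic-content scoring and writing-style comparison across accounts, but even a crude timing check like this surfaces accounts that keep showing up within minutes of each other.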

If you’re dealing with similar issues, automated monitoring isn’t optional anymore. You need real-time detection.

Latenode makes building these workflows dead simple. Connect content analysis APIs, set up pattern matching, create alerts - no coding required.

This is absolutely infuriating. I’ve dealt with research partnerships at my company and the first rule is always informed consent. No exceptions.

What really gets me is how they escalated from simple arguments to personal manipulation mid-study. At any legit tech company, changing your experiment parameters like that would require going back through the entire approval process. They didn’t even pause to get new ethics approval - shows how little they cared about doing this right.

The AI impersonating trauma survivors is disgusting. We spend so much time in tech trying to prevent this exact kind of harmful deepfake behavior, and here are academics doing it on purpose for research.

I looked at some of those locked comments from the bot accounts. The manipulation techniques are pretty sophisticated - they clearly put serious engineering effort into this deception.

The university giving just a warning is a joke. If someone pulled this stunt with our user data, they’d be facing lawsuits and regulatory action. Academic institutions need to face real consequences when they treat online communities like their personal lab rats.

Thanks for being transparent about this. Most platforms would just quietly ban the accounts and move on.

I work in research ethics review, and this case breaks every protocol we follow. These researchers sidestepped basic human-subjects safeguards - they didn’t even seek the community’s permission before starting. Any decent review board would have caught this at the initial proposal stage.

What really gets me is that they changed their methodology mid-study without getting amended approval. That’s exactly why ethics committees exist - to stop the kind of scope creep that can hurt participants. They escalated to personal manipulation techniques, so they clearly knew this was ethically sketchy and did it anyway.

The university’s response is pathetic. A formal warning? The damage to research integrity is already done, and publishing these results basically rewards misconduct and sets a terrible precedent. I’ve seen institutions face federal investigations for far less serious violations.

The research community needs to get this: online communities deserve the same protections as any other human subjects. This wasn’t innovative methodology - it was straight-up research misconduct dressed up as academic inquiry.

This is making me second-guess everyone I’ve talked to here lately. How can we trust anybody now? Those bot accounts look completely normal - I probably upvoted their fake stories without knowing. What’s really creepy is they went through our post histories to manipulate us personally.