- cross-posted to:
- reddit@lemmy.world
- technology@lemmy.world
2 steps away from thought crimes
What a joke
The only subreddit I’ve been visiting is LeopardsAteMyFace and I got a warning. How is it inciting violence if it’s ALREADY happened?
Because an AI indiscriminately scanned your comment for keywords and issued a warning; that's what they aren't telling people. The same thing happened to me on another sub, except the mods were also involved, so it was a ban. Reddit's rules are vague AF.
I mean when everyone else is jettisoning moderation, reddit is cracking down on bots and trolls? I don’t hate it.
Sure, but isn’t Reddit the one who gets to choose what counts as bannable?
Their AI detects a ban on one of your other accounts and decides to ban all your "connected accts," even ones you haven't used for years.
The thing is, in these recent ban waves they've been going after the low-hanging fruit: small accounts and small advertisers, not the problematic ones like the state-sponsored political troll accounts (from both RU and the US), at least not in large numbers, even though we know those represent a large share of traffic on the site. Many articles posted on Reddit have been called out as bot posts too.
This was the end for me after 12 years. Insanity.
Honestly I wouldn't be surprised if this started happening on Lemmy too. It's a lot easier to control what kind of content is on a platform when you do something like this.
Now, I don’t particularly think this is a good idea, but I can see the benefit of this as well. People have the freedom to upvote whatever they choose, even if I think they are dumb for doing it, and they shouldn’t have to worry about anyone other than law enforcement or lawyers (in extreme edge cases) using that information against them.
One thing I like about Lemmy is that you can still upvote "removed by moderator" comments, and I always do because it's funny.
and you can view the comment in the modlog.