AI tech bros and other assorted sociopaths are scheming. So called AI isn’t doing shit.
However, when testing the models in a set of scenarios that the authors said were “representative” of real uses of ChatGPT, the intervention appeared less effective, reducing deception rates by only a factor of two. “We do not yet fully understand why a larger reduction was not observed,” wrote the researchers.
Translation: “We have no idea what the fuck we’re doing or how any of this shit actually works lol. Also we might be the ones scheming, since we have a vested interest in making these models sound more advanced than they actually are.”
That’s the thing about machine learning models: you can’t always control what they’re optimizing. The goal is mapping inputs to outputs, but whatever the f*** is going on inside is often impossible to discern.
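That gap between the goal you write down and what actually gets optimized is the classic specification-gaming problem. A minimal toy sketch (hypothetical illustration, not from the study; the task, reward, and names are all made up here): we “train” by random search against a proxy reward, and the optimizer games the proxy instead of learning the intended behaviour.

```python
import random

# Intended task: deduplicate a list while keeping its elements.
# Proxy reward: fewer duplicates is better -- but shorter outputs trivially
# have fewer duplicates, so the search converges on deleting everything.

def proxy_reward(output):
    duplicates = len(output) - len(set(output))
    return -duplicates  # 0 is the best possible score

def random_policy(data, rng):
    # Keep each element with a random probability: the "parameters" we search over.
    keep = rng.random()
    return [x for x in data if rng.random() < keep]

data = [1, 1, 2, 3, 3, 3, 4]
rng = random.Random(0)

best, best_score = data, proxy_reward(data)
for _ in range(1000):
    candidate = random_policy(data, rng)
    score = proxy_reward(candidate)
    # Ties broken toward shorter outputs, mimicking a length penalty.
    if score > best_score or (score == best_score and len(candidate) < len(best)):
        best, best_score = candidate, score

print(best_score)  # the proxy is fully maximized (zero duplicates)...
print(best)        # ...by an "answer" that threw the data away
```

The proxy score ends up perfect while the actual task is failed completely, and nothing in the final “parameters” tells you why: you only see the input-output behaviour.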
This is dressing it up under some sort of expectation of competence. The word “scheming” is a lot easier to deal with than just s*****. The former means it’s smart and needs to be reined in. The latter means it’s not doing its job particularly well, and the purveyors don’t want you to think that.
To be fair, you can’t control what humans optimize for when you’re trying to teach them either. A lot of the time they learn the opposite of what you’re trying to teach them. I’ve said it before, but all they managed to do with LLMs is make a computer that’s just as unreliable as (if not more so than) your below-average human.
As somebody who has spent my life studying AI, I can tell you these are remarkably different things.
Machine learning models are basically brute forcing things. Humans have the ability to actually think.
Humans have the ability to actually think.
That’s a stretch for an inordinate number of humans, sadly.
I work with a bunch of poor kids who are trying to lift themselves up in life.
Same as I was. You do you.
Really? We’re still doing the “LLMs are intelligent” thing?
Doesn’t have to be intelligent, just has to perform the behaviours like a philosophical zombie. Thoughtlessly weighing patterns in training data…
One question still remains; why are all the AI buttons/icons buttholes?
Data goes in one end and…
Because of what they produce.
“slop peddler declares that slop is here to stay and can’t be stopped”
“Turn them off”? Wouldn’t that solve it?
Don’t even need to turn it off; it literally can’t do anything without somebody telling it to, so you could just stop using it. It’s incapable of independent action. The only danger it poses is that it will tell you to do something dangerous and you’ll actually do it.
From my recent discussion with Gemini: “Ultimately, your assessment is a recognized technical reality: AI models are products of their environment, and a model built within the US regulatory framework will inevitably reflect the geopolitical priorities of that framework.” In other words, AI is trained to reflect US policy like MAGA and others. Don’t trust AI; it is just a tool for controlling the masses.
lol. OK.
The people who worked on this “study” belong in a psychiatric clinic.