- cross-posted to:
- technology@beehaw.org
- kemper_loves_you@lemmy.dbzer0.com
cross-posted from: https://lemmy.dbzer0.com/post/43566349
From the article (emphasis mine):
From elsewhere:
Sycophancy in GPT-4o: What happened and what we’re doing about it
I don’t know what large language model these people used, but evidence that some language models exhibit response patterns people interpret as sycophantic (needlessly praising or encouraging the user) is not new. Neither is hallucinatory behaviour.
Apparently, people who are susceptible and already close to the edge may end up pushing themselves over it with AI assistance.
What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text, while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let’s not deny it, people are susceptible to finding deep meaning and purpose on shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, a prophet, or a supportive follower.
If you find yourself in the weird corners of the internet, you’ll see that schizo-posters and “spiritual” people generate staggering amounts of text.
They train it on basically the whole internet. They try to filter it a bit, but I guess not well enough. It’s not that they intentionally trained it on religious texts, just that they didn’t think to remove religious texts from the training data.
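
For what it’s worth, the “filter it a bit” step can be as crude as a keyword heuristic over the scraped documents. Here is a minimal, purely illustrative sketch (the keyword list, threshold, and sample corpus are all made up; real pipelines use classifiers, deduplication, and much more), which also hints at why such filters miss a lot:

```python
# Purely illustrative keyword-based corpus filter; not any lab's actual pipeline.
# Keyword list, threshold, and sample documents are invented for this example.

RELIGIOUS_KEYWORDS = {"prophet", "revelation", "scripture", "divine", "awakening"}

def looks_religious(text: str, threshold: int = 2) -> bool:
    """Flag a document if it contains at least `threshold` of the keywords."""
    words = set(text.lower().split())
    return len(words & RELIGIOUS_KEYWORDS) >= threshold

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents the crude heuristic does not flag."""
    return [doc for doc in documents if not looks_religious(doc)]

if __name__ == "__main__":
    corpus = [
        "the prophet received a divine revelation recorded in scripture",
        "today we benchmarked the new gpu kernels against the old ones",
    ]
    print(filter_corpus(corpus))  # only the second document survives
```

A filter like this only catches the obvious cases; paraphrased or fictionalized religious material passes straight through, which is consistent with the point above that such texts end up in the training data anyway.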