When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.
Recent research goes a long way toward building a psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. The research also offers experimental evidence on when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives affect that decision.

Yada yada, here’s the open-access paper.
(I usually provide these links neutrally, but I’ll make a point here: in a public health community, it may be worth requiring a link to the paper on top of the news article covering it – especially if it’s open-access. Ars here is mercifully concerned with methodology; many outlets don’t give a shit.)
The conclusion is as follows (quoted for expedience; I encourage reading the rest):
I think this paper overly exoticizes AI. People have always externalized deliberation to others, be they parents, friends, bosses, partners, gods, spirits, journalists, advertisers, superstitions, tarot cards, or rubber ducks.
Perhaps it is worth calling all of these “system 3”, but I see no reason to separate LLMs from them. Our judgment has never been entirely our own, and even if there is nobody else to defer to, we can defer to “what they would do”.
We accept that these external sources are flawed and can give us bad advice that we follow, but we keep listening as long as we think the good advice, or other factors, makes up for the bad.
Yuck. This petty observation is unworthy of being called System 3. Stealing valor from Kahneman and Tversky. Keep their terminology out of your mouths, trend chasers.