When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. The research also offers experimental evidence on when and why people are willing to outsource their critical thinking to AI, and on how factors like time pressure and external incentives affect that decision.

  • TheTechnician27@lemmy.world · 27 days ago

    Yada yada here’s the open-access paper.

    (I usually provide these links neutrally, but I’ll make a point here: in a public health community, it may be worth requiring linking to a paper on top of the news article covering it – especially if it’s open-access. Ars here is mercifully concerned with methodology; many outlets don’t give a shit.)

    Conclusion is as follows (for expedience; I encourage reading other parts):

    As AI becomes ubiquitous in society, understanding how it reshapes human thought is essential. Tri-System Theory [author’s note: introduced in this paper; tenuous to call it a “theory” on that basis] offers a new framework for this cognitive frontier. By introducing System 3 (Artificial) as a distinct and external reasoning process, we move beyond the classical architecture of dual-process theories and chart a new decision-making paradigm: one where intuition, deliberation, and artificial cognition coexist, compete, or converge. We show that people not only use System 3 to assist with reasoning, but often surrender to its outputs whether correct or flawed. This cognitive surrender illustrates the value and integration of System 3, but also highlights the vulnerability of System 3 usage. Similar to how System 1-driven heuristics lead to systematic biases, System 3 has differential cognitive shortcomings that will challenge decision-makers and society at large.

    Tri-System Theory is not a warning about AI’s dangers but a recognition of System 3’s psychological presence. We do not merely use AI; we think with it. [author’s note] In doing so, we must ask new questions: What happens when our judgments are shaped by minds not our own? What becomes of intuition and effort when a generative, artificial partner stands ready to answer? How do we preserve agency, reflection, and autonomy in a world where users engage in cognitive surrender? We offer Tri-System Theory as a conceptual foundation for understanding these challenges. It is a theory for an age of human-AI algorithmic cognition, and for the decision-makers, researchers, and designers shaping that future.

    • Tiresia@slrpnk.net · 27 days ago

      I think this paper is overly exoticizing AI. People have always been externalizing deliberation to others, be they parents, friends, bosses, partners, gods, spirits, journalists, advertisers, superstitions, tarot cards, or rubber ducks.

      Perhaps it is worth calling all of these “system 3”, but I see no reason to separate LLMs from them. Our judgment has never been our own entirely, and even if there is nobody else to defer to we can defer to “what they would do”.

      We accept that these external sources are flawed and can give us bad advice that we follow, but we keep listening as long as we think that is made up for by good advice or other factors.

    • mfed1122@discuss.tchncs.de · 26 days ago

      Yuck. This petty observation is unworthy of being called System 3. Stealing valor from Kahneman and Tversky. Keep their terminology out of your mouths, trend chasers.

    • Mothra@mander.xyz · 26 days ago

      Maybe… But I guess so does branding of many sorts. People rarely question the efficiency and/or safety (or the moral integrity in the manufacturing process) of a lot of products. Foods, cosmetics, and medicines would be the first categories that spring to mind which are regularly abused and misused by the population at large.

      So yes, my point being: perhaps religion has been doing this for centuries, but it’s not like there weren’t any other cases.

  • NigelFrobisher@aussie.zone · 26 days ago

    I’m seeing this, even in intelligent people. They expect they can just keep prompting and reach a 100% correct answer that needs no human verification. Looks like an earlier phase of AI Psychosis to me.

  • Berengaria_of_Navarre@lemmy.world · 27 days ago

    The whole point of AI is to train people out of critical thinking. It started with shitting all over / underfunding the arts, then turning schools into employee training camps, and now, to remove any last residue of free thought, AI to fill the gaps.

  • okwhateverdude@lemmy.world · 27 days ago

    I dunno, I find this entirely unsurprising. And I bet this also correlates strongly with political identity: authoritarians love gullible idiots who vote for them. Humanity is fucking stupid in aggregate.

  • BeMoreCareful@lemmy.world · 27 days ago

    Oh, so we’ve amalgamated all the Facebook conspiracy theories with 4chan conspiracy theories, along with whatever % garbage political messaging, everyone’s major religious texts, and basically the sum of all art, knowledge, and advertising that was available on the Internet at the time.

    If I were really honest, I’d say that last one is the one that really bothers me. The vast majority of our modern media is dumb ads. Really, since the Victorian era. My unscientific guess is that the bulk of modern media is designed to wheedle past your logic to make your emotions want to buy various petroleum products.

    And out of all that mess we’re expecting what?

  • floofloof@lemmy.ca · 24 days ago

    In general, “fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation,” they write.

    People have always conflated confidence with ability and knowledge. That’s why so many positions of power are occupied by confident bullshitters. It seems like that tendency transfers over to people’s interactions with LLMs.

    It would be interesting to experiment with an LLM trained to sound less confident and more tentative or self-deprecatory. Maybe the results would be different.

  • Matriks404@lemmy.world · 26 days ago

    So like with any other tech or tool, there are two groups of people… A tale as old as the world.

  • Paragone@lemmy.world · 27 days ago

    This makes it clearer that critical thinking can’t be a mere add-on; it has to be the bedrock of people’s capability, ideally for 4/5ths of the population as a minimum standard, XOR social hijacking by ideologies & AI is certain.

    Unless someone’s identity includes critical thinking, they’re prone to cognitive abdication, whether to ideology or to an LLM; both are the same fundamental abdication.

    I didn’t know that LLMs were eliciting the same abdication that ideologies have been eliciting, but now that this paper presents some evidence, it looks clear.

    ( “religions” are another hijacker of minds… all religions which displace critical thinking do the same thing.

    For anybody who claims that religion always displaces critical thinking: you’ve obviously no experience with Vajrayana-style ruthlessly-correct reasoning.

    I say Vajrayana-style, but it could well be pervasive throughout the different south Asian branches of AwakeSoulism/Buddhism: I don’t know.

    Western philosophy is mushy-as-hell, by Vajrayana’s standards for objectivity & correctness )

    _ /\ _