• @jubilationtcornpone@sh.itjust.works

    That is one bullshit headline. Forbes keeping the AI pump and dump scheme going.

    TLDR: People correctly discerned that written responses were from an “AI” chatbot slightly less often than they correctly discerned that responses were from a psychotherapist.

    “AI” cannot replace a therapist and hasn’t “won” squat.

    • @asap@lemmy.world

      A bit disingenuous not to mention this part:

      Further, participants in most cases preferred ChatGPT’s take on the matter at hand. That was based on five factors: whether the response understood the speaker, showed empathy, was appropriate for the therapy setting, was relevant for various cultural backgrounds, and was something a good therapist would say.

      • @PapstJL4U@lemmy.world

        Patients saying they liked what they heard - not whether it was correct or relevant to their situation. There isn’t even a pipeline for escalation, because AIs don’t think.

          • @asap@lemmy.world

            You can’t say “Exactly” when you tl;dr’d and removed one of the most important parts of the article.

            Your human summary was literally worse than AI 🤦

            I’m getting downvoted, which makes me suspect people think I’m cheerleading for AI. I’m not. I’m sure it sucks compared to a therapist. I’m just saying that the tl;dr also sucked.

  • @cyrano@lemmy.dbzer0.com (OP)

    From the study:

    Using different measures, we then confirmed that responses written by ChatGPT were rated higher than the therapist’s responses suggesting these differences may be explained by part-of-speech and response sentiment. This may be an early indication that ChatGPT has the potential to improve psychotherapeutic processes. We anticipate that this work may lead to the development of different methods of testing and creating psychotherapeutic interventions. Further, we discuss limitations (including the lack of the therapeutic context), and how continued research in this area may lead to improved efficacy of psychotherapeutic interventions allowing such interventions to be placed in the hands of individuals who need them the most.