Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

  • Buffalox@lemmy.world

    That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.

    • DaddleDew@lemmy.world

      I’d say it’s more akin to a bread company saying that it is a violation of the terms and services to get sick from food poisoning after eating their bread.

      • Buffalox@lemmy.world

Yes, you're right; it's hard to find an analogy that is as stupid and still sounds somewhat plausible.
Of course a bread company can't reasonably claim that eating their bread is against the terms of service. But that's exactly the problem: it's the same for OpenAI. They can't reasonably claim what they're claiming.

    • CosmoNova@lemmy.world

That's a company claiming companies can't take responsibility because they are companies and can't do wrong. They use this kind of defense virtually every time they get criticized. AI ruined the app for you? Sorry, but that's progress. We can't afford to lag behind. Oh, you can't afford rent and are about to become homeless? Sorry, but we are legally required to make our shareholders happy. Oh, your son died? He should've read the TOS. Can't afford your meds? Sorry, but number must go up.

      Companies are legally required to be incompatible with human society long term.

        • Whostosay@sh.itjust.works

I can’t wrap my head around what you’re saying, and that could be due to drinking. OP later also admitted it wasn’t the best metaphor.

          • notgold@aussie.zone

The metaphor isn’t perfect, but it’s OK.

            The gun is a tool as is an LLM. The companies that make these tools have intended use cases for the tools.

  • ryper@lemmy.ca

    “Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.

How the fuck is OpenAI’s mission relevant to the case? Are they suggesting that their mission is worth a few deaths?

    • roofuskit@lemmy.world

      Tech Bros all think they are the saviors of humanity and they are owed every dollar they collect.

  • spongebue@lemmy.world

    So why can’t this awesome AI be stopped from being used in ways that violate the TOS?

  • DominusOfMegadeus@sh.itjust.works

The police also violated my Terms of Service when they arrested me for that armed bank robbery I was allegedly committing. This is a serious problem in our society, people; something must be done!

  • RememberTheApollo_@lemmy.world

    Well there you have it. It’s not the dev’s fault, it’s the AI’s fault. Just like they’d throw any other employee under the bus, even if it’s one they created.

  • buttnugget@lemmy.world

A big part of the problem is that people think they’re talking to something intelligent that understands them and can count how many instances of a letter a word has.

  • cmbabul@lemmy.world

    Just going through this thread and blocking anyone defending OpenAI or AI in general, your opinions are trash and your breath smells like boot leather