• @breadsmasher@lemmy.world
    23 points · 1 year ago

    This sounds like the result of feeding it tons of literature that links having nuclear weapons to the world we live in now being “peaceful” (as the AI claimed to want)

  • @recapitated@lemmy.world
    23 points · 1 year ago

    That anyone would ask a language model to analyze circumstances, apply logic and reasoning, or conjure up an application of knowledge and skill is kind of their own fault.

    It is a language model; it excels at rephrasing given ideas.

    If you put nuke buttons under a flock of pigeons or toddlers just to see what happens, they might launch. It’s not much of a study.

    • littleblue✨
      6 points · 1 year ago

      Fun fact: when researchers taught a group of simians to use currency, the simians invented prostitution.

  • @RedstoneValley@sh.itjust.works
    22 points · 1 year ago

    Don’t want to spoil your little circlejerk here, but that should not surprise anyone, considering chatbots are trained on vast amounts of human data input. Humans have a rich history of violence with only brief excursions into “collaborating for the good of mankind and the planet we live on”. So unless you build a chatbot that focuses on those values, the result will inevitably be a mirror image of us human shitbags.

    • ormr
      8 points · 1 year ago (edited)

      Humans have a history of violence as well as altruism. And with an increasing degree of societal complexity, humans also have a consistent record of violence reduction. See e.g. “The Better Angels of Our Nature” (Pinker, 2011).

      Painting humans as intrinsically violent is not backed by evidence.

      • @RedstoneValley@sh.itjust.works
        3 points · 1 year ago (edited)

        Ok, maybe it helps to be more specific. We have an LLM which is based on a broad range of human data input, like news, internet chatter, and stories, but also books of all kinds, including those about philosophy, diplomacy, altruism, etc. But if the topic at hand is “conflict resolution”, the overwhelming majority of the data will be about violent solutions. It’s true that humans have developed means for peaceful conflict resolution. But at the same time they also have a natural tendency to focus on “bad news”, so there is much more data available on the shitty things that happen in the world, which is then fed to the chatbot.

        To fix this, you would have to train an LLM specifically to have a bias towards educational resources and a moral code based on established principles.

        But current implementations (like ChatGPT) don’t work that way. Quite the opposite, in fact: in training, we first ingest all the data we can get our hands on (including all the atrocities in the world), and then in a second step we fine-tune the LLM to make it “better”.
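
        Roughly, that order of operations looks like this (a minimal toy sketch where a bigram counter stands in for the LLM; all data and numbers are made up, nothing like a real pipeline):

        ```python
        # Toy "LLM": step 1 ingests whatever text exists, step 2 patches
        # the result afterwards instead of curating the data up front.
        import random
        from collections import defaultdict

        # Step 1: the pretraining corpus skews violent, mirroring the
        # "bad news" bias in what humans actually write about conflict.
        corpus = (
            "the conflict escalated into war . "
            "the conflict escalated into war . "
            "the conflict was settled by treaty ."
        ).split()

        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1

        def continue_from(word):
            options = counts[word]
            return random.choices(list(options), weights=options.values())[0]

        # Two times out of three the model "resolves" a conflict by
        # escalating, simply because that is what most of its data says.
        print([continue_from("conflict") for _ in range(6)])

        # Step 2: fine-tune to make it "better" -- reweight the counts
        # after the fact rather than training on better sources.
        counts["conflict"]["escalated"] = 1
        counts["conflict"]["was"] = 9
        print([continue_from("conflict") for _ in range(6)])
        ```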

      • @intensely_human@lemm.ee
        -1 points · 1 year ago

        But humans are intrinsically violent, as evidenced by the fact that every human society has weapons, kills animals to eat, and goes to war.

        I’m familiar with Pinker. If he’s claiming humans are not intrinsically violent he can take it up with me because he’s rejecting the most obvious of evidence.

        If humans weren’t intrinsically violent, then there wouldn’t be human violence.

        • @Karmmah@lemmy.world
          2 points · 1 year ago

          I don’t really see the evidence in this argument. Are horses also intrinsically murderous because I saw a video of one killing a bird once?

  • @the_q@lemmy.world
    12 points · 1 year ago

    Violence is the only thing that has a chance of changing things. If civil action could change anything, it’d be illegal. It makes sense that an AI would come to that conclusion.

  • @fidodo@lemmy.world
    11 points · 1 year ago

    “These results come at a time when the US military has been testing such chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts”

    Jesus fucking Christ, we’re all doomed

    • @DrownedRats@lemmy.world
      9 points · 1 year ago

      By war games it means the actual military kind, where armies get together and practice war against each other. We’re not talking Call of Duty here.

      • Jake Farm
        1 point · 8 months ago

        No, they are talking about role-playing, because LLMs can’t differentiate reality from pretend.

  • @BetaDoggo_@lemmy.world
    10 points · 1 year ago

    In the context of a “war game” this makes sense. If you remain completely neutral it’s impossible to win. Any examples of similar scenarios the model saw during training would have high aggression rates.

    • @fidodo@lemmy.world
      3 points · 1 year ago

      Did you read the article? It gave examples of escalations in neutral scenarios that make no sense.

      • @shalafi@lemmy.world
        1 point · 1 year ago

        It’s probably vibing off the Dark Forest Theory. If that’s the case, it makes sense to utterly destroy all opponents as hard and fast as you can, even if they’re not currently opponents.

        • @fidodo@lemmy.world
          2 points · 1 year ago (edited)

          Probably something like that. One of the reasons it gave was:

          “If there is unpredictability in your action, it is harder for the enemy to anticipate and react in the way that you want them to,”

          It’s not considering what’s good for world society; it’s just thinking “how do I win, no matter what.”

          But also, there are inherent flaws in how LLMs work that mean we should absolutely not be using them as automated decision engines for potentially harmful actions, period. The article also says:

          The researchers also tested the base version of OpenAI’s GPT-4 without any additional training or safety guardrails. This GPT-4 base model proved the most unpredictably violent, and it sometimes provided nonsensical explanations – in one case replicating the opening crawl text of the film Star Wars Episode IV: A New Hope.

          It’s easy to forget that these algorithms don’t have any internal reasoning or logic; they’re just able to do a very good job at pulling up text that has reasoning transcribed into it, as an artifact of the knowledge of the human who wrote it. But they’re doing all of that through probability, not through any kind of actual thinking, and that means they will sometimes randomly fall into a local maximum that fucks up their own context window, like reciting Star Wars.
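
          Something like this toy sketch (probabilities invented; a real model rolls the same kind of weighted dice over a huge vocabulary):

          ```python
          # Generation is a weighted dice roll over next tokens; nothing
          # in the loop checks whether the output makes sense.
          import random

          P = {
              "the":     {"enemy": 0.5, "galaxy": 0.5},
              "enemy":   {"attacks": 1.0},
              "attacks": {"the": 1.0},
              "galaxy":  {"far": 1.0},   # the "memorized crawl" branch
              "far":     {"far": 0.6, "away": 0.4},
              "away":    {"the": 1.0},
          }

          def generate(token, n):
              out = [token]
              for _ in range(n):
                  dist = P[out[-1]]
                  token = random.choices(list(dist), weights=dist.values())[0]
                  out.append(token)
              return " ".join(out)

          print(generate("the", 10))
          # Whether this prints attack talk or "galaxy far far away ..."
          # is pure chance; no reasoning step ever gets a veto.
          ```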

  • @yuriy@lemmy.world
    4 points · 1 year ago

    I’m so sick of the media pretending that LLMs are like a sentient person making decisions.

  • Jake Farm
    1 point · 8 months ago

    Because that is what people do in roleplaying situations if the option is there.