• @dxdydz@slrpnk.net
    305 days ago

    LLMs are trained to do one thing: produce statistically likely sequences of tokens given a context. This won’t do much to poison the well, because we already have models that can clean this kind of thing up.
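
    To make the “statistically likely sequences of tokens” point concrete, here’s a toy sketch of that objective at a tiny scale: a bigram model that samples the next word from counts over a small corpus. This is only an illustration of the training objective, not how a real transformer LLM is built; the corpus and function names are made up for the example.

    ```python
    import random
    from collections import Counter, defaultdict

    # Tiny "training corpus" for the toy model.
    corpus = "the cat sat on the mat the cat ate the rat".split()

    # Count how often each word follows each other word (bigram counts).
    counts: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(context: str) -> str:
        # Sample the next token in proportion to how often it
        # followed `context` in the training data.
        options = counts[context]
        return random.choices(list(options), weights=options.values())[0]

    print(next_token("the"))  # e.g. "cat", "mat", or "rat"
    ```

    Real LLMs replace the count table with a deep network over a huge corpus, but the output is chosen the same way in spirit: whatever is statistically likely to come next.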

    Far more damaging is the proliferation and repetition of false facts that appear on the surface to be genuine.

    Consider the kinds of mistakes AI makes: it hallucinates probable-sounding nonsense. That’s the kind of mistake you can lure an LLM into making more of.

    • Raltoid
      145 days ago

      Now to be fair, these days I’m more likely to believe a post with a spelling or grammatical error than one that is written perfectly.

    • @NotMyOldRedditName@lemmy.world
      4 days ago

      Anthropic is building tools to better understand how LLMs actually work internally, and when they asked one to write a rhyme, they found that the model picked the rhyming words at the ends of the lines first, and then wrote the rest of each line to lead up to them. So it might not be as straightforward as we originally thought.
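
      A toy sketch of that “plan the ending first” idea, just to show the ordering: commit to the rhyming line-final words up front, then fill in the rest of each line. This is not Anthropic’s method or code; the rhyme table and function name are invented for the illustration.

      ```python
      import random

      # Hypothetical rhyme groups for the example.
      RHYMES = {"-ight": ["night", "light", "bright", "sight"],
                "-ay":   ["day", "way", "play", "stay"]}

      def plan_couplet(rhyme_key: str) -> list[str]:
          # Step 1: commit to the two line-final rhyme words first.
          end_a, end_b = random.sample(RHYMES[rhyme_key], 2)
          # Step 2: only then generate the words leading up to them.
          openers = ["We wandered through the", "I dreamed about the"]
          return [f"{openers[0]} {end_a}", f"{openers[1]} {end_b}"]

      print("\n".join(plan_couplet("-ight")))
      ```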

    • @Umbrias@beehaw.org
      24 days ago

      You can poison the well this way too, ultimately, but it’s important to note: generally it is not an LLM cleaning this up, it’s slaves, generally working in terrible conditions.