• raspberriesareyummy@lemmy.world · 28 days ago

    As has been pointed out to you, there is no thinking involved in an LLM. No context comprehension. Please don’t spread this misconception.

    Edit: a typo

      • raspberriesareyummy@lemmy.world · 28 days ago

        That’s leaving out vital information, however. Certain types of brains (e.g. mammal brains) can derive an abstract understanding of relationships from reinforcement learning. An LLM that is trained on “letting go of a stone makes it fall to the ground” will not be able to predict what “letting go of a stick” will result in, unless it is trained on thousands of other non-stick objects also falling to the ground, in which case it will also tell you that letting go of a gas balloon will make it fall to the ground.
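
A toy sketch of that point (purely hypothetical frequency model, nothing like how a real transformer is implemented; the object names and the predict helper are made up for illustration):

```python
from collections import Counter

# Toy training data: (object, observed outcome) pairs. In the argument
# above, imagine thousands of objects, all of which fall.
training = [
    ("stone", "falls to the ground"),
    ("rock", "falls to the ground"),
    ("cup", "falls to the ground"),
]

per_object = {}      # statistics per object seen in training
overall = Counter()  # statistics pooled across all training sentences
for obj, outcome in training:
    per_object.setdefault(obj, Counter())[outcome] += 1
    overall[outcome] += 1

def predict(obj: str) -> str:
    """Most statistically likely completion of 'letting go of a <obj> ...'."""
    stats = per_object.get(obj, overall)  # unseen object: fall back to pooled stats
    return stats.most_common(1)[0][0]

print(predict("stone"))    # falls to the ground (seen in training)
print(predict("stick"))    # falls to the ground (pure statistics, never seen)
print(predict("balloon"))  # falls to the ground (statistically likely, physically wrong)
```

The prediction comes from co-occurrence statistics alone, with no model of gravity or buoyancy behind it, which is the balloon problem described above.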

      • Nalivai@lemmy.world · 27 days ago

        That’s the thing with our terminology: we love to anthropomorphize things. It wasn’t a big problem before, because most people had enough grasp on reality to understand that when a script prints :-) when the result is positive, or :-( otherwise, there is no actual mind behind it that can be happy or sad. But now the generator produces convincing enough sequences of words, so people went mad, and this cute terminology doesn’t work anymore.
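
A minimal sketch of that old-style case (a hypothetical status script, just for illustration):

```python
# A trivial status script: it prints a smiley, but there is obviously
# no mind behind it that is happy or sad about the result.
def report(result_ok: bool) -> str:
    return "Done :-)" if result_ok else "Failed :-("

print(report(True))   # Done :-)
print(report(False))  # Failed :-(
```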

    • kromem@lemmy.world · 27 days ago

      You seem very confident in this position. Can you share where you draw this confidence from? Was there a source that especially impressed upon you the impossibility of context comprehension in modern transformers?

      If we’re concerned about misconceptions and misinformation, it would be helpful to know what informs your certainty that modeling that kind of complexity is impossible.