• @oakey66@lemmy.world
    138 points · 3 months ago

    AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.
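
    To the unconvinced: a stochastic parrot fits in twenty lines of toy Python. This is a bigram model over made-up training text (nothing like a real transformer in scale, but the same basic move): it literally just guesses the next word from counts.

    ```python
    # Toy "stochastic parrot": guess the next word from bigram counts.
    # Illustrative only; a real LLM conditions on long contexts with
    # billions of learned weights, but the generation loop is the same idea.
    import random
    from collections import Counter, defaultdict

    corpus = ("the model guesses the next word and the next word "
              "after that and so on").split()

    # Count which word follows which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            words, counts = zip(*options.items())
            word = random.choices(words, weights=counts)[0]  # guess the next word
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # fluent-ish, meaningless, no understanding anywhere
    ```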

    • @raspberriesareyummy@lemmy.world
      38 points · 3 months ago

      I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.
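
      For comparison, here is the whole of a "simple control loop" (a toy thermostat, numbers made up): it senses, compares to a goal, and acts to correct, which is already more than next-word guessing does.

      ```python
      # Toy thermostat: a proportional control loop with a goal state.
      target = 21.0  # desired temperature in degrees C
      temp = 17.0    # current temperature

      for step in range(5):
          error = target - temp        # sense and compare to the goal
          heating = max(0.0, error)    # act proportionally to the error
          temp += 0.5 * heating        # the room responds
          print(f"step {step}: temp={temp:.1f}")
      ```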

      • @SinningStromgald@lemmy.world
        15 points · 3 months ago

        There are at least three of us.

        I am worried about what happens when the bubble finally pops, because shit always rolls downhill and most of us are at the bottom of the hill.

        • @raspberriesareyummy@lemmy.world
          13 points · 3 months ago

          Not sure if we need that particular bubble to pop for us to be drowned in a sea of shit, looking at the state of the world right now :( But Silicon Valley seems to be at the core of this clusterfuck, as if all the villains either come from there or flock there…

    • @Jesus_666@lemmy.world
      17 points · 3 months ago

      That undersells them slightly.

      LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.

      LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference between those and a real answer.

      They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.

    • @biggerbogboy@sh.itjust.works
      7 points · edited · 3 months ago

      My favourite way to liken LLMs to something else is autocorrect: it just guesses, it gets stuff wrong, and it is constantly being retrained to recognise your preferences, such as when it eventually stops correcting fuck to duck.
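
      Something like this toy sketch (the give-up threshold is made up, and real keyboards use fancier models), which "learns" to stop correcting you:

      ```python
      # Toy autocorrect that adapts to rejections: after you undo a
      # correction enough times, it gives up on that word.
      corrections = {"fuck": "duck"}
      rejections = {}
      GIVE_UP_AFTER = 3  # made-up threshold

      def autocorrect(word, user_rejected=False):
          if user_rejected:
              rejections[word] = rejections.get(word, 0) + 1
              if rejections[word] >= GIVE_UP_AFTER:
                  corrections.pop(word, None)  # learned your preference
              return word
          return corrections.get(word, word)

      for _ in range(3):
          autocorrect("fuck", user_rejected=True)  # user keeps undoing it
      print(autocorrect("fuck"))  # prints "fuck": preference learned
      ```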

      And it’s funny and sad how some people think these LLMs are their friends. Like, no: it’s a colossally sized autocorrect system that you cannot comprehend. It has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
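
      And "predicts using numerical weights" is exactly this step, run in a loop (toy scores and a four-word vocabulary, all made up; a real model scores its whole token vocabulary using billions of weights):

      ```python
      # The output end of an LLM: turn per-token scores (logits) into
      # probabilities and roll weighted dice. No thought involved.
      import numpy as np

      vocab = ["duck", "luck", "truck", "stuck"]
      logits = np.array([2.1, 0.3, -0.5, -1.0])  # made-up network scores

      def sample(logits, temperature=1.0):
          scaled = logits / temperature
          probs = np.exp(scaled - scaled.max())  # numerically stable softmax
          probs /= probs.sum()
          return np.random.choice(len(probs), p=probs)

      print(vocab[sample(logits)])  # weighted dice, nothing more
      ```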