• @oakey66@lemmy.world
    138 points · 3 months ago

    AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.

    • @raspberriesareyummy@lemmy.world
      38 points · 3 months ago

      I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.

      • @SinningStromgald@lemmy.world
        15 points · 3 months ago

        There are at least three of us.

        I am worried what happens when the bubble finally pops because shit always rolls downhill and most of us are at the bottom of the hill.

        • @raspberriesareyummy@lemmy.world
          13 points · 3 months ago

          Not sure if we need that particular bubble to pop for us to be drowned in a sea of shit, looking at the state of the world right now :( But Silicon Valley seems to be at the core of this clusterfuck, as if all the villains come from there or flock there…

    • @Jesus_666@lemmy.world
      17 points · 3 months ago

      That undersells them slightly.

      LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.

      LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference between those and a real answer.

      They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers are always trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.

    • @biggerbogboy@sh.itjust.works
      7 points · edited · 3 months ago

      My favourite thing to liken LLMs to is autocorrect: it just guesses, it gets stuff wrong, and it is constantly being retrained to recognise your preferences, such as learning to stop correcting fuck to duck, for instance.

      And it’s funny and sad how some people think these LLMs are their friends. Like, no, it’s a colossally sized autocorrect system that you cannot comprehend: it has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
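      The autocorrect comparison can be made concrete with a toy sketch (a hypothetical ten-line "model" built from raw word counts over a made-up corpus; real LLMs learn billions of weights with a transformer instead, but the "guess the most likely next word" idea is the same):

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real system trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which, a crude stand-in for learned weights.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    # Return the most common follower, or None if the word is unseen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

      The predictor has no idea what a cat is; it only knows which word tends to come next, which is the point of the analogy.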

  • 3DMVR
    51 points · 3 months ago

    If it can be reached in 60 hour work weeks it can be reached in 40, but nah mfs should rush to get themselves replaced

    • @Lost_My_Mind@lemmy.world
      20 points · 3 months ago

      If it can be reached in 60 hour work weeks, then it can be reached in 40 hour work weeks by hiring a second person.

      If management isn’t willing to put in the effort of hiring the required staff, why would I want to work the job of 2 people for one person’s pay?

      • @frezik@midwest.social
        6 points · 3 months ago

        IT project management doesn’t work that way, but it doesn’t matter much. 60 hour work weeks wouldn’t help, either.

  • @SkunkWorkz@lemmy.world
    50 points · 3 months ago

    lol no way AGI is within reach. He is just trying to hype investors. Bet he has a scheduled stock sale soon.

  • Sibbo
    43 points · 3 months ago

    This is almost too depressing to be funny.

  • @dan1101@lemm.ee
    42 points · 3 months ago

    Let’s work 20 hour weeks then. Who wants AGI other than war pigs and billionaires?

  • billwashere
    42 points · 3 months ago

    I’m really getting sick and tired of these rich fuckers saying shit like this.

    1. we are nowhere close to AGI given this current technology.

    2. working 50% longer is not going to make a bit of difference for AGI

    3. and even if it would matter, hire 50% more people

    The only thing this is going to accomplish is likely make him wealthier. So fuck him.

    • @graphene@lemm.ee
      6 points · edited · 3 months ago

      Increasing working hours decreases actual labor done per hour. A person working 40 hours per week will more often than not achieve more than someone working 70.


      “in Britain during the First World War, there had been a munitions factory that made people work seven days a week. When they cut back to six days, they found, the factory produced more overall.”

      “In 1920s Britain, W. G. Kellogg—the manufacturer of cereals—cut his staff from an eight-hour day to a six-hour day, and workplace accidents (a good measure of attention) fell by 41 percent. In 2019 in Japan, Microsoft moved to a four-day week, and they reported a 40 percent improvement in productivity. In Gothenberg in Sweden around the same time, a care home for elderly people went from an eight-hour day to a six-hour day with no loss of pay, and as a result, their workers slept more, experienced less stress, and took less time off sick. In the same city, Toyota cut two hours per day off the workweek, and it turned out their mechanics produced 114 percent of what they had before, and profits went up by 25 percent. All this suggests that when people work less, their focus significantly improves. Andrew told me we have to take on the logic that more work is always better work. “There’s a time for work, and there’s a time for not having work,” he said, but today, for most people, “the problem is that we don’t have time. Time, and reflection, and a bit of rest to help us make better decisions. So, just by creating that opportunity, the quality of what I do, of what the staff does, improves.””

      • Hari, J. (2022). Stolen Focus: Why You Can’t Pay Attention–and How to Think Deeply Again. Crown.

      In 1920s Britain, W. G. Kellogg: A. Coote et al., The Case for a Four Day Week (London: Polity, 2021), 6.

      In 2019 in Japan, Microsoft moved to a four-day week: K. Paul, “Microsoft Japan Tested a Four-Day Work Week and Productivity Jumped by 40%,” Guardian, November 4, 2019; and Coote et al., Case for a Four Day Week, 89.

      In Gothenberg in Sweden around the same time: Coote et al., Case for a Four Day Week, 68–71.

      In the same city, Toyota cut two hours per day: Ibid., 17–18.


      The real point of increasing working hours is to make your job consume your life.

    • JackFrostNCola
      2 points · edited · 3 months ago

      Or option 4) stay as you are and you will just achieve it in due time, rather than in a 50% shorter timeframe?
      Edit: 25% shorter? I don’t know, maths isn’t my strong suit and I’m drunk.

      • billwashere
        2 points · 3 months ago

        They are very impressive compared to where we were 20 years ago, hell, even 5 years ago. The first time I played with ChatGPT I was absolutely floored. But after playing with a lot of them, even building a few RAG (Retrieval-Augmented Generation) setups, we aren’t really that close, and in my opinion this is not a useful path towards a true AGI. Don’t get me wrong, the tool is extremely useful, and to most people they’d likely pass a basic Turing Test. But LLMs are sophisticated pattern recognition systems trained on vast amounts of text data that predict the most likely next word or token in a sequence. That’s really all they do. They are really good at predicting the next word. While they demonstrate impressive language capabilities, they lack several fundamental components necessary for an AGI:

        - no true understanding
        - no real way to engage with the real world
        - no real ability to learn in real time
        - no ability to take in more than one type of input at a time

        I mean the simplest way in my opinion to explain the difference is you will never have an LLM just come up with something on its own. It’s always just a response to a prompt.
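        That "predict the most likely next token" loop can be sketched in a few lines (a toy, hypothetical "model" with hand-written scores; a real LLM computes these with a transformer over the whole prompt, but the control flow is the same):

```python
import math

# Hypothetical toy score table: for each context word, a numerical
# weight for each possible next word. Entirely made up for illustration.
LOGITS = {
    "how": {"are": 2.0, "is": 1.0},
    "are": {"you": 3.0, "we": 0.5},
    "you": {"<end>": 2.5, "doing": 1.0},
}

def softmax(scores):
    # Turn raw scores into a probability distribution.
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def generate(prompt_word, max_tokens=5):
    # Greedy decoding: repeatedly pick the highest-probability next token.
    out, word = [], prompt_word
    for _ in range(max_tokens):
        probs = softmax(LOGITS.get(word, {}))
        if not probs:
            break
        word = max(probs, key=probs.get)
        if word == "<end>":
            break
        out.append(word)
    return out

print(generate("how"))  # ['are', 'you']
```

        Note that `generate()` only ever runs when handed a prompt; the model never initiates anything on its own, which is the distinction being made above.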

        • @helopigs@lemmy.world
          1 point · 3 months ago

          Sorry for the late reply - work is consuming everything :)

          I suspect that we are (like LLMs) mostly “sophisticated pattern recognition systems trained on vast amounts of data.”

          Considering the claim that LLMs have “no true understanding”, I think there isn’t a definition of “true understanding” that would cleanly separate humans and LLMs. It seems clear that LLMs are able to extract the information contained within language, and use that information to answer questions and inform decisions (with adequately tooled agents). I think that acquiring and using information is what’s relevant, and that’s solved.

          Engaging with the real world is mostly a matter of tooling. Real-time learning and more comprehensive multi-modal architectures are just iterations on current systems.

          I think it’s quite relevant that the Turing Test has essentially been passed by machines. It’s our instinct to gatekeep intellect, moving the goalposts as they’re passed in order to affirm our relevance and worth, but LLMs have our intellectual essence, and will continue to improve rapidly while we stagnate.

          There is still progress to be made before we’re obsolete, but I think it will be just a few years, and then it’s just a question of cost efficiency.

          Anyways, we’ll see! Thanks for the thoughtful reply

  • @7rokhym@lemmy.ca
    40 points · 3 months ago

    Thought this was an Onion article!

    Hey plebs! I demand you work 50% more to develop AGI so that I can replace you with robots and fire all of you and make myself a double plus plutocrat! Also, I want to buy an island, small city, Bunker, Spaceship, And/Or something.

  • @jordanlund@lemmy.world
    34 points · edited · 3 months ago

    If it’s in reach working 60 hour weeks, it’s also in reach working 40 hour weeks, it will just take 50% longer. ;)
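    A quick sanity check on that schedule arithmetic:

```python
# Same total effort spread over fewer hours per week: calendar time
# scales by the ratio of weekly hours.
hours_more, hours_less = 60, 40
stretch = hours_more / hours_less  # 1.5x the calendar time
print(f"{stretch - 1:.0%} longer")  # prints "50% longer"
```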

  • Phoenixz
    31 points · 3 months ago

    Or you could hire 50% more employees for the holy grail of having more wealth than any other company ever after this program.

    But even for something this big (that, incidentally, will end humanity), they are too much of a Scrooge to even pay their employees a normal wage for normal hours.

    Fuck these assholes, burn in hell

    • @mint_tamas@lemmy.world
      3 points · 3 months ago

      It is also absolutely 100% BS investor bait. At this point it should be obvious that we have reached just about the peak of what LLMs can do, and notably Google’s Gemini isn’t even the best of them: other models are generally better. For AGI to be feasible, there would need to be a paradigm shift, which is not a function of more work hours.