• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
  • MudMan
    54 points · 2 years ago

    Oh, you mean it wasn’t just coincidence that the moment OpenAI, Google and MS were in position they started caving to oversight and claiming that any further development should be licensed by the government?

    I’m shocked. Shocked, I tell you.

    I mean, I get that many people were just freaking out about it and it’s easy to lose track, but they were not even a little bit subtle about it.

    • @Kaidao@lemmy.ml
      15 points · 2 years ago

      Exactly. This is classic strategy for first movers. Once you hold the market, use legislation to dig your moat.

      • MudMan
        15 points · 2 years ago

        At worst it’ll be a similar impact to social media and big data.

        Try asking the big players what they think of heavily limiting and regulating THOSE fields.

        They went all “oh, yeah, we’re totally seeing the robot apocalypse happening right here” the moment open source alternatives started to pop up, because at that point regulatory barriers would lock those out while they remain safely grandfathered in. The official releases were straight up claiming only they knew how to do this without making Skynet; it was absurd.

        Which, to be clear, doesn’t mean regulation isn’t needed. On all of the above. Just that the threat is not apocalyptic and keeping the tech in the hands of these few big corpos is absolutely not a fix.

  • Margot Robbie
    37 points · 2 years ago

    Why do you think Sam Altman is always using FUD to push for more AI restrictions? He already got his data collection, so he wants to make sure “Open”AI is the only game in town and prevent any future competition from obtaining the same amount of data they collected.

    Still, I have to give Zuck his credit here: the existence of open models like LLaMa 2 that can be fine-tuned and run locally has really put a damper on OpenAI’s plans.

  • Elias Griffin
    31 points · 2 years ago

    “Ng said the idea that AI could wipe out humanity could lead to policy proposals that require licensing of AI”

    Otherwise stated: Pay us to overregulate and we’ll protect you from extinction. A Mafia perspective.

    • @ohlaph@lemmy.world
      7 points · 2 years ago

      Right?!?!! Lines are obvious. Only if they thought they could get away with it, and they might, actually, but also what if?!?!

  • @JadenSmith@sh.itjust.works
    14 points · edited · 2 years ago

    Lol how? No seriously, HOW exactly would AI ‘wipe out humanity’???

    All this fear mongering bollocks is laughable at this point, or it should be. Seriously there is no logical pathway to human extinction by using AI and these people need to put the comic books down.
    The only risks AI poses are to traditional working patterns, which have always been exploited to further a numbers game between billionaires (and their assets).

    These people are not scared of losing their livelihoods, but of losing the ability to control yours. Something makes life easier and more efficient, requiring less work? Time to crack out the whips, I suppose.

    • @BrianTheeBiscuiteer@lemmy.world
      13 points · edited · 2 years ago

      Working in a corporate environment for 10+ years I can say I’ve never seen a case where large productivity gains turned into the same people producing even more. It’s always fewer people doing the same amount of work. Desired outputs are driven less by efficiency and more by demand.

      Let’s say Ford found a way to produce F150s twice as fast. They’re not going to produce twice as many; they’ll produce the same amount and find a way to pocket the savings without benefiting workers or consumers at all. That’s actually what they’re obligated to do: appease shareholders first.

    • @Plague_Doctor@lemmy.world
      5 points · 2 years ago

      I mean, I don’t want an AI to do what I do as a job. They don’t have to pay the AI, and food and housing, in a lot of places, aren’t seen as a human right but as a privilege you’re allowed if you have the money to buy it.

  • @Substance_P@lemmy.world
    13 points · 2 years ago

    When Google’s annual revenue from its search engine is estimated at around $70 to $80 billion, it’s no wonder big tech is so concerned about the numerous AI tools out there that could spell an end to that fire hose of sweet, sweet monetization.

  • DarkThoughts
    13 points · 2 years ago

    Enforce privacy friendliness and open source through regulation, and all three of those points are likely moot.

  • @fubo@lemmy.world
    6 points · 2 years ago

    The tech companies did not invent the AI risk concept. Culturally, it emerged out of 1990s futurism.

    • @ripe_banana@lemmy.world
      16 points · 2 years ago

      Imo, Andrew Ng is actually a cool guy. He started Coursera and deeplearning.ai to teach people about machine/deep learning. Also, he does a lot of stuff at Stanford.

      I wouldn’t put him in the corporate shill camp.

        • @elliot_crane@lemmy.world
          7 points · 2 years ago

          He really is. He’s one of those rare instructors who can take very complex and intricate topics and break them down into something you can digest as a student, while still giving you room to learn and experiment yourself. In essence, an actual master of his craft.

          I also agree with the comment that he doesn’t come across as the corporate shill type, much more like a guy that just really loves ML/AI and wants to spread that knowledge.

        • AtHeartEngineer
          3 points · 2 years ago

          Same, I went from kind of understanding most of the concepts to grokking a lot of it pretty well. He’s super good at explaining things.

        • @ripe_banana@lemmy.world
          2 points · 2 years ago

          This looks like it’s from the aifund thing he is a part of, but it seems like they took that part out. I have never worked for any of those companies, so idk 🤷‍♂️.

    • @TwilightVulpine@lemmy.world
      1 point · 2 years ago

      The way capitalism may use current AI to cut off a lot of people from any chance at a livelihood is much more plausible and immediately concerning than any machine apocalypse.

  • @Socsa@sh.itjust.works
    3 points · edited · 2 years ago

    Ok, you know what? I’m in…

    If all the crazy people in the world collectively stop spending crazy points on sky wizards and climate skepticism, and put all of their energy into AI doomerism, I legitimately think the world might be a better place.

  • @Fades@lemmy.world
    3 points · 2 years ago

    Obviously a part of the equation. All of these people with massive amounts of wealth, power, and influence push for horrific shit primarily because it’ll make them a fuck ton of money, and the consequences won’t hit till they’re gone, so fuck it.