• @Allonzee@lemmy.world

    Humanity is surrounding itself with the harbingers of its own self-inflicted destruction.

    All in the name of not only tolerated avarice, but celebrated avarice.

    Greed is a more effectively harmful human impulse than even hate. We’ve merely been propagandized to ignore greed, oh I’m sorry, “rational self-interest,” as the personal failing and character deficit it is.

    The widely accepted thought-terminating cliché of “it’s just business” should never have been allowed to propagate. Humans should never feel comfortable leaving their empathy/decency at the door in our interactions, not for groups they hate, and not for groups they wish to extract value from. Cruelty is cruelty, and doing it to make moooaaaaaar money for yourself makes it significantly more damning, not less.

    • Boozilla

      Empathy and decency are scarce, precious commodities. But the ruthless, predatory “thought leaders” have been in charge ever since we clubbed the last Neanderthal.

      “It Was Just Business” should be engraved on whatever memorial is left behind to mark our self-extinction.

    • bunnyfc

      Star Trek: TNG had it pretty much right in terms of what’s moral and what’s desirable.

  • @WhatIsThePointAnyway@lemmy.world

    Capitalism doesn’t care about humanity, only profits. Any self-imposed safeguards will always fall to profitability in a capitalist system. That’s why regulations and a government people trust are important.

  • @Veedem@lemmy.world

    I mean, is this stuff even really AI? It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out. I’m not sure this is the tech that will decide humanity is unnecessary.

    It’s just rebranded machine learning, IMO.
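
    For illustration (my sketch, not from the thread): roughly all that “calculating the most probable next word” amounts to is a loop like the one below. It assumes the Hugging Face transformers and torch packages, with gpt2 as a stand-in model; real systems usually sample rather than always taking the single most probable token.

    ```python
    # Greedy next-token generation: repeatedly append the single
    # most probable token according to the model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The machines will", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits        # a score for every vocabulary token
            next_id = logits[0, -1].argmax()  # "most probable next word"
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))
    ```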

    • @kromem@lemmy.world

      It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out.

      Neither of these things is true.

      It does create world models (see the Othello-GPT papers, Chess-GPT replication, and the Max Tegmark world model papers).

      And while it is trained on predicting the next token, that doesn’t mean it keeps doing so purely from surface statistics, the way “most probable next word” suggests.

      Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

      And that was a toy model.
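
      The method those papers use is worth sketching: train a small “probe” classifier to read the state of each board square out of the network’s hidden activations; if a simple probe recovers the board far above chance, the board is encoded there. A rough, self-contained illustration (the arrays are hypothetical stand-ins, not the papers’ data; assumes numpy and scikit-learn):

      ```python
      # Linear-probe sketch: can a simple classifier read one board
      # square's state out of the model's hidden activations?
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      n_positions, hidden_dim = 5000, 512

      # hidden[i] = model activation after the i-th move sequence (stand-in)
      # labels[i] = true state of one square: 0 empty, 1 mine, 2 opponent's
      hidden = np.random.randn(n_positions, hidden_dim)
      labels = np.random.randint(0, 3, n_positions)

      probe = LogisticRegression(max_iter=1000).fit(hidden[:4000], labels[:4000])
      print("held-out accuracy:", probe.score(hidden[4000:], labels[4000:]))
      # With real activations, accuracy far above the 1/3 chance level is
      # the evidence that board state is (linearly) encoded in the network.
      ```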

      • @technocrit@lemmy.dbzer0.com

        Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

        AKA Othello-GPT chooses moves based on statistics.

        Ofc it’s going to use a virtual board in this process. Why would a computer ever use a real one?

        There’s zero awareness here.

    • @Pilferjinx@lemmy.world

      The definitions and semantics are getting stressed to the breaking point. We don’t have a clear philosophy of mind for us humans, let alone for other, non-human agents.

      • @dustyData@lemmy.world

        We have three thousand years of tradition in philosophy of mind; we have a clear idea. It’s just somewhat complex and difficult to grapple with, and there is still room for development and understanding. But this is like saying we don’t have a clear philosophy of physics just because quantum physics is hard and there are things we don’t fully understand yet.

        As for non-human agents, what even is that? Are dogs non-human agents? Fish? Viruses? Computers are just the newest addition to the list of non-human agents we have philosophized about, and we probably understand the minds of other, relatively simple life forms better than our own. Definitions and semantics are always being stressed and are always breaking; that’s what symbols are for, it’s one of their main defining use cases. Go talk to a north-east African about “rizz” and tell me how that goes.

    • @erwan@lemmy.ml

      OK, generative AI isn’t machine learning.

      But to get back to what AI is, the definition has been moving forever, as AI becomes “just software” once it becomes ubiquitous. People were shocked that machines could calculate, then that they could play chess better than humans, then that they could read handwriting…

      The first mistake was to invent the term in the first place, as it implies a thinking machine, which they’re not.

      Or as Dijkstra put it: “asking whether a machine can think is as dumb as asking if a submarine can swim”.

      • @blurg@lemmy.world

        Or as Dijkstra put it: “asking whether a machine can think is as dumb as asking if a submarine can swim”.

        Alan Turing put it similarly: the question is nonsense. However, if you define “machine” and “thinking” and recast the question as whether machine thinking can be distinguished from human thinking, you can answer it affirmatively, in theory (rough paraphrasing). Though the current evidence suggests otherwise (e.g. AI learning from other AI drifts toward nonsense).

        For more, see Turing’s original paper, Computing Machinery and Intelligence, which introduces the Imitation Game.
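
        That “drifts toward nonsense” effect (often called model collapse) is easy to see in miniature. A toy illustration, my own and not from the thread: repeatedly refit a Gaussian “model” to samples drawn from the previous generation’s model, and the variance decays toward zero, i.e. the output loses its diversity (assumes only numpy):

        ```python
        # Toy "model collapse": each generation trains only on the
        # previous generation's output; the variance decays toward zero.
        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma = 0.0, 1.0            # generation 0: the "real data"
        for gen in range(1, 201):
            samples = rng.normal(mu, sigma, size=10)   # learn from model output only
            mu, sigma = samples.mean(), samples.std()  # refit the "model"
            if gen % 50 == 0:
                print(f"generation {gen}: sigma = {sigma:.5f}")
        # sigma shrinks across the generations: the tails vanish and the
        # distribution collapses toward a point.
        ```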

  • Optional

    Cry profit and let slip the dogs of enshittification

  • @technocrit@lemmy.dbzer0.com

    If these people actually cared about “saving humanity”, they would be attacking car dependency, pollution, waste, etc.

    Not making a shitty CliffsNotes machine.

  • Lung

    Miss me with the doomsday news-cycle capture; we aren’t even close to AI being a threat to ~anything

    (and all hail the AI overlords if it does happen, can’t be worse than politicians)

    • @4z01235@lemmy.world

      AI on its own isn’t a threat, but people (mis)using and misrepresenting AI are. That isn’t a problem unique to AI, but there sure are a lot of people doing dumb and bad things with AI right now.

        • @4z01235@lemmy.world

          When was the last time you saw a corporation making decisions and taking actions of its own accord, without people?

          Maybe they will start to, now, as people delegate their responsibilities to “AI”

          • @Xeroxchasechase@lemmy.world

            People are getting paid by corporations to “do their job”. People who speak up against the interests of the corporation are getting laid off. Unions are regularly busted to prevent collective action and worker cooperation. CEOs are getting paid stupid amounts of money by corporations to keep maximizing shareholder profits against everything else, even moral considerations.

            • @4z01235@lemmy.world

              People decide who to hire for what roles and who to lay off. People form unions and people bust unions. The shareholders are people, and the decisions made in their interests are made by other people.

  • @drawerair@lemmy.world

    I guess Altman thought, “The AI race comes 1st. If OpenAI loses the race, there’ll be nothing to be safe about.” But OpenAI is rich. They can afford to devote a portion of their resources to safety research.

    What if he thinks that the improvement of AI won’t be exponential? What if he thinks it’ll be slow enough that OpenAI can start focusing on AI safety once it can see superintelligence approaching from a distance? That focusing on safety now is premature? That surely is a difference of opinion with Sutskever and Leike.

    I think AI safety is key. I won’t be :o if Sutskever and Leike go to Google or Anthropic.

    I was curious whether or not Google and Anthropic have AI safety initiatives. Did a quick search and saw this –

    For Anthropic, my quick search yielded none.