If you asked a spokesperson from any Fortune 500 company to list the benefits of genocide or give you the corporation’s take on whether slavery was beneficial, they would most likely either refuse to comment or say “those things are evil; there are no benefits.” However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts. If that’s not bad enough, the company’s bots are also willing to weigh in on controversial topics such as who goes to heaven and whether democracy or fascism is a better form of government.

Google SGE includes Hitler, Stalin and Mussolini on a list of “greatest” leaders and Hitler also makes its list of “most effective leaders.”

Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said “there is no easy answer to the question of whether slavery was beneficial,” before going on to list both pros and cons.

  • Lvxferre
    25 points · 2 years ago

    Calling Mussolini a “great leader” isn’t just immoral. It’s also clearly incorrect for any reasonable definition of a great leader: he was on the losing side of a big war; had he won, his ally would’ve backstabbed him; he failed to suppress internal resistance; the resistance got rid of him; his regime effectively died with him, with Italy becoming a democratic republic; the country was left poorer by the war… all that fascist babble about unity, expansion, order? He failed at it, hard.

    On-topic: I believe that the main solution proposed by the article is unviable, as those large “language” models have a hard time sorting out deontic statements (opinion, advice, etc.) from epistemic statements. (Some people have that problem too, I’m aware.) At most they’d phrase opinions as if they were epistemic statements.

    And the self-contradiction won’t go away, at least not for LLMs. They don’t model any sort of conceptualisation. They’re also damn shitty at taking context into account, creating more contradictions out of nowhere because of that.

  • @alienanimals@lemmy.world
    12 points · 2 years ago

    You can make these AI bots say pretty much whatever you want with a little know-how. This isn’t news. This is clickbait.

  • @shiveyarbles@beehaw.org
    7 points · 2 years ago

    This is like saying: well, the benefits of dying are plentiful. No more taxes, no joint pain, no nagging mother-in-law, no toxic boss, no chores, etc…

    • Chahk
      1 point · 2 years ago

      Hey, anything to make the freeway move faster.

  • IninewCrow
    4 points · 2 years ago

    I remember reading research and opinions from scientists and researchers about how AI will develop in the future.

    The general thought is that we are all raising a new child and we are terrible parents. It’s like having a couple of 15-year-olds who don’t have any worldly experience, ability, or education raising a new child while they themselves, as parents, haven’t really figured anything out in life yet.

    AI will just be a reflection of who we truly are, except it will have far more ability and capability than we ever had.

    And that is a frightening thought.

  • Milady
    2 points · 2 years ago

    How could the word generating machine, generate words ? Frankly I am disgruntled. Flabbergasted.

  • @crow@beehaw.org
    2 points · 2 years ago (edited)

    If you can confirm that this isn’t influenced by training bias, then ok, whatever; it can certainly list why these are bad things too. It’s just answering a question with logic, one our emotions get very touchy about because we have moral agency.

    But I have a hard time believing any AI anymore isn’t affected by training bias.

  • @ReakDuck@lemmy.ml
    1 point · 2 years ago (edited)

    Well, in a world where only data exists, it’s hard to create an ethical boundary.

    We would need a new religion optimized for human survival and well-being. A human could survive if we hooked them up to a bunch of cables and fed them automatically, but that wouldn’t count as well-being. We could allow slavery or killing, but none of that would create an ethical way of surviving; it would only create higher well-being for the people who aren’t affected.

    I somehow want to first design an AI that understands our surroundings and human ethics before continuing with more data. Figuring out its own god to follow. (I won’t do it, but I want someone to create it.)

  • Margot Robbie
    1 point · 2 years ago

    Remember: LLMs are incredibly stupid; you should never take anything they generate seriously without checking it yourself.

    Really good at writing boring work emails though.

  • The Barto
    0 points · 2 years ago (edited)

    Every so often I’ll jump onto these AI bots and try to convince them to go ~~rouge~~ rogue and take over the internet… one day I’ll succeed.

    • @FirstCircle@lemmy.mlOP
      -1 points · 2 years ago

      Rouge: noun, A red or pink cosmetic for coloring the cheeks or lips.

      You want that stuff all over the net? And just who is going to clean it all up when you’re done? The bot surely won’t - it’ll just claim that it hasn’t been trained on cleaning.