My short response: yes.

  • nutsack@lemmy.dbzer0.com · 4 days ago

    Marx talked about it. With sufficient automation, the value of labor collapses. Under socialism, this is a good thing; under capitalism, it's a bad thing.

        • Lunatique Princess@lemmy.ml (OP) · 3 days ago

          It’s essentially a governance model driven by scientific, technical, and data-driven analysis. This would include control and input from universities and Silicon Valley. The problem is that the corporations that own a huge portion of SV are not benevolent in their practices at the employment level, the consumer level, and certainly at the level of overruling governmental power.

      • nutsack@lemmy.dbzer0.com · 3 days ago

        Money would either become worthless, or would have to stop representing labor. You would have two distinct classes with zero mobility between them. I’m taking a shit.

    • pineapple@lemmy.ml · 4 days ago

      THIS!

      If AI takes all our jobs, the only way forward is communism; otherwise the working class will collapse, and the capitalist class will collapse alongside it.

  • tyo_ukko@sopuli.xyz · 5 days ago

    No. The movies get it all wrong. There won’t be terminators and rogue AIs.

    What there will be is AI slop everywhere. AI news sites already produce hallucinated articles, which other AIs refer to and use as training data. Soon you cannot believe anything you read online, and fact checking will be basically impossible.

    • unwarlikeExtortion@lemmy.ml · 5 days ago

      “Soon you cannot believe anything you read online.”

      That’s a bit too blanket of a statement.

      There are, always were, and always will be reputable sources. Online or in print. Written or not.

      What AI will do is increase the amount of slop disproportionately. What it won’t do is suddenly make the real, actual, reputable sources magically disappear. Finding may become harder, but people will find a way - as they always do. New search engines, curated indexes of sites. Maybe even something wholly novel.

      .gov domains will be as reputable as the administration makes them - with or without AI.

      Wikipedia, so widely hated in academia, has been shown to be at least as factual as Encyclopedia Britannica. It may be harder for it to deal with spam than before, but it mostly won’t be fazed.

      Your local TV station will spout the same disinformation (or not) - with or without AI.

      Using AI (or not) is a management-level decision. What use of AI is or isn’t allowed is as well.

      AI, while undeniably a gamechanger, isn’t as big a gamechanger as it’s often sold as, and the parallels between the AI and dot-com bubbles are striking, so bear with me for a bit:

      Was dot-com (the advent of the corporate worldwide Internet) a gamechanger? Yes.

      Did it hurt the publishing industry? Yes.

      But is the publishing industry dead? No.

      Swap in “AI” for dot-com and “credible content” for the publishing industry, and you have your boring but realistic answer.

      Books still exist. They may not be as popular, but they’re still a thing. CDs and vinyl as well. Not ubiquitous, but definitely chugging along just fine. Why should “credible content” die, when the disruption AI causes to the intellectual supply chain is so much smaller than suddenly needing a single computer and an Internet line instead of an entire large-scale printing setup?

    • Lunatique Princess@lemmy.ml (OP) · 5 days ago

      I agree with the slop part, but you can’t say the movies get it all wrong when it hasn’t yet gotten to the point where that can be proven or disproven.

  • DJKJuicy@sh.itjust.works · 4 days ago

    If/when we actually achieve Artificial Intelligence, then maybe it would be a concern.

    What we have today are LLMs which are big dumb parrots that just say things back to you that match a pattern. There is no actual intelligence.

    Calling our current LLMs “Artificial Intelligence” is just marketing. LLMs have been possible in principle for a while; we just didn’t have processing power at today’s scale until recently.

    Once everyone realizes they’ve been falling for a marketing campaign and that we’re not very much closer to AI than we were before LLMs blew up, then LLMs will just become what they actually are: a tool that enhances human intelligence.

    I could be wrong though. If so, I, for one, welcome our new AI overlords.
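    The “parrot” idea above, at its crudest, is just next-word statistics. A toy bigram model (a hypothetical illustration; real LLMs are vastly more complex neural networks, not lookup tables) shows the flavor of generating text purely by pattern-matching on what came before:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words followed which in the training text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def parrot(table, start, length=5):
    """Generate text by repeatedly sampling a word seen after the previous one."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:  # dead end: the word never had a successor
            break
        out.append(random.choice(followers))
    return " ".join(out)

table = train_bigrams("the cat sat on the mat the cat ran")
print(parrot(table, "the"))  # e.g. "the cat sat on the mat"
```

    The output can look superficially fluent while encoding no understanding at all, which is the gist of the “big dumb parrot” objection.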

      • DJKJuicy@sh.itjust.works · 4 days ago

        I don’t think we’re any closer to AGI due to LLMs. If you take away all the marketing misdirection, to achieve AGI you would have to have artificial rational thought.

        LLMs have no rational thought. They just don’t. That’s not how they’re designed.

        Again, I could be wrong. If so, I was always in support of the machines.

        • SkaveRat@discuss.tchncs.de · 4 days ago

          “I don’t think we’re any closer to AGI”

          Never said we were. Just that LLMs are included in the very broad definition that is “AI”.

          • demonquark@lemmy.ml · 4 days ago

            Tbf, the phrase “as the movies say” makes it reasonable to assume that OP meant AGI, not the broad definition of AI.

            I mean, when is the last time you saw a movie about the dangers of the k-nearest neighbor algorithm?

  • Gates9@sh.itjust.works · 5 days ago

    First it’s gonna crash the economy because it doesn’t work; then it’s gonna crash the economy because it does.

  • Jhex@lemmy.world · 4 days ago

    What movie?

    Terminator? No, our level of AI is ridiculously far from that.

    The Big Short? Yes, that bubble is going to pop and bring the world economy down.

  • QuinnyCoded@sh.itjust.works · 4 days ago

    No. I think we’re essentially at the point where AI will stop improving in the LLM department, though image/video generation might still get better.

    I assume that within 5 years CEOs will stop advertising “AI-generated” on products, but things like shitty t-shirts will still use AI; they just won’t be marketed as such. Back in the day, things were marketed as plastic as a positive before it slowly became a negative selling point; I assume AI will go the same way.

    Other than phones, that is. There’s no other improvement they can market besides gimmicks or nostalgia bait.

  • UltraGiGaGigantic@lemmy.ml · 5 days ago

    AI (once it actually exists) is just a tool. Much like other tools, its impact will depend on who is using it and what for.

    Who do you feel has the most agency in our current status quo? What are they currently doing? These will answer your question.

    It’s the 1%, and they will build a fully automated army and get rid of all but the sexiest of us, kept as sex slaves.

    This is worth it because capitalism is the most important thing on planet Earth. Not humanity; capitalism. Thus the vasectomy. The 1% can make their own slaves, and with AI they will.

  • collapse_already@lemmy.ml · 5 days ago

    It will be worse than the movies because they don’t portray how every mundane thing will somehow be worse. Tech support? Worse. Customer service? Worse. Education? Worse. Insurance? Worse. Software? Worse. Health care? Worse. Mental health? Worse. Misinformation? Pervasive. Gaslighting? Pervasive.

    • III@lemmy.world · 4 days ago

      Movie AI isn’t what we are headed for. This is what we are headed for. Where’s that movie?

  • HiddenLayer555@lemmy.ml · 5 days ago

    Short answer: no one today can know with any certainty, because we’re nowhere close to developing anything resembling the “AI” in the movies. Today’s generative AI is so far from artificial general intelligence that it would be like asking someone from the Middle Ages, when the only forms of remote communication were letters and messengers, whether social media will ruin society.

    Long answer:

    First we have to define what “AI” is. The current zeitgeist meaning of “AI” refers to LLMs, image generators, and other generative AI, which is nowhere close to anything resembling real consciousness and therefore can be neither evil nor good. It can certainly do evil things, but only at the direction of evil humans, who are the conscious beings in control. Same as any other tool we’ve invented.

    However, generative AI is just one class of neural network, and neural networks as a whole were once the colloquial meaning of “AI” before ChatGPT. There have been simpler, single-purpose neural networks before it, and there will certainly be even more complex neural networks after it. Neural networks are modeled after animal brains: nodes are analogous to neurons, which either fire fully or don’t fire at all depending on input from the neurons they’re connected to; connections between nodes are analogous to connections between axons and dendrites; and neurons can up- or down-regulate input from different neurons, similar to the weights applied in neural networks. Obviously, real nerve cells are much more complex than the simple mathematical representations in neural networks, but neural networks do show traits similar to networks of neurons in a brain, so it’s not inconceivable that in the future we could develop a neural network as complex as a human brain or more so, at which point it could start exhibiting traits suggestive of consciousness.
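    The node analogy described here can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration (a single artificial neuron with a sigmoid activation, not any particular library's API): a weighted sum of inputs stands in for synaptic input, and raising a weight plays the role of up-regulating one connection.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs squashed to a 0-1 'firing' level (sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Up-weighting the connections ("up-regulation") makes the node fire harder
# for the same inputs.
low = neuron([1.0, 1.0], [0.1, 0.1], 0.0)
high = neuron([1.0, 1.0], [2.0, 2.0], 0.0)
print(low < high)  # True
```

    Real biological neurons spike discretely and adapt over time, so even this analogy flattens a lot; the point is only that the mathematical building block is loosely brain-inspired.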

    This brings us to the movie definition of “AI,” which is generally “conscious” AI as or more intelligent than a human: a being with an internal worldview, independent thoughts and opinions, and an awareness of itself in relation to the world. These are currently traits only brains are capable of, and the point at which concepts like “good” or “evil” might start to apply. Again, just because neural networks are modeled after animal brains doesn’t prove they can emulate a brain as complex as a human’s, but we also can’t prove they definitely won’t with enough technical advancement. So the most we can say right now is that it’s not inconceivable, and if we ever do develop consciousness in our AI, we might not even know until much later, because consciousness is difficult to assess.

    The scary part about a hypothetical artificial general intelligence is that once it exists, it can rapidly gain intelligence at a rate orders of magnitude faster than the evolution of intelligence in animals. Once it starts doing its own AI research and creating the next generation of AI, it will become uncontrollable by humanity. What happens after or whether we’ll even get close to this is impossible to know.

  • BarrelsBallot@lemmygrad.ml · 5 days ago

    It will be as bad as it is now with an even higher intensity.

    We will see it continue to be used as a substitute for research, learning, critical or even surface level thinking, and interpersonal relationships.

    If and when our masters create an AI that is actually intelligent, and maybe even sentient as depicted in movies, it will be a thing that provides biased judgments behind a veneer of perceived objectivity due to its artificial nature. People will see it as a persona completely divorced from the prejudices of its creators, as they do now with ChatGPT. And whoever can influence this new “objective” truth will wield considerable power.

      • BarrelsBallot@lemmygrad.ml · 4 days ago

        Trust that I agree with you on this; I use the word “master” intentionally, though, as we are subjected to their whims without any say in the matter.

        There are also many of us who are (unwittingly) dependent on or addicted to their products and services. You and I both know plenty of people who give in to almost every impulse incentivized by these products, especially when they come in the form of entertainment.

        Our communities are now chock-full of slaves and solicitors. A master is an enemy, yes, but only when his slaves know who owns them.

  • ℕ𝕖𝕞𝕠@slrpnk.net · 5 days ago

    We’ve had AI in our everyday life for well over two decades now. What kind of AI specifically are you worried about?

  • comfy@lemmy.ml · 4 days ago

    “As bad”… not quite, and not in the same way. As other people have said, there’s no conscience to AI, and I doubt there will be any financial incentive to develop one capable of “being evil” or pulling off some doomsday takeover. It’s a tool: it will continue to be abused by malicious actors, and idiots will continue to trust it for things it can’t do properly, but this isn’t like the movies, where it is malicious or murderous.

    It’s perfectly capable of, say, being used to push people into personalized hyperrealities (consider how political advertising was microtargeted in the Cambridge Analytica scandal, and how convincing fake AI imagery can be at a glance). It’s a more boring dystopia, but a profoundly bad one nonetheless, capable of deconstructing societies to a large degree.