Great headline, but ask fusion how long it’s been 20 years away and how many more years it has…

  • IninewCrow
    22 points · 5 months ago

    I don’t think AI will wipe us out

    I think we will wipe ourselves out first.

    • Transient Punk
      14 points · 5 months ago

      We are the “creators” of AI, so if it wipes us out, that would be us wiping ourselves out.

      In the end, short of a natural disaster (not climate change), we will be our own doom.

      • IninewCrow
        3 points · 5 months ago

        My thinking is that we will probably wipe ourselves out through war / conflict / nuclear holocaust before AI ever gets to the point of having any kind of power or influence over the planet or humanity as a whole.

    • @7rokhym@lemmy.ca
      7 points · edited · 5 months ago

      Growing up years ago, I found a book on my parents’ bookshelf. I wish I’d kept track of it, but it had a cartoon of two Martians standing on Mars watching the Earth explode, with one commenting to the other something along the lines of: intelligent life forms must have lived there to accomplish such a feat. I was probably 8 or 9 at the time, but it’s stuck with me.

      It only took a Facebook recommendation engine and some cell phones to incite people into murdering each other in the ongoing Rohingya genocide. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html

      We don’t need AI, and at this point it uses so much electricity that it is probably the first thing that would get shut down in a shit-hits-the-fan moment.

  • @kritzkrieg@lemm.ee
    16 points · 5 months ago

    Ngl, I kinda hate these articles because they feel so… click-baity? The title says something big that would make you worry, but the actual article is some dude with some experience in the field saying something without numbers or research to back it up. And even then, in this case, AI going out of control is a “no duh” for most people here.

  • @aesthelete@lemmy.world
    12 points · edited · 5 months ago

    Especially if we let its half-baked incarnations operate our cars and act as claims adjusters for our for-profit healthcare system.

    AI is already killing people for profit right now.

    But, I know, I know, slow, systemic death of the vulnerable and the ignorant is not as tantalizing a storyline as doomsday events from Hollywood blockbusters.

      • @aesthelete@lemmy.world
        2 points · edited · 5 months ago

        An “AI”-operated machine gun turret doesn’t have to be sentient in order to kill people.

        I agree that people are the ones allowing these things to happen, but software doesn’t have to have agency to appear that way to laypeople, and when people are placed in a “managerial” or “overseer” role, they behave as if the software knows more than they do, even when they’re subject matter experts.

        • @daniskarma@lemmy.dbzer0.com
          1 point · edited · 5 months ago

          When it comes to that ethical issue, would it be any different if the AI-operated machine gun or the corporate software were driven by traditional algorithms instead of an LLM?

          Because a machine gun does not need “modern” AI to be able to take aim and shoot at people, I guarantee you that.

          • @aesthelete@lemmy.world
            3 points · edited · 5 months ago

            No, it wouldn’t be different. Though it’d definitely be better to have a discernible algorithm / explicable set of rules for things like health care. Them shrugging their shoulders and saying they don’t understand the “AI” should be completely unacceptable.

            I wasn’t saying AI = LLM either. Whatever drives Teslas is almost certainly not an LLM.

            My point is that half-baked software is already killing people daily, but because it’s more dramatic to pontificate about the coming of Skynet, the “AI” people waste time on sci-fi nonsense scenarios instead of drawing any attention to that.

            Fighting the ills that bad software is already causing today would also do a lot to advance the cause of preventing bad software from reaching the imagined apocalyptic point in the future.

  • @conditional_soup@lemm.ee
    4 points · 5 months ago

    I see your line about fusion, and I’d like to raise the point that commercial fusion reactors are no longer coming in thirty years but five. Now, is it hype? It’s unclear to me because I’m honestly fucking drowning in cope, my dudes. But Commonwealth Fusion, a spinoff of MIT’s fusion group, is building, right now, a commercial fusion reactor in Virginia. They did some really cool shit with high(er) temp superconducting magnets in their tokamak design and project that they can break Q10 (that is, get 10x the energy out that they put in) at scale. They’re also licensing and building these reactors for other interested parties IIRC.
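
    For anyone puzzling over the “Q10” jargon: the fusion gain factor Q is just the ratio of fusion power out to heating power in, and “breaking Q10” means Q ≥ 10. Here’s a minimal sketch with made-up power figures purely for illustration (not actual Commonwealth Fusion projections):

    ```python
    # Fusion gain factor Q = P_fusion / P_heating.
    # "Breaking Q10" means getting at least 10x the power out that you put in.
    # The numbers below are illustrative placeholders, not real reactor data.

    def fusion_gain(p_fusion_mw: float, p_heating_mw: float) -> float:
        """Return the gain factor Q for a given fusion output and heating input power (MW)."""
        return p_fusion_mw / p_heating_mw

    q = fusion_gain(p_fusion_mw=140.0, p_heating_mw=12.0)
    print(f"Q = {q:.1f}")                              # ~11.7
    print("breaks Q10" if q >= 10 else "still below Q10")
    ```

    (Worth noting that Q as usually quoted is plasma gain; it doesn’t account for the electricity needed to run the rest of the plant.)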

    They’re not the only ones. There are a few other companies working on fusion that seem to be making some really exciting strides, and I know China’s also made some pretty impressive advances as well. Livermore Labs also claimed to have broken unity in 2014 with laser-driven capsule implosion, but AFAIK on peer review, it turned out that they used some sketchy-ass math to make that case, not to mention that that tech can’t really scale well. Since then, I seem to remember there have been several other claims of having broken unity (at least one of which was Livermore Labs again), though I have no idea how well they hold up to peer review. The point is that we’re actually finally seeing some movement in the field of nuclear fusion, including the ongoing development of commercial grid-scale reactors by at least one venture. I don’t think it’s enough to get fusion out of its infamous doghouse, not yet, but it’s worth being aware of.

  • @ZILtoid1991@lemmy.world
    4 points · 5 months ago

    How people think AI will wipe out humanity: Terminator!

    How it will actually wipe out humanity: global warming caused by the power consumption of data centers, water shortages caused by the water usage of data centers, etc.

    • @Gumus@lemmy.world
      1 point · 5 months ago

      I’ve seen such comments multiple times and it makes me curious… What do you think actually happens when water is used in datacenters?