Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • @BertramDitore@lemm.ee
    51 points · 1 month ago

    I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

    I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

    Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

    Every step of any deductive process needs to be citable and traceable.

    • @DomeGuy@lemmy.world
      10 points · 1 month ago

      Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.

    • @davidgro@lemmy.world
      7 points · 1 month ago

      … I want clear evidence that the LLM … will never hallucinate or make something up.

      Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.

      • mosiacmango
        9 points · edited · 1 month ago

        If “they have to use good data and actually fact-check what they say to people” kills “all machine learning models,” then it’s a death they deserve.

        The fact is that you can do the above; it’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the “answer-to-everything machine.”

        • Redex
          2 points · 1 month ago

          The way generative AI works means that no matter how good the data is, it’s still gonna bullshit and lie; it won’t “know” whether it knows something or not. It’s a chaotic process, and no ML algorithm has ever produced 100% correct results.

          • mosiacmango
            3 points · 1 month ago

            That’s how they work now, trained with bad data and designed to always answer with some kind of positive response.

            They absolutely can be trained on actual data, trained to give less confident answers, and have an error checking process run on their output after they formulate an answer.
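For what it’s worth, the post-generation error check described above is a real control-flow pattern, not magic. Here is a minimal sketch under stated assumptions: `draft_answer` and `supported_by_sources` are hypothetical stand-ins for a real model and a real curated source database (production systems would use retrieval and semantic matching rather than substring checks):

```python
def draft_answer(question):
    # Hypothetical stand-in for a model's raw output plus a
    # self-reported confidence score between 0.0 and 1.0.
    return "Aspirin was first synthesized in 1897.", 0.62

# Hypothetical trusted-source store; real systems would index many
# vetted documents, not a hard-coded list.
TRUSTED_SOURCES = [
    "Aspirin was first synthesized in 1897 by Felix Hoffmann at Bayer.",
]

def supported_by_sources(claim, sources):
    # Toy fact check: accept the claim only if a trusted source
    # actually contains it.
    needle = claim.lower().rstrip(".")
    return any(needle in source.lower() for source in sources)

def answer(question, min_confidence=0.5):
    # Refuse to answer instead of guessing: low confidence or an
    # unverifiable claim both produce an explicit refusal.
    claim, confidence = draft_answer(question)
    if confidence < min_confidence:
        return "I don't know."
    if not supported_by_sources(claim, TRUSTED_SOURCES):
        return "I couldn't verify that."
    return claim

print(answer("When was aspirin first synthesized?"))
```

The point is the shape of the pipeline, not the toy check: the model’s draft is treated as a claim to be validated against trusted data, and “I don’t know” is a first-class answer rather than a failure mode.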

            • @davidgro@lemmy.world
              1 point · 1 month ago

              There’s no such thing as perfect data. Especially if there’s even the slightest bit of subjectivity involved.

              Even less existent is complete data.

              • mosiacmango
                1 point · 1 month ago

                Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what’s possible as an excuse to not even try.

                They could indeed build models that worked on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don’t want to, for all the same reasons I’ve already stated.

                It’s possible, and it does not “doom” LLMs; it just massively increases their accuracy and actual utility at the cost of money, effort, and killing the VC hype cycle.

                • @davidgro@lemmy.world
                  1 point · 1 month ago

                  The original thread poster (OTP?) implied perfection when they emphasized the “will never” part, and I was responding to that. For that matter it also excludes actual brains.

      • @BertramDitore@lemm.ee
        4 points · 1 month ago

        Let’s say I open a medical textbook a few different times to find the answer to something concrete, and each time the same reference material leads me to a different answer, every one of them wrong but confidently passed off as right. Then yes, that medical textbook should be banned.

        Quality control is incredibly important, especially when people will use these systems to make potentially life-changing decisions for them.

        • @davidgro@lemmy.world
          4 points · 1 month ago

          especially when people will use these systems to make potentially life-changing decisions for them.

          That specifically is the problem. I don’t have a solution, but treating and advertising these things like they think and know stuff is a mistake that of course the companies behind them are encouraging.

    • @venusaur@lemmy.worldOP
      3 points · 1 month ago

      This is awesome! The citing and tracing is already improving. I feel like no hallucinations is gonna be a while.

      How does it all get enforced? FTC? How does this become reality?

    • @minoscopede@lemmy.world
      1 point · 1 month ago

      I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

      Every step of any deductive process needs to be citable and traceable.

      I mostly agree, but “never” is too high a bar IMO. It’s way, way higher than the bar even for humans. Maybe like 0.1% or something would be reasonable?

      Even Einstein misremembered things sometimes.

    • OpenAI, for example, needs to be regulated with the same intensity as a much smaller company

      Not too long ago, they went to Congress to get them to regulate the AI industry a lot more and wanted the government to require licenses to train large models. Large companies can benefit from regulations when those regulations aren’t easy for smaller competitors to follow.

      And OpenAI should have no say in how they are regulated.

      For sure, otherwise regulation could be made too restrictive, lowering competition.

      Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

      I think that’s technically really difficult, but maybe if the output of the model were checked against preexisting sources, that could happen, like what Google uses for Gemini.

      Every step of any deductive process needs to be citable and traceable.

      I’m pretty sure this is completely impossible

  • Jeffool
    28 points · 1 month ago

    Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I’d love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.

    I don’t love the environmental effects, but I think the carbon output of OpenAI is probably less than TikTok’s, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I’m not sure it’s enough to really matter. I think we need bigger “what if” scenarios to handle that.

  • @Furbag@lemmy.world
    27 points · 1 month ago

    Long, long before this AI craze began, I was warning people as a young 20-something political activist that we needed to push for Universal Basic Income, because the inevitable march of technology would mean that labor itself would become irrelevant in time, and that we needed to hash out a system to maintain the dignity of every person now rather than wait until the system is stressed beyond its ability to cope with massive layoffs and entire industries taken over by automation/AI. When the ability of the average person to sell their ability to work becomes fundamentally compromised, capitalism will collapse in on itself. I’m neither pro- nor anti-capitalist, but people have to acknowledge that nearly all of Western society is based on capitalism, and if capitalism collapses then society itself is in jeopardy.

    I was called alarmist, that such a thing was a long way away and we didn’t need “socialism” in this country, that it was more important to maintain the senseless drudgery of the 40-hour work week for the sake of keeping people occupied with work but not necessarily fulfilled because the alternative would not make the line go up.

    Now, over a decade later, and generative AI has completely infiltrated almost all creative spaces and nobody except tech bros and C-suite executives are excited about that, and we still don’t have a safety net in place.

    Understand this - I do not hate the idea of AI. I was a huge advocate of AI, as a matter of fact. I was confident that the gradual progression and improvement of technology would be the catalyst that could free us from the shackles of the concept of a 9-to-5 career. When I was a teenager, there was this little program you could run on your computer called Folding At Home. It was basically a number-crunching engine that used your GPU to fold proteins, and the data was sent to researchers studying various diseases. It was a way for my online friends and me to flex how good our PC specs were with the number of folds we could complete in a given time frame, and we got to contribute to a good cause at the same time. These days, they use AI for that sort of thing, and that’s fucking awesome. That’s what I hope to see AI do more of - take the rote, laborious, time-consuming tasks that would take one or more human beings a lifetime to accomplish using conventional tools, and have the machine assist in compiling and sifting through the data to find the most important aspects. I want to see more of that.

    I think there’s a meme floating around that really sums it up for me. Paraphrasing, but it goes “I thought that AI would do the dishes and fold my laundry so I could have more time for art and writing, but instead AI is doing all my art and writing so I have time to fold clothes and wash dishes.”

    I think generative AI is both flawed and damaging, and it gives AI as a whole a bad reputation because generative AI is what the consumer gets to see, and not the AI that is being used as a tool to help people make their lives easier.

    Speaking of that, I also take issue with the fact that we are more productive than ever before, and AI will only continue to improve that productivity margin, but workers and laborers across the country will never see a dime of compensation for it. People might be able to do the work of two or even three people with the help of AI assistants, but they certainly will never get the salary of three people, and it means that two out of those three people probably don’t have a job anymore if demand doesn’t increase proportionally.

    I want to see regulations on AI. Will this slow down the development and advancement of AI? Almost certainly, but we’ve already seen the chaos that unfettered AI can cause to entire industries. It’s a small price to pay to ask that AI companies prove that they are being ethical and that their work will not damage the livelihood of other people, or that their success will not be born off the backs of other creative endeavors.

    • 𝕱𝖎𝖗𝖊𝖜𝖎𝖙𝖈𝖍
      7 points · edited · 1 month ago

      Fwiw, I’ve been getting called an alarmist for talking about Trump’s and Republican’s fascist tendencies since at least 2016, if not earlier. I’m now comfortably living in another country.

      My point being that people will call you an alarmist for suggesting anything that requires them to go out of their comfort zone. It doesn’t necessarily mean you’re wrong, it just shows how stupid people are.

        • It wasn’t overseas but moving my stuff was expensive, yes. Even with my company paying a portion of it. It’s just me and my partner in a 2br apartment so it’s honestly not a ton of stuff either.

  • @naught101@lemmy.world
    25 points · 1 month ago

    TBH, it’s mostly the corporate control and misinformation/hype that’s the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people’s capacity for critical thinking.

    ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights…

    So yeah, uh… Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where AI is only viable for non-trivial use cases.

  • Rose
    22 points · 1 month ago

    The technology side of generative AI is fine. It’s interesting and promising technology.

    The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible out of the latest hyped tech, laws or social or environmental impact be damned.

    We need legislation to catch up. We also need society to be able to catch up. We can’t let the AI bros continue to foist more “helpful tools” on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.

    • @theherk@lemmy.world
      3 points · 1 month ago

      I agree, but I’d take it a step further and say we need legislation to far surpass the current conditions. For instance, I think it should be governments leading the charge in this field, as a matter of societal progress and national security.

  • @barryamelton@lemmy.ml
    19 points · 1 month ago

    That stealing copyrighted works would be as illegal for these companies as it is for normal people. Sick and tired of seeing them get away with it.

  • @boaratio@lemmy.world
    18 points · edited · 1 month ago

    For it to go away just like Web 3.0 and NFTs did. Stop cramming it up our asses in every website and application. Make it opt in instead of maybe if you’re lucky, opt out. And also, stop burning down the planet with data center power and water usage. That’s all.

    Edit: Oh yeah, and get sued into oblivion for stealing every copyrighted work known to man. That too.

    Edit 2: And the tech press should be ashamed of how much they’ve been fawning over these slop generators. They gladly parrot press releases, claim it’s the next big thing, and generally just suckle at the teat of AI companies.

  • z3rOR0ne
    17 points · 1 month ago

    There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation, to go into in a single post. Suffice it to say, I think AI is an existential threat to humanity, mainly because of who’s controlling it and who’s not.

    We have no regulation on AI; we have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies; we only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies; and even worse, we get nothing in exchange except the promise of increased productivity and quality, and that promise is a lie. AI currently gives you the wrong answer, some half-truth, or some abomination of someone else’s artwork really, really fast… that is all it does, at least for the public currently.

    For the private sector at best it alienates people as chatbots, and at worst is being utilized to infer data for surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever uses it, and AI is one tool amongst many at the disposal of these tech giants.

    AI is exacerbating a knowledge crisis that was already in full swing as both educators and students become less curious about subjects that don’t inherently relate to making profits or consolidating power. And because knowledge is seen as solely a way to gather more resources/power and survive in an ever increasingly hostile socioeconomic climate, people will always reach for the lowest hanging fruit to get to that goal, rather than actually knowing how to solve a problem that hasn’t been solved before or inherently understand a problem that has been solved before or just know something relatively useless because it’s interesting to them.

    There are too many good reasons AI is fucking shit up, and in all honesty what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.

    Here are further resources if you didn’t get enough ranting.

    lemmy.world’s fuck_ai community

    System Crash Podcast

    Tech Won’t Save Us Podcast

    Better Offline Podcast

    • @venusaur@lemmy.worldOP
      1 point · 1 month ago

      I love the passion! Was this always our fate? Can we not adapt like we have so many times in human history?

  • @daniskarma@lemmy.dbzer0.com
    17 points · 1 month ago

    I’m not against it as a technology. I use it personally, as a toy, to have some fun or whatever.

    But what I despise is the forced introduction of it into everything: AI-written articles and forced AI assistants in many unrelated apps. That’s what I want to disappear, the way they force it into so many places.

  • @Treczoks@lemmy.world
    16 points · 1 month ago

    Serious investigation into copyright breaches by AI creators. They ripped off images and texts, even whole books, without the copyright owners’ permission.

    If any normal person broke the laws like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.

    I just ask for the law to be applied to all equally. What a surprising concept…

  • @MisterCurtis@lemmy.world
    16 points · 1 month ago

    Regulate its energy consumption and emissions, for the AI industry as a whole. Any energy used or emissions produced in the effort to develop, train, or operate AI should be limited.

    If AI is here to stay, we must regulate what slice of the planet we’re willing to give it. I mean, AI is cool and all, and it’s been really fascinating watching how quickly these algorithms have progressed. Not to oversimplify it, but a complex Markov chain isn’t really worth the energy consumption that it currently requires.
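    The Markov-chain comparison is easy to make concrete. A classic word-level Markov chain generates text by recording, for each word, which words can follow it, then sampling a random walk; it captures the “next token depends on context” flavor of an LLM at a vanishingly small fraction of the compute. A toy sketch (not how LLMs are built, just the analogy being made):

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    """Random walk: each next word depends only on the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(successors))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, "the"))
```

    The difference is that an LLM conditions on a long context window with billions of learned parameters instead of a single word, and that gap is what the enormous energy budget is paying for.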

    A strict regulation now would be a leg up in preventing any rogue AI or runaway algorithms that would just consume energy to the detriment of life. We need a hand on the plug. Capitalism can’t be trusted to self-regulate. Just look at the energy grabs all the big AI companies have been doing already (xAI’s datacenter, Amazon and Google’s investments into nuclear). It’s going to get worse. They’ll just keep feeding it more and more energy, gutting the planet to feed the machine so people can generate sexy cat girlfriends and cheat on their essays.

    We should be funding efforts to utilize AI more for medical research: protein folding, developing new medicines, predicting weather, communicating with nature, exploring space. We’re thinking too small. AI needs to make us better. With how much energy we throw at it, we should be seeing something positive out of that investment.

    • @medgremlin@midwest.social
      1 point · 1 month ago

      These companies investing in nuclear is the only good thing about it. Nuclear power is our best, cleanest option to supplement renewables like solar and wind, and it has the ability to pick up the slack when the variable power generation doesn’t meet the variable demand. If we can trick those mega-companies into lobbying the government to allow nuclear fuel recycling, we’ll be all set to ditch fossil fuels fairly quickly. (provided they also lobby to streamline the permitting process and reverse the DOGE gutting of the government agency that provides all of the startup loans used for nuclear power plants.)

    • @4am@lemm.ee
      9 points · 1 month ago
      • Trained on stolen ideas: ✅
      • replacing humans who have little to no safety net while enriching an owner class: ✅
      • disregard for resource allocation, use, and pollution in the pursuit of profit: ✅
      • being forced into everything as to become unavoidable and foster dependence: ✅

      Hey wow look at that, capitalism is the fucking problem again!

      God we are such pathetic gamblemonkeys, we cannot get it together.

  • Brave Little Hitachi Wand
    14 points · 1 month ago

    Part of what makes me so annoyed is that there’s no realistic scenario I can think of that would feel like a good outcome.

    Emphasis on realistic, before anyone describes some insane turn of events.

  • 𝕱𝖎𝖗𝖊𝖜𝖎𝖙𝖈𝖍
    13 points · edited · 1 month ago

    I’m perfectly ok with AI, I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!

    It needs extensive regulation, but doing so requires tech literate politicians who actually care about their constituents. I’d say that’ll happen when pigs fly, but police choppers exist so idk