• wizblizz@lemmy.world (+102/−12 · 2 months ago)

    The fuck are all these comments? AI is shit, fuck AI. It fuels billionaires, destroys the environment, kills critical thinking, confidently tells you to off yourself, praises Hitler, advocates for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing, stop using it.

      • criscodisco@lemmy.world (+31/−5 · 2 months ago)

        LLMs are shit, fuck LLMs. They fuel billionaires, destroy the environment, kill critical thinking, confidently tell you to off yourself, praise Hitler, advocate for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing, stop using it.

        And AI is a pipe dream no one is close to fulfilling, won’t be realized by feeding LLMs all of the data in existence, and billionaires are destroying our economy in their pursuit of it.

      • wizblizz@lemmy.world (+8/−1 · 2 months ago)

        Doesn’t work that way, unfortunately. Ask a person on the street what AI is and they’ll tell you whatever flavor of slop generator they’re familiar with. You’re not going to see much pushback on ML around the Fediverse.

      • Auli@lemmy.ca (+3/−2 · 2 months ago)

        So what is AI, in your opinion? Because LLMs fall under that umbrella.

        • Knock_Knock_Lemmy_In@lemmy.world (+2/−1 · 2 months ago)

          My opinion: AI is a way to improve a computer model’s accuracy over time based on new data.

          I could even argue that ChatGPT etc. are not AI, because the LLMs are not directly learning from the inputs they are receiving.

      • mechoman444@lemmy.world (+8/−17 · 2 months ago)

        Change this out for any other technology that’s been innovated throughout human history: the printing press, semiconductors, the internet.

        The anti-ai rhetoric on this platform is becoming nonsensical.

        At this point it’s just bandwagon hate. These people don’t even understand the difference between LLMs and AI, or the various applications they have.

        • Auli@lemmy.ca (+4/−1 · 2 months ago)

          Sorry, I don’t remember any of those other technologies using so many resources, raising prices for everyone else because they don’t pay the actual cost, and being wrong about stuff.

          • mechoman444@lemmy.world (+2/−4 · edited · 2 months ago)

            They literally killed and excommunicated people after the invention of the printing press for producing unauthorized copies of the Bible. Figures like William Tyndale paid with their lives for translating scripture into English, challenging the Church’s authority.

            There is illicit material circulating freely on Tor, demonstrating that technology can distribute both knowledge and criminal content.

            Semiconductors underpin some of humanity’s most powerful and destructive technologies, from advanced military systems to cyberweapons. They are a neutral tool, but their applications have reshaped warfare and global power dynamics.

            You are fully entitled to dislike AI or technologies associated with it. But to dismiss it entirely is ignorant. Whether you want to believe it or not, we are on the precipice of a technological revolution, the shape of which remains uncertain, but its impact will be undeniable.

        • wizblizz@lemmy.world (+5/−3 · 2 months ago)

          Bullshit, fuck your false equivalency. This tech is good at generating slop, propaganda, and destroying critical thinking. That’s it. It has zero value.

          • Rain World: Slugcat Game@lemmy.world (+0/−1 · 2 months ago)

            i mean, if by “this tech” you mean machine learning in general, then no, it has been used for good purposes(?), but if you mean this tech then absolutely

          • mechoman444@lemmy.world (+2/−4 · 2 months ago)

            Ok. This is clearly rage bait.

            You’re an ignorant fool and I’m probably not the first person to tell you that.

              • mechoman444@lemmy.world (+2/−1 · 2 months ago)

                You know what, fuck you and your bullshit holier than thou attitude.

                You’re a stupid piece of shit that will never amount to anything worthwhile other than being a sweat lord mod on your own Lemmy sub literally called “fuck ai”.

                Literally a sex bot programmed by a Russian propaganda mill has more original thought than you.

                Seriously dude. You’re a cunt.

                • Rain World: Slugcat Game@lemmy.world (+0/−2 · 2 months ago)

                  mmm, not just “propaganda mill”, but “Russian propaganda mill”?
                  oh, and you just looked at their profile after being demolished (with no prejudice)?

          • mechoman444@lemmy.world (+1 · 2 months ago)

            The internet, the printing press, firearms, semiconductors, NFTs, the blockchain.

            A multitude of other technologies.

    • maplesaga@lemmy.world (+6/−10 · 2 months ago)

      Same with the internet. Fuels billionaires, destroys the environment with data centers and cables, kills libraries and textbook research, spreads nazi propaganda. We need to stop using technology in general.

        • maplesaga@lemmy.world (+1/−4 · 2 months ago)

          Let’s see: a standard problem I’m randomly making up, using a free AI. You tell me if this kind of thing can be useful to someone:

          If I have a bucket that is 1 meter tall and 1 meter wide how much volume can it hold?

          The volume V of a cylinder can be calculated using the formula:

          V = πr²h

          where r is the radius and h is the height.

          In this case, the bucket is 1 meter tall and 1 meter wide, which means the diameter is 1 meter. Therefore, the radius r is:

          r = 1 m / 2 = 0.5 m

          Now substituting the values into the volume formula:

          V = π(0.5 m)²(1 m) = π(0.25 m²)(1 m) ≈ 0.7854 m³

          Thus, the volume the bucket can hold is approximately 0.785 cubic meters.
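          For what it’s worth, the arithmetic in the answer above is trivial to verify without an LLM; a minimal Python sketch (my illustration, not part of the thread):

```python
import math

def cylinder_volume(diameter_m: float, height_m: float) -> float:
    """Volume of a cylinder, V = pi * r^2 * h, with r = diameter / 2."""
    radius = diameter_m / 2
    return math.pi * radius ** 2 * height_m

# The bucket from the question: 1 m wide, 1 m tall.
print(round(cylinder_volume(1.0, 1.0), 4))  # 0.7854
```

          A calculator or a search engine's built-in math will give the same number, which is the commenters' point below.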

          • Tartas1995@discuss.tchncs.de (+7 · 2 months ago)

            Using LLMs for math questions is probably the worst use for them.

            And all of this is easily calculated without ai. You can literally google it and let google do the math for you without ai.

            • maplesaga@lemmy.world (+2/−1 · 2 months ago)

              Perhaps you’re right, though the AI also allows natural language or voice input, and further explanations.

              When you visualize a cylinder, think of stacking many thin circular disks (each with a height Δh) to build up the height h. The volume of each individual disk is its area πr² multiplied by its infinitesimally small height Δh. When you aggregate these over the full height h, you arrive at the volume of the cylinder.
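              The disk-stacking picture in that explanation can be mimicked numerically; a short Python sketch (my own illustration, assuming the same 0.5 m radius and 1 m height as the bucket above):

```python
import math

# Approximate the cylinder's volume by summing N thin disks of height dh;
# each disk contributes area * thickness = pi * r^2 * dh.
r, h, n_disks = 0.5, 1.0, 10_000
dh = h / n_disks
volume = sum(math.pi * r ** 2 * dh for _ in range(n_disks))

print(round(volume, 4))  # agrees with the closed form pi * r^2 * h
```

              Since r is constant here, the sum collapses to the closed-form πr²h; the same stacking idea is what makes the integral work for shapes whose cross-section varies with height.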

              It’s also eroding all the bullshit we used to do, like cover letters and things that had no reason to exist besides wasting someone’s time. So truth be told, I’m a fan, even if it is a massively unprofitable bubble. I also recognize its limitations, given its hallucinations, so I understand it shouldn’t be relied upon for useful work.

              • Tartas1995@discuss.tchncs.de (+2 · 2 months ago)

                I won’t argue about the value of explanation from a lying hallucinating machine.

                But I like how your use case amounts to: “it does the things that I believe to be useless and time-wasting for everyone involved. But instead of pushing for the end of these time-wasting acts, I waste a little less time with LLMs (instead of no time at all, by not doing them), while still wasting the reader’s time.” What an efficient use case! We should violate IP law and waste drinking water and energy for it!

                • maplesaga@lemmy.world (+2 · edited · 2 months ago)

                  The problem is many people liked how it was: it makes more work to do, makes it seem official. I believe in that book Bullshit Jobs, and think most people are winging it with performative bullshit.

                  What I saw recently at my work is people received something that looked like AI slop from the head boss and they laughed about it, which got back to the boss, who then defended himself that it wasn’t AI.

                  So I’m hopeful that people are called out for wasting people’s time, and that long-winded blobs of meaningless text become a firable offense.

                • Rain World: Slugcat Game@lemmy.world (+0/−1 · 2 months ago)

                  “IP law”? IP is not a good term. it suggests that ideas are property??
                  you see companies mining everything possible, such that dirt just goes <poof>! and your first thought is “oh no, the landowners had the right to those minerals! they should’ve bought a permit first!”(?)

            • maplesaga@lemmy.world (+3/−1 · 2 months ago)

              That’s well put. I’m under no naive assumption that LLMs are AI. Though I do think you’re discounting the usefulness, as it did give the right answer, which is a fine use for average people doing basic math or whatever project they’re working on. I’m under no delusion that it’s replacing workers, unless someone’s job is writing fancy emails or building spreadsheets, and I do still think it’s a massive bubble.

    • mechoman444@lemmy.world (+6/−12 · 2 months ago)

      The fuck are all these comments? The internet is shit, fuck the internet. It fuels billionaires, destroys the environment, kills critical thinking, confidently tells you to off yourself, praises Hitler, advocates for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing it. Stop using it.

    • But_my_mom_says_im_cool@lemmy.world (+8/−14 · 2 months ago)

      Which ai and for which use? It’s a tool. It’s like getting mad cause a guy invented a hammer. It’s not the tool hurting you dude, it’s the people wielding it.

      • starelfsc2@sh.itjust.works (+13/−2 · 2 months ago)

        If that hammer also had massive environmental impacts and hammers were pushed into every aspect of your life while also stealing massive amounts of copyrighted data, sure. It’s very useful for problems that can be easily verified, but the only reason it’s good at those is from the massive amount of stolen data.

        • Bytemeister@lemmy.world (+2/−2 · 2 months ago)

          Arguably, hammers also have a massive impact on the environment. They are also part of everyday life. Building you live in? Built using a hammer. New sidewalk? Old one came out with an automatic hammer. Car? Bet there was a type of hammer used during assembly. You can’t escape the hammer. Stop running. Accept your inner hammer. Embrace the hammer, become the hammer. Hammer on.

        • But_my_mom_says_im_cool@lemmy.world (+2/−3 · 2 months ago)

          All those things you said are vague and nebulous, and everyday people are not going to understand that message and will just think you’re hysterical or a conspiracy guy. The way the message is put forward is super important.

          • starelfsc2@sh.itjust.works (+1 · 2 months ago)

            If an everyday person asked me why I don’t like ai I would show them those reasons in more depth, but on lemmy most people have seen the articles about ai water use and the light/noise/water pollution of data centers.

    • TractorDuffy@lemmy.world (+4/−14 · 2 months ago)

      It’s the same as any other commercial tool. As long as it’s profitable the owner will continue to sell it, and users who are willing to pay will buy it. You use commercial tools every day that are harmful to the environment. Do you drive? Heat your home? Buy meat, dairy or animal products?

      I honestly don’t know where this hatred for AI comes from; it feels like a trend that people jump onto because they want to be included in something.

      AI is used in radiology to identify birth defects, cancer signs and heart problems. You’re acting like its only use-case is artwork, which isn’t true. You’re welcome to your opinion but you’re welcome to consider other perspectives as well. Ciao!

      • hardcoreufo@lemmy.world (+7/−1 · 2 months ago)

        The use in radiology is not a good thing. Hospitals are cutting trained technicians and making the few they keep double-check more images per day as a backup for AI. If they were just using it as an aid, and the humans were still analyzing the same number of pictures, that would be fine. But capitalism sees a way to save a buck, and people will die as a result.

        • ClamDrinker@lemmy.world (+2/−3 · 2 months ago)

          This isn’t a problem with AI though, it’s a problem with the people cutting trained technicians. In places where such incompetent people don’t decide that, you get the same number of trained technicians accepting (and being a part of) a change that gives them slightly more accurate findings, resulting in lives being saved overall. Which is typically what health workers want to begin with.

      • Auli@lemmy.ca (+2 · 2 months ago)

        OK, so why is AI so big right now? Because it isn’t profitable. Even their most expensive tier is losing them money. Then you have the data centers getting breaks on electricity, so the rest of us see our costs go up to make up the difference. Where is this magical profitability that is driving AI?

      • ClamDrinker@lemmy.world (+4/−7 · 2 months ago)

        It’s in part because people aren’t open to contradictions in their world view. In part you can’t blame them for that since everyone has their own valid perspective. But staying willingly ignorant of positives and gray areas is a valid criticism. And sadly there are plenty of influencers peddling a black-white mindset on AI, ignoring all other uses. Not saying intentionally or not, again perspective. I’m sure online content creation has to contend with a lot more AI content compared to the norm. But only on the internet do I encounter rabidly anti AI people, in real life basically nobody cares. Some use it, some don’t, most do so responsibly as a tool. And I work in the creative industry…

        • CreativeShotgun@lemmy.world (+11/−2 · 2 months ago)

          “I’ve never seen it it must not exist”

          I work in a creative industry too and it is the bane of not only my group but every other company I’ve spoken to. Every artist and musician I know hates it too.

          • ClamDrinker@lemmy.world (+3/−6 · 2 months ago)

            I never said it doesn’t exist. I’m sorry people in your area are being negatively affected if so. But the point still stands. My experience is just as valid.

        • SLVRDRGN@lemmy.world (+4 · 2 months ago)

          Even before our current time, “nobody cares” is not a thermostat reading of what “really matters”. It almost sounds like you believe people know what’s best for themselves, when the truth of the matter is that humanity has long proved otherwise.

          • WoodScientist@lemmy.world (+4 · 2 months ago)

            You sound like a cartoon supervillain, Lex Luthor ranting to Superman about the common animals not knowing what’s best for themselves.

          • ClamDrinker@lemmy.world (+1/−2 · 2 months ago)

            I don’t believe that. What I’m saying is that the people I work with all look very critically and skeptically at the world, as that’s pretty much an inherent requirement in the creative industry. We all know what AI is and what it does, and most arguments against it hold no water to people with a realistic view of the industry, to the point that it simply cannot be as black and white as some claim it to be.

            There are a few good reasons to dislike AI, but those don’t apply to all of AI. Some are value-based, and other people have other values that are equally valid. And some can be avoided entirely, like how you could ship packages with a coal rocket instead of an electric train, or just ship fewer packages to begin with.

            There is trust and experience between one another in the industry that we aren’t using it unnecessarily, wastefully, or incorrectly, and AI is not anywhere near a requirement for consumers or healthy-minded businesses.

        • hardcoreufo@lemmy.world (+4 · 2 months ago)

          I’m pretty anti-AI, as it is a tool of the billionaire class to enslave the masses. Look up TESCREAL; it’s the digital eugenics that billionaires and fringe philosophers believe in, and it is the driving force in the AI push.

          That being said, I can see a use for a focused, local LLM/AI assistant. I have to search a lot of confidential technical manuals, schematics and trust cases in my job. We are thinking about testing out Ollama, uploading all our documents to it to make searching them easier.

          • ClamDrinker@lemmy.world (+1/−1 · 2 months ago)

            You are the exact person I didn’t mean 😄 the first is a very valid reason to dislike AI.

        • Auli@lemmy.ca (+3 · 2 months ago)

          Look up the dot-com bubble. We still have the internet. Just because AI is over-hyped and in a bubble doesn’t mean it won’t still have uses.

          • ClamDrinker@lemmy.world (+2/−1 · 2 months ago)

            I fully agree. I still remember the time when using Photoshop was seen by some as not being a “real artist”, because “any idiot with a mouse can draw now”. I’m not under any illusion this will last forever; the negative sentiment is boiling because of the bubble and its negative externalities, not the technology itself. So once that bursts, things will hopefully be a lot more peaceful.

            • Rain World: Slugcat Game@lemmy.world (+0 · 2 months ago)

              machine learning can be useful in limited cases, but is not to be trusted. agentic ai has to go. computers are not creatures, and thinking otherwise is a bad mistake.

          • reksas@sopuli.xyz (+1 · 2 months ago)

            They have some kind of plan, or maybe it’s all a sunk-cost scenario. Either way, they think they can get some benefit from it, and they are so determined they are throwing insane amounts of money at it even though there is no clear way to get any profit from it. So either they know something we don’t, or they are desperate to save their investments. The worse AI does, the better it is for all of us: once AI crashes, components stop being wasted on it, less electricity and material is wasted on data centers, and best of all, those fucking billionaires lose a lot of the money they have invested, or at least the investors who thought it a good idea to support them lose, and maybe don’t do it again.

            • merc@sh.itjust.works (+1 · 2 months ago)

              Just because they have a plan doesn’t mean it’s a good one or that it will work.

              AI doesn’t fuel billionaires, it drains their money.

              • reksas@sopuli.xyz (+1 · 2 months ago)

                Yes, but I don’t think billionaires are THAT dumb. They see some value in it for them that they deem worth the risk of losing all that money. So that is why it’s even more important that the AI crap fails and continues to drain their money.

                Or maybe I’m underestimating just how much money they have, and all this is just akin to losing a large portion that doesn’t matter because they can exploit everything else. But if they get what they want, then it’s bad for all of us no matter what.

        • merc@sh.itjust.works (+1 · 2 months ago)

          Everyone’s losing money on the deal, it’s not like the billionaires are cleverly making money on AI while everyone else is losing money.

  • lauha@lemmy.world (+51/−1 · 2 months ago)

    Do I love my 4-year-old? Yes

    Would I let my precocious 4-year-old full of imagination write my business report? Fuck no. Are you stupid or what?

      • WhatAmLemmy@lemmy.world (+9 · 2 months ago)

        If you’ve ever worked with consultants or managers in general, like 50-75% of them are fucking stupid. Just because they can convince other idiots that they’re not, doesn’t mean they aren’t. I’ve watched the blind lead the blind into financial ruin, while getting paid big bucks to do it.

        • nlgranger@lemmy.world (+5 · 2 months ago)

          I don’t believe the people who contract them are being duped though. They do it to delegate and dilute the chain of responsibility until their decisions become acceptable.

          • merc@sh.itjust.works (+3 · 2 months ago)

            I think it’s a bit of both. Sometimes the people hiring them are truly clueless. The kinds of reports that management consultants make seem really well thought out and intelligent. Other times, upper management wants to make a big decision, and they think it’s the right one, but they need something to show they considered all the alternatives and that an outside source agrees with them.

            Also, management consultants are very stupid, but they’re clever in a very narrow area. That’s why they succeed with upper management, because like LLMs, upper managers think they’re clever.

        • StinkyFingerItchyBum@lemmy.ca (+2 · 2 months ago)

          As a former consultant and manager, I wholeheartedly agree, but your % is too low. The culture of consulting is poison and makes monsters out of people.

  • fenrasulfr@lemmy.world (+24 · edited · 2 months ago)

    The fact is, though, the average person is starting to replace their search engine with ChatGPT, Gemini, Grok or whatever other LLM, and I have seen more and more small associations using generative AI to make their posters instead of working with artists or doing it themselves.

    • barryamelton@lemmy.world (+9 · 2 months ago)

      A good blogpost on this: The Enclosure feedback loop

      When everybody uses AI to search, it becomes a closed system that holds all info. Doesn’t need to be productive, but it gatekeeps the knowledge that was free on the internet. It’s a self-reinforcing loop.

    • Tangentism@lemmy.ml (+6 · 2 months ago)

      I work in infrastructure, and what’s concerning is that younger guys are skipping learning to script to automate processes, and instead just taking slop from LLMs with no idea what it’s doing.

      Some have also relegated learning problem solving to it as well so when things go wrong, they’re clueless without it.

    • NocturnalMorning@lemmy.world (+5 · 2 months ago)

      Google started making their search engine worse and always pushing things that they thought would make them money. It’s not surprising people are trying something else.

  • nucleative@lemmy.world (+30/−7 · 2 months ago)

    People around me use AI all the time to get answers to generalized topics. More and more they use it like a search engine / information augmentation system.

    They are not technical people. They mostly know that the information needs to be double checked and might be wrong. But usually take it at face value if the importance is low.

    Honestly this is about what they did before. They would search Google, click on the first blog, skim it, and repeat until getting some answer they believe.

    I too use AI regularly for brainstorming, quickly summarizing massive text messages, and reformatting text from a jumbled mess into something more cohesive, etc.

    I don’t love it or hate it. In some cases it saves a lot of time and is a useful tool. In other cases it outputs trash that we cannot use for any serious case.

    Just like a hammer or a shovel, it’s a tool. Can be used the right way and it can be used the wrong way.

    • wonderingwanderer@sopuli.xyz (+4 · 2 months ago)

      It can be helpful for quickly summarizing a vast body of knowledge or a highly complex topic, to get a general overview and see which strings to pull further, as long as you don’t take everything at face value and understand that you still need to pull those strings yourself in order to acquire an understanding.

      Like, if I suddenly wanted to learn computer programming, I wouldn’t know where to start. But querying an LLM can give me a general idea, define a few key terms and explain the difference between related concepts, without me having to browse through a hundred different tech blogs to answer all my questions in terms I can understand.

      But I wouldn’t suddenly think I’m a computer programmer after doing that. I would have a better idea of where to start learning. I would be able to decide whether to focus first on object-oriented programming or functional programming, static or dynamic typing, declarative or imperative syntax, etc., instead of getting overwhelmed from the start just trying to learn the differences between those concepts.

      It can also suggest resources for further learning, books or websites written by humans, links to open-source software that does what I’m trying to do, etc.

      I wouldn’t expect it to write code for me, but it can be an efficient aid to self-learning and show me what programs and libraries to use for my intended purpose.

      Or for astrophysics, for example. I wouldn’t expect it to give me an accurate breakdown of the engineering specs required to build a pair of O’Neill cylinders at a Lagrange point, but it can suggest software for rendering prototypes or for simulating the forces that need to be accounted for.

      That wouldn’t make me an astrophysicist, but it’s kind of cool that you don’t need to be one to learn about this stuff and tinker around in a field that’s so vast and technical as to be otherwise prohibitive for non-experts.

      It also depends on the LLM of course. I think Mistral and Lumo are generally pretty okay at doing what I described above. Their algorithms aren’t corrupted by american venture capital, at least, so they have more incentive to give you an accurate response rather than being sycophantic and hugboxing.

    • aln@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      9
      ·
      2 months ago

      I’m sorry, but all the use cases you listed show that you’re just lazy. Stop it. It’s embarrassing.

      • nucleative@lemmy.world
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        2
        ·
        2 months ago

        I’m lazy as fuck. I want to solve problems in the easiest way humanly possible. With the least amount of effort output.

        What about you? Do you take the hard way?

        • howrar@lemmy.ca
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          2 months ago

          Do you not cross reference multiple archived news articles and seek out past attendees to remind yourself of what Britney Spears wore at her last concert? smh

        • aln@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          6
          ·
          2 months ago

          I’ll be real with you, I typed lazy but wanted to type idiot. Read your fucking emails, Jesus Christ. You still have to check all the shit generative AI writes because it lies constantly. Its very nature means it doesn’t understand what it’s generating.

          • nucleative@lemmy.world
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            2
            ·
            2 months ago

            Hard to tell if you’re trolling or trying to add value to the conversation and just missing it.

            A hammer doesn’t know what it is building but it is still useful.

            This is the nature of tools: for some they improve output, for some they don’t.

            • petrol_sniff_king@lemmy.blahaj.zone
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              2 months ago

              Everyone’s a god damn tool philosopher.

              Personally, I’m fine with banning cigarettes regardless of how responsibly my dead grandpa may have used them.

          • howrar@lemmy.ca
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            2 months ago

            Obviously, don’t rely on them to read important emails for you. But so many things don’t need additional checking. We’ve all done at least a decade of schooling. We all know basic math, science, and history. When we forget things, all it takes is a small reminder to get it back. Our brains are capable of recognizing whether we’ve seen something before or not. We’re also capable of reasoning to determine whether something we read is consistent with everything else we know.

            So many other things are also so unimportant that it doesn’t matter at all if you’re wrong. For example, some actor looks familiar, it lies to you about what film they were in, and you believe it. Is your life any worse off for it?

            • petrol_sniff_king@lemmy.blahaj.zone
              link
              fedilink
              English
              arrow-up
              1
              ·
              2 months ago

              it lies to you about what film they were in, and you believe it. Is your life any worse off for it?

              I think a better question is: why, then, am I asking it questions?

              If I had a friend I knew was a notorious liar, I would—big chess move—simply stop asking him who actors are. Unless it was really funny.

              • howrar@lemmy.ca
                link
                fedilink
                English
                arrow-up
                1
                ·
                2 months ago

                If it’s a liar that lies every time or most of the time, then yeah, don’t bother.

                why […] am I asking it questions?

                I can’t actually think of any specific scenario where something is unimportant enough to not matter but important enough that you’d ask. What I was originally thinking of were actually scenarios where I planned to verify the information at a later time, but I mistook that in my head as not verifying it.

    • brucethemoose@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      arrow-down
      1
      ·
      2 months ago

      The research/tinkerer community overwhelmingly agrees. They were making fun of Tech Bros before chatbots blew up.

  • foliumcreations@lemmy.world
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    2
    ·
    2 months ago

    I have made the conscious decision to try and not refer to it as AI, but predictive LLM or generative mimic models, to better reflect what they are. If we all manage to change our vernacular, perhaps we can make them slightly less attractive to use for everything. Some might even feel less inclined to brag about using them for all their work.

    Other options might be unethical guessing machines, deceptive echo models, or the classic from Wh40k Abominable Intelligence.

    • wonderingwanderer@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      edit-2
      2 months ago

      I mostly agree. Machine Learning is AI, and LLMs are trained with a specific form of Machine Learning. It would be more accurate to say LLMs are created with AI, but themselves are just a static predictive model.

      And people also need to realize that “AI” doesn’t mean sentient or conscious. It’s just a really complex computer algorithm. Even AGI won’t be sentient, it would only mimic sentience.

      And LLMs will never evolve into AGI, any more than the Broca’s and Wernicke’s areas can be adapted to replace the prefrontal cortex, the cingulate gyrus, or the vagus nerve.

      Tangent on the nature of consciousness:

      The nature of consciousness is philosophically contentious, but science doesn’t really have any answers there either. The “Best Guess™” is that consciousness is an emergent property of neural activity, but unfortunately that leads to the delusion that “If we can just program enough bits into an algorithm, it will become conscious.” And venture capitalists are milking that assumption for all it’s worth.

      The human brain isn’t merely electrical though, it’s electrochemical. It’s pretty foolish to write off the entire chemical aspect of the brain’s physiology and just assume that the electrical impulses are all that matter. The fact is, we don’t know what’s responsible for the property of consciousness. We don’t even know why humans are conscious rather than just being mindless automatons encased in meat.

      Yes, the brain can detect light and color, temperature and pressure, pleasure and pain, proprioception, sound vibrations, aromatic volatile gasses and particles, chemical signals perceived as tastes, other chemical signals perceived as emotions, etc… But why do we perceive what the brain detects? Why is there even an us to perceive it? That’s unanswerable.

      Furthermore, where are “we” even located? In the brainstem? The frontal cortex? The corpus callosum? The amygdala or hippocampus? The pineal or pituitary gland? The occipital, parietal, or temporal lobe? Are “we” distributed throughout the whole system? If so, does that include the spinal cord and peripheral nervous system?

      Where is the center of the “self” responsible for the perception of “selfhood” and “self-awareness”?

      Until science can answer that, there is no path to artificial sentience, and the closest approximation we have to an explanation for our own sentience is simply Cogito Ergo Sum: I only know that I am sentient, because if I wasn’t then I wouldn’t be able to question my own sentience and be aware of the fact that I am questioning it.

      Why digital circuits will never be conscious:

      The human brain has about 86 billion neurons. The average commercial API-based LLM already has about 150 billion parameters, and at standard FP32 precision that’s already four bytes per parameter. If all it took were a complex enough system of digits, it would have already worked.

      It’s just as likely that consciousness doesn’t emerge from electrochemical interactions, but is an inherent property of them. If every electron was conscious of its whirring around, we wouldn’t know the difference. Perhaps when enough of them are concerted together in a common effort, their simple form of consciousness “pools together” to form a more complex, unitary consciousness just like drops of water in a bucket form one pool of water. But that’s just pure speculation. And so is emergent consciousness theory. The difference is that consciousness as a property rather than an effect would explain why it seems to emerge from complex enough systems.

      • Knock_Knock_Lemmy_In@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 months ago

        It’s just a really complex computer algorithm

        Not particularly complex. An LLM is:

        $P_\theta(x) = \prod_t \text{softmax}(f_\theta(x_{<t}))_{x_t}$

        where $f_\theta$ is a deep Transformer trained by maximum likelihood.
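        For anyone curious, that factorization is easy to sketch in code: the model assigns one score (logit) per vocabulary item given the prefix, softmax turns the scores into a probability distribution, and the sequence probability is the product of the probabilities given to each actual next token. A toy sketch with made-up logits standing in for the Transformer, not a real model:

        ```python
        import math

        def softmax(logits):
            # Subtract the max for numerical stability, then normalize to sum to 1.
            m = max(logits)
            exps = [math.exp(x - m) for x in logits]
            total = sum(exps)
            return [e / total for e in exps]

        def sequence_prob(logits_per_step, token_ids):
            # P(x) = prod_t softmax(f(x_<t))[x_t]
            p = 1.0
            for logits, tok in zip(logits_per_step, token_ids):
                p *= softmax(logits)[tok]
            return p

        # Toy example: vocabulary of 3 tokens, sequence of 2 tokens.
        # These logits are invented; a real f_theta is the Transformer itself.
        logits_per_step = [[2.0, 0.5, -1.0],   # scores before seeing any token
                           [0.1, 3.0, 0.2]]    # scores after seeing token 0
        p = sequence_prob(logits_per_step, [0, 1])
        ```

        Training by maximum likelihood just means adjusting the model’s weights so this product is as high as possible on the training text.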

        • wonderingwanderer@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          2 months ago

          That “deep Transformer trained by maximum likelihood” is the complex part.

          Billions of parameters spread over a dozen or more layers, each layer with its own hidden dimension and multiple attention heads. Every parameter’s weight is algorithmically adjusted during training. For every query, matrix multiplications over multiple vectors approximate the relevancy between tokens, with possibly tens of thousands of tokens held in cache at a time, each one analyzed relative to every other.

          And in the standard FP32 format, each parameter requires four bytes of memory. Even 8-bit quantization requires one byte per parameter. That’s 12-24 GB of RAM for a model considered small, in the most efficient format that’s still even remotely practical.

          Deep transformers are not simple systems, if they were then it wouldn’t take such an enormous amount of resources to fully train them.
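          To make the “matrix multiplication on multiple vectors to approximate relevancy between tokens” concrete, here is scaled dot-product attention, the core operation repeated per head and per layer, plus the bytes-per-parameter arithmetic. A pure-Python toy with tiny made-up vectors, nothing like an efficient or real implementation:

          ```python
          import math

          def softmax(row):
              m = max(row)
              exps = [math.exp(v - m) for v in row]
              s = sum(exps)
              return [e / s for e in exps]

          def attention(Q, K, V):
              """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
              d = len(Q[0])
              out = []
              for q in Q:
                  # Relevancy score of this query against every key.
                  scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
                  weights = softmax(scores)
                  # Each output is a relevancy-weighted mix of the value vectors.
                  out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
              return out

          # Three tokens, dimension 2, with Q = K = V for simplicity.
          X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
          mixed = attention(X, X, X)

          # The memory arithmetic from above, using the ~150B figure:
          params = 150e9
          fp32_gb = params * 4 / 1e9   # 4 bytes per parameter at FP32
          int8_gb = params * 1 / 1e9   # 1 byte per parameter at 8-bit
          ```

          A real model does this for every head in every layer on every query, which is where the resource bill comes from.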

          • Knock_Knock_Lemmy_In@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            2 months ago

            The technical implementation, computational effort and sheer volume of training data is astounding.

            But that doesn’t change the fact that the algorithm is pretty simple. Deepseek is about 1,400 lines of code across 5 .py files

          • maplesaga@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            2 months ago

            You’re really breaking the shitting on AI vibe when you make it sound like the height of human capacity and ingenuity. Can I just call it slop and go back to eating glue?

            • wonderingwanderer@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              1
              ·
              2 months ago

              You can still shit on AI, just because it’s computationally complex doesn’t make it the greatest thing ever. It still has a lot of problems. In fact, one of the main problems is its consumption of resources (water, electricity, RAM, etc.) due to its computational complexity.

              I’m not defending AI companies, I just think characterizing LLMs as “simple” is misleading.

              • maplesaga@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                2 months ago

                Our whole economy is geared to consume resources; we have inflation targeting to prevent aggregate demand and prices from ever falling. If you want to lower consumption, you need hard currency. The cheap cash the AI companies are riding on now is most likely still Covid stimulus and QE.

                • wonderingwanderer@sopuli.xyz
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  2 months ago

                  And speculation. Venture capitalists think they can create money by betting funds they predict they’ll have in the future. It’s how this circular Ponzi scheme between Nvidia and OpenAI is holding itself up for now.

                  Those huge numbers that they count in their net worth don’t really exist. It’s money that’s been pledged by a different company based on money they pledged to that company in the first place. It’s speculation all the way down.

                  They’re hoping for a pay-off, but it’s a bubble of sunken costs kicking the can down the road for as long as they can before it bursts.

    • cloudy1999@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 months ago

      “Asking one’s chat bot” sounds so much less impressive than “leveraging AI”. Using the right language throws some cold water on the corporate narrative.

    • mechoman444@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 months ago

      The Men of Iron are so freaking cool! They’re still around in modern 40k, hiding and biding their time.

      Maybe one day we’ll have a whole new army of AIs in 40k!

  • vane@lemmy.world
    link
    fedilink
    English
    arrow-up
    12
    arrow-down
    1
    ·
    2 months ago

    That was December 2024.

    McKinsey & Company consulting firm has agreed to pay $650 million to settle a federal investigation into its work to help opioids manufacturer Purdue Pharma boost the sales of the highly addictive drug OxyContin, according to court papers filed in Virginia on Friday.

    Drug dealer must sell drugs.

  • phx@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    ·
    2 months ago

    It’s the same as the crypto-blockchain-NFT bullshit. A bunch of idiots with too much money put down on it, then when it doesn’t become the hit they expect they start with the propaganda about how it’s the greatest thing, and then when THAT fails they just take away other choices or try to cram it into everything anyhow

    • Jyek@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      3
      ·
      2 months ago

      The problem is that the propaganda is working. Despite what this meme implies, many many people do use and like AI chat bots and in my line of work, I am asked nearly daily which AI is the best to use and how users can have their own AI that answers emails or mocks up ideas or how it can make their daily job easier. I’m the wrong person to ask that to but I understand why they’re asking me. I’m their IT guy. I don’t particularly care if you use AI in your job because my job is just to make sure your computer keeps working.

  • HugeNerd@lemmy.ca
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    1
    ·
    2 months ago

    The mudslide of AI slop on YouTube is like digital gangrene, the brainrot has gone down the stem into the organs. We’re done as a species.