• Sequentialsilence@lemmy.world · +42/-4 · 1 year ago

    Eh, most of the marketing around AI is complete bullshit, but I do use it on a regular basis for my work. Several years ago it would have just been called machine learning, but it saves me hours every day. Is it a magic bullet that fixes everything? No. But is it a powerful tool that helps speed up the process? Yes.

  • FMT99@lemmy.world · +63/-29 · 1 year ago

    Most of the hate is coming from people who don’t really know anything about “AI” (LLMs). Which makes sense: companies are marketing dumb gimmicks to people who don’t need them and who, after the novelty wore off, aren’t terribly impressed by them.

    But LLMs are absolutely going to be transformational in some areas. And in a few years they may very well become useful and usable as daily drivers on your phone and the like; it’s hard to say for sure. But for the moment, both the hype and the hate are just kneejerk reactionary nonsense.

    • Rekorse@sh.itjust.works · +5/-1 · 1 year ago

      I don’t think people want to use AI for artistic reasons. How rewarding is it to tell a machine to do all the hard parts you can’t do yourself or don’t have the patience to do?

      I mean, feel free to do whatever, of course, but AI cannot make art, and someone using AI is not an artist.

    • blazeknave@lemmy.world · +3 · 1 year ago

      I’m completely overtaxed mentally, and I offload so much to it: from reconciling bank statements and sorting game mods, to a homebrew ongoing multiverse starring my son, and which emojis to use in Notion at work.

    • kibiz0r@midwest.social · +4/-1 · 1 year ago

      I dabbled a bit in ML before GPT, and when the most recent hype-rocket launched I did a deep dive into LLMs, and I gotta say…

      None of my hopes or horrors regarding “AI” have changed much along the way.

      It’s pretty much the same thing we’ve been doing since the industrial revolution, which is to try to map human behavior onto mechanical processes so that we can optimize for <whatever> from a quantitative, objective frame of reference.

      GenAI is only unique in that it’s an especially mask-off moment for the ruling technocrats. We are destined to become wetware plugins for a capitalist machine whose goal isn’t even as interesting as turning everything into paperclips. It’s worse than a rogue superintelligence.

    • ipkpjersi@lemmy.ml · +4/-2 · 1 year ago

      I use LLMs to automate the boring parts of my job (programming); it’s literally like outsourcing your work to an intern. I still have to review what is done to make sure it’s correct, but it saves me a ton of time typing things up. If I didn’t have a strong programming background, then yeah, it probably wouldn’t be as useful to me. But then again, you can use it as a learning assistant too, as long as you verify what it is telling you.

  • Matriks404@lemmy.world · +23/-5 · 1 year ago

    I’ve lately tested whether AI can let me practice Russian in a natural-sounding dialogue. While it didn’t sound 100% human (it was too formal and technical), it was good practice.

    So I wouldn’t say that it can’t be used for good things.

  • trainsaresexy@lemmy.world · +15/-4 · 1 year ago

    This post isn’t contributing to a healthy environment in this community.

    Well thought out claim -> good source -> good discussion

  • HappyTimeHarry@lemm.ee · +13/-4 · 1 year ago

    LLMs helped me with coding and debugging A LOT. I’d much rather use AI than have to try to parse Stack Exchange and a bunch of other web forums, or developer documentation directly. AI is incredible when I get random errors and paste them in to say “fix this,” and it does, and it tells me HOW and WHY it did what it did.

    • Excrubulent@slrpnk.net · +19/-4 · 1 year ago

      I keep seeing programmers use this as an example of what LLMs are good for, and I’ve seen other programmers say that the people who do that are bad programmers. The latter makes sense because trusting an LLM to do this is to fundamentally misunderstand what your job is and how the LLM works.

      The LLM can’t tell you HOW or WHY, because it doesn’t know those things. It can only give you an approximation of words that sound like someone explaining HOW and WHY. LLMs have no fidelity.

      It could be completely wrong, and you wouldn’t know because you’ve admitted you’re using the LLM instead of reading the documentation and understanding yourself.

      That is so irresponsible. Just RTFM like good programmers have done forever. It’s not that much work if you get into the habit of it. Slow down, take the time to understand HOW and WHY to do things yourself, and make quality code rather than cranking out bigger volumes of crap that you don’t understand. I’m sure it feels very productive in the moment but you’re probably just creating more work for whoever has to clean up your large quantities of poorly thought out code.

    • pedz@lemmy.ca · +7/-3 · 1 year ago

      And it only consumes the equivalent in electricity of what an American house uses for a few years.

  • unexposedhazard@discuss.tchncs.de · +8 · 1 year ago

    I mean, the students around me who would have failed by now without ChatGPT probably DO want it. But they don’t actually want the consequences that come with it. The academic world will adapt and adjust, kind of like with inflation: you can just print more money, but that won’t actually make everyone richer long term.

  • ClamDrinker@lemmy.world · +17/-10 · edited · 1 year ago

    Yeah… who doesn’t love moral absolutism… The honest answer to all of these questions is: it depends.

    Are these tools ethical or environmentally sustainable:

    AI doesn’t just consist of LLMs, which are indeed notoriously expensive to train and run. Using an image generator, for example, can be done on something as simple as a gaming-grade GPU. And other AI technologies are already so lightweight your phone can handle them. Do we assign the same negativity to gaming, even though it’s just people using electricity for entertainment? Producing a game also costs a lot more than it does for an end user to play it. It’s all about the balance between the two. And yes, AI technologies should rightfully be criticized for being wasteful, such as implementing them in places they have no business in, or forgoing becoming more efficient.

    The ethics of AI are also a deeply nuanced topic with no clear consensus. Nor does every company that works with AI use it in the same way. Court cases are pending, and none have been conclusive thus far. Implying it is one-sided is just incredibly dishonest.

    but do they enable great things that people want?

    This is probably the silliest one of them all, because AI technologies are groundbreaking in medical research. They look pivotal in healing the sick people of tomorrow. And creative AIs allow people who are creative to be more creative. But these uses are ignored, shoved to the side because they don’t fit the “AI bad” narrative, even though we should be acknowledging them and seeing them as allies against the big companies trying to hoard AI technology for themselves. It is those companies that produce problematic AI, not the small artists, creatives, researchers, or anyone using AI ethically.

    but are they being made by well meaning people for good reasons?

    Who, exactly? You must realize there are far more parties than Google, Meta, and Microsoft creating AI, right? Companies and groups you’ve most likely never heard of before are creating open-source AI for everyone to benefit from, not just for those hoarding it for themselves. It’s just incredibly narrow-minded to assign maliciousness to such a large group of people on the basis of what technology they work with.

    Maybe you’re not being negative enough

    Maybe you are not being open-minded enough, or have been blinded by hate. Because this shit isn’t healthy; it’s echo-chamber-level behaviour. I have a lot more respect for people who don’t like AI but base that on rational reasons. There’s plenty that is genuinely bad about AI and has to be addressed, but instead you find yourself in a divide between people comfortable spreading borderline misinformation to get what they want, and genuine people who simply want their voice and concerns about AI to be heard.

      • ClamDrinker@lemmy.world · +7/-4 · edited · 1 year ago

        For real. It’s what I hate about all of this, because infighting pretty much always leads to people being shafted, even when there is plenty to come to agreement about. This kind of one-sided soapboxing does far more harm than good in convincing people.

  • boredsquirrel (he)@slrpnk.net · +6/-1 · 1 year ago

    I am on an internship with like really nice people in a company that does sustainable stuff.

    But they honestly have a list of AI tools they plan to use to make automated presentations… like, wtf?

    • trainsaresexy@lemmy.world · +2 · 1 year ago

      Same at my work, and it’s because upper management has tasked middle managers with finding a way to ‘use AI’. But when the tool solves a business problem, it really is fantastic.

      • boredsquirrel (he)@slrpnk.net · +2 · 1 year ago

        Yes, for sure there are use cases. But there are some things that humans can just do better.

        Presentations? Sure, AI will clutter you with pages, add random pictures, and make a huge presentation. But why add inauthentic stuff and bloat other people’s brains?

        Just don’t use pictures if you prefer it that way.

  • ArchRecord@lemm.ee · +9/-4 · 1 year ago

    I’ve used LLMs to save hours of time reformatting text and old notes and restructuring explanations so I can better understand and share them. I’ve used AI speech-to-text models to transcribe my voice notes, and diffusion models to generate better-quality mockups for designs that were later commissioned at higher quality, with no need for any changes.

    I can understand not liking AI, or not needing it yourself, but acting as if it has no use is frankly ridiculous. You might not use it, but other people do.

    I think this says more about corporations’ attempts to integrate “AI” into everything, instead of it being a user choice, than it does about the technology itself.

  • Dramaking37@lemmy.world · +3 · 1 year ago

    Most AI is being developed to try to sustain the need for content on social networks. The bots are there to make the networks feel lived-in so they can advertise to you. They are running out of people who are willing to give them free content while they make billions off your art. So they just replace the artist.

  • Ok. Been thinking about this, and maybe someone can enlighten me. Couldn’t LLMs be used for code breaking and encryption cracking? My thought is that language has a cadence. So even if you were to scramble it to hell, shouldn’t that cadence be present in the encryption? Couldn’t you feed an LLM a bunch of machine code and train it to take that machine code, look for conversational patterns, and spit out likely dialogues?

    • projectmoon@lemm.ee · +6 · 1 year ago

      That would probably be a task for regular machine learning. Plus proper encryption shouldn’t have a discernible pattern in the encrypted bytes. Just blobs of garbage.

    • nova_ad_vitum@lemmy.ca · +6 · 1 year ago

      Could there be patterns in ciphertext? Sure. But modern cryptography is designed specifically against this, and in particular against patterns like the one you described. Modern cryptographic algorithms that are considered good all have the avalanche effect baked in as a basic design requirement:

      https://en.m.wikipedia.org/wiki/Avalanche_effect

      Basically, with the same encryption key, changing one character in the input text produces a completely different ciphertext. That doesn’t mean there couldn’t possibly be patterns like the one you described, but it makes it very unlikely.
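
      You can see the avalanche effect for yourself. Here’s a minimal sketch using only Python’s standard library; it uses SHA-256 (a hash function rather than a cipher, since the stdlib ships no block cipher) purely to illustrate how changing a few input characters flips roughly half of the output bits:

      ```python
      import hashlib

      def bit_string(digest: bytes) -> str:
          # Render a digest as a string of bits so differing bits can be counted.
          return "".join(f"{byte:08b}" for byte in digest)

      a = hashlib.sha256(b"attack at dawn").digest()
      b = hashlib.sha256(b"attack at dusk").digest()  # only a few characters differ

      # Count how many of the 256 output bits changed between the two digests.
      diff = sum(x != y for x, y in zip(bit_string(a), bit_string(b)))
      print(f"{diff}/256 bits differ")
      ```

      Real ciphers are designed with the same property: flip one plaintext or key bit and each ciphertext bit changes with probability near one half, which is exactly what erases the “cadence” a pattern-matching model could latch onto.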

      More to your point: given the number of people playing with LLMs these days, I doubt LLMs have any special ability to find whatever minute, intentionally obfuscated patterns may exist. We would have heard about it by now. Or… maybe we just don’t know about it. But I think the odds are really low.

  • VirtualOdour@sh.itjust.works · +1/-5 · 1 year ago

    The answer to all these questions is actually yes, but sure, invent absurd conspiracy theories if you want to feel smart or whatever. You’re literally the same as the antivaxxers and 5G-phobes, but whatever, I guess, if it makes you feel superior.

    • brbposting@sh.itjust.works · +5 · 1 year ago

      You made three nice points here. It completely derailed the comment to have that bit in the second to last sentence. Are you open to editing that out so your valid arguments can shine through?

      • areyouevenreal@lemm.ee · +3/-5 · edited · 1 year ago

        If you can’t separate a good point from an inflammatory message that’s your problem.

        I am being this inflammatory for a specific reason: the completely unnuanced anti-AI crowd, most of whom don’t even understand what AI is, need to be told off. They are uninformed people having opinions. It’s fine to be uninformed, and it’s fine to have opinions; doing both together, though, is not recommended. It’s definitely not okay to be uninformed about something but confidently state your opinion anyway.

        I might tone it down slightly, but honestly, I am done with these arguments. Both the AI-hype crowd and the anti-AI crowd are almost equally bad, and as someone who actually works in this area, I am fucking sick of it.

    • Track_Shovel@slrpnk.net (mod) · +4 · 1 year ago

      Hey.

      Be civil or I’ll ban your ass, faster than stable diffusion can draw dick nipples.

      Edit your comment or I’ll remove it.

      Thanks