• @Bell@lemmy.world · 148 · 1 year ago

    Take all you want; it will only take a few hallucinations before no one trusts LLMs to write code or give advice.

    • @sramder@lemmy.world · 67 · 1 year ago

      […] will only take a few hallucinations before no one trusts LLMs to write code or give advice

      Because none of us have ever blindly pasted some code we got off Google and crossed our fingers ;-)

      • Avid Amoeba · 67 · 1 year ago (edited)

        It’s way easier to figure that out than to check ChatGPT hallucinations. There’s usually someone pointing out why an answer on SO is wrong, either in another answer or in a comment. You can filter out most of the garbage right at that point, without having to put it in your codebase and discover the problem the hard way. You get none of that information with ChatGPT. The data it spits out is not equivalent.

        • deweydecibel · 26 · 1 year ago

          That’s an important point, and it ties into the way ChatGPT and other LLMs take advantage of a flaw in the human brain:

          Because it impersonates a human, people are inherently more willing to trust it, to think it’s “smart”. It’s dangerous how people who don’t know any better (and many who do) will defer to it, consciously or unconsciously, as an authority and never second-guess it.

          And because it’s a one-on-one conversation, with no comment section and no one else looking at the responses to call them out as bullshit, the user just won’t second-guess it.

      • @Hackerman_uwu@lemmy.world · 4 · 1 year ago

        When you paste that code, you do it in your private IDE, in a dev environment, and you test it thoroughly before handing it off to the next person to test before it goes to production.

        Hitting up ChatGPT for the answer to a question that you then vomit out in a meeting as if it’s knowledge is totally different.

        • @sramder@lemmy.world · 2 · 1 year ago

          Which is why I used the former as an example and not the latter.

          I’m not trying to make a general case for AI-generated code here… just poking fun at the notion that a few errors will put people off using it.

      • @Seasm0ke@lemmy.world · 2 · 1 year ago

        Split a segment of data without PII off to a staging database, test the pasted script, then completely rewrite the script over the next three hours.
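
        A minimal sketch of that first step, assuming pandas and SQLAlchemy; the connection strings, table, and PII column names here are hypothetical:

        ```python
        # Sketch: copy a segment of production data to staging with PII dropped.
        import pandas as pd
        from sqlalchemy import create_engine

        PII_COLUMNS = ["name", "email", "phone"]  # assumed PII fields

        prod = create_engine("postgresql://prod-host/app")        # placeholder DSNs
        staging = create_engine("postgresql://staging-host/app")

        # Pull a bounded segment, strip PII, and load it into staging
        # so the pasted script can be tested safely.
        segment = pd.read_sql("SELECT * FROM orders LIMIT 10000", prod)
        segment.drop(columns=PII_COLUMNS, errors="ignore").to_sql(
            "orders", staging, if_exists="replace", index=False
        )
        ```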

      • @Cubes@lemm.ee · 4 · 1 year ago

        If you use LLMs in your professional work, you’re crazy

        Eh, we use Copilot at work and it can be pretty helpful. You should always check and understand any code you commit to any project, so if you just blindly paste flawed code (like with Stack Overflow), that’s kind of on you for not understanding what you’re doing.

        • @Spedwell@lemmy.world · 2 · 1 year ago

          The issue on the copyright front is the same kind of professional standards and professional ethics that should stop you from just outright copying open-source code into your application. It may be very small portions of code, and you may never get caught, but you simply don’t do that. If you wouldn’t steal a function from a copyleft open-source project, you shouldn’t use that function when Copilot suggests it. Idk if Copilot has added license tracing yet (it’s been a while since I used it), but absent that feature you are entirely blind to the extent to which its output infringes on licenses. That’s a huge legal liability for your employer, and an ethical coin flip.


          Regarding understanding of code, you’re right. You have to own what you submit into the codebase.

          The drawback/risk of using LLMs or Copilot is more to do with the fact that they generate the statistically likely code, which means the output is biased toward whatever common, unnoticeable bugged logic exists in the average GitHub repo they trained on. At some point it will give you code you read and say “yep, looks right to me” that actually has a subtle buffer overflow issue, or actually fails in an edge case, precisely because the mistake is just unnoticeable enough.
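
          To make that concrete, here’s a hypothetical Python example (mine, not from the article) of the kind of plausible-looking code being described, with exactly the sort of off-by-one a reviewer tends to wave through:

          ```python
          def moving_average(values: list[float], window: int) -> list[float]:
              """Average over a sliding window -- looks right at a glance."""
              averages = []
              # BUG: the range stops one window early, silently dropping the
              # final window. The correct bound is len(values) - window + 1.
              for i in range(len(values) - window):
                  averages.append(sum(values[i:i + window]) / window)
              return averages

          print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5] -- the final 3.5 is lost
          ```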

          And you can make the argument that it’s your responsibility to find that (it is). But I’ve seen examples thrown around on Twitter of just slightly bugged loops; I’ve seen examples of it replicating known vulnerabilities; and we have that package-name fiasco in the first article above.

          If I ask myself “would I definitely have caught that?”, the answer is only a maybe. If it replicates a vulnerability that existed in open-source code for years before it was noticed, do you really trust yourself to identify it the moment Copilot suggests it to you?

          I guess it all depends on the stakes, too. If you’re generating buggy JavaScript, who cares.

      • @Amanduh@lemm.ee · 4 · 1 year ago

        Yeah, but if you’re not feeding it protected code and just asking simple questions about libraries etc., then it’s good.

      • @Grandwolf319@sh.itjust.works · 2 · 1 year ago

        I feel like it would have to cause an actual disaster, with assets getting destroyed, before this becomes common knowledge (like the Challenger shuttle or something).

    • @kibiz0r@midwest.social · 9 · 1 year ago

      The quality really doesn’t matter.

      If they manage to strip any concept of authenticity, ownership or obligation from the entirety of human output and stick it behind a paywall, that’s pretty much the whole ball game.

      If we decide later that this is actually a really bullshit deal – that they get everything for free and then sell it back to us – then they’ll surely get some sort of grandfather clause because “Whoops, we already did it!”

    • @antihumanitarian@lemmy.world · 7 · 1 year ago

      Have you tried recent models? They’re not perfect, no, but they can usually get you most of the way there, if not all the way, granted you know how to structure the problem and the prompt.

    • capital · 4 · 1 year ago (edited)

      People keep saying this, but it’s just wrong.

      Maybe I haven’t tried the language you have, but it’s pretty damn good at code.

      Granted, whatever it puts out needs to be tested and possibly edited, but that’s the same thing we had to do with Stack Overflow answers.

      • @CeeBee@lemmy.world · 17 · 1 year ago

        I’ve tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I’ve generally found that anything beyond a block or two of code becomes useless. The more it generates, the more weirdness starts popping up, or it outright hallucinates.

        For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day and I knew there was a cleaner way of doing it, but it just wasn’t coming to me. A quick “make this cleaner: <code>” and I was back to the rest of the code.
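
        For illustration, a made-up before/after of that kind of request (the function is hypothetical, not the actual code from that day):

        ```python
        # Verbose original, handed to the LLM with "make this cleaner:"
        def get_admin_names(users):
            names = []
            for user in users:
                if user["role"] == "admin":
                    if user["name"] not in names:
                        names.append(user["name"])
            return names

        # The kind of tightened version it might hand back:
        def get_admin_names(users):
            # dict.fromkeys de-duplicates while preserving first-seen order
            return list(dict.fromkeys(u["name"] for u in users if u["role"] == "admin"))
        ```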

        This is what LLMs are currently good for: they’re just another tool, like tab completion or code linting.

      • @VirtualOdour@sh.itjust.works · 2 · 1 year ago

        I use it all the time and it’s brilliant when you put in the basic effort to learn how to use it effectively.

        It’s allowing me and other open-source devs to increase the scope and speed of our contributions; just talking through problems is invaluable. Greedy, selfish people wanting to destroy things that help so many is exactly the rolling-coal mentality: fuck everyone else, I don’t want the world to change around me! It makes me so despondent about the future of humanity.