• @RustyNova@lemmy.world · 94 points · 1 year ago

    *bad Devs

    Always look at the official repository. Not just to see if it exists, but also to make sure it isn’t a fake/malicious one.

    • db0 (OP) · 19 points · 1 year ago

      You’d be surprised how well someone who wants to can camouflage their package to look legit.

      • @RustyNova@lemmy.world · 6 points · 1 year ago

        True. You can’t always be 100% sure. But a quick check for download counts/version count can help. And while searching for it in the repo, you can see other similarly named packages and prevent getting hit by a typo squatter.

        Besides, it’s not just about security. What if the package you’re installing has a big banner in the readme that says “Deprecated and full of security issues”? It’s not a bad package per se, but it’s still something you need to know.
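
The quick checks described above (download counts, similarly named packages) can even be partly automated. A minimal sketch in Python, assuming a hypothetical allowlist of your project’s real dependencies, that flags a requested name which closely resembles, but doesn’t match, a known package:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of packages your project actually depends on.
KNOWN_GOOD = {"requests", "numpy", "pandas", "flask"}

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

def typosquat_suspects(name: str, threshold: float = 0.8) -> list[str]:
    """Return known packages that `name` closely resembles but does not match.

    A near-miss like 'reqeusts' vs 'requests' is the classic typosquat pattern.
    """
    return sorted(
        good for good in KNOWN_GOOD
        if name != good and similarity(name, good) >= threshold
    )
```

For example, `typosquat_suspects("reqeusts")` flags `requests`, while an exact match or an unrelated name comes back clean. The threshold is a judgment call, not a standard value.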

      • @KairuByte@lemmy.dbzer0.com · 1 point · 1 year ago

        Yeah, I’m confused about what the intent of the comment was. Apart from a code review, I don’t understand how someone would be able to tell that a package is fake. Unless they are grabbing it from a place with reviews/comments to warn them off.

        • KillingTimeItself · 1 point · 1 year ago

          the first and most obvious sign is multiple identical packages, all appearing to be the same thing, with weird stats and figures.

          And possibly weird sizes. Usually people don’t try hard on package managing software, unless it’s an OS for some reason.

          • @KairuByte@lemmy.dbzer0.com · 1 point · 1 year ago

            Unless you’re cross checking every package, you’re not going to know that there are multiple packages. And a real package doesn’t necessarily give detailed information on what it does, meaning you can easily mistake real packages as fake when using this as a test.

            The real answer is to not trust AI outputs, but there is no perfect answer to this since those fake packages can easily be put up and sound like real ones with a cursory check.

            • KillingTimeItself · 1 point · 1 year ago

              Depends on how you integrate it, I suppose. A system that abstracts that away is pretty awful.

              At the very least, you should be wary of there being more than one package without an explicit reason for it.

      • KillingTimeItself · 1 point · edited · 1 year ago

        we just experienced this with the xz/liblzma backdoor on Debian, according to recent reports. Two years of either manufactured dev history, or one very, very weird episode.

    • db0 (OP) · 64 points · 1 year ago

      “Hallucinate” is the standard term used to explain the GenAI models coming up with untrue statements

      • Cyrus Draegur · 21 points · edited · 1 year ago

        in terms of communication utility, it’s also a very accurate term.

        when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

        when AIs hallucinate, it’s because their predictive models generate results that do not align with reality, flying off the rails and presuming what was calculated to be likely to exist rather than referencing positively certain information.

        it’s the same song, but played on a different instrument.

        • kronisk · 5 points · 1 year ago

          when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

          Is it really? You make it sound like this is a proven fact.

          • Cosmic Cleric · 3 points · edited · 1 year ago

            Is it really? You make it sound like this is a proven fact.

            I believe that’s where the scientific community is moving towards, based on watching this Kyle Hill video.

          • KillingTimeItself · 2 points · 1 year ago

            i mean, idk about the assumptions part of it, but if you asked a psych or a philosopher, im sure they would agree.

            Or they would disagree, and have about three pages’ worth of thoughts to immediately exclaim; otherwise they would feel uneasy about their statement.

    • @planish@sh.itjust.works · 9 points · 1 year ago

      No?

      An anthropomorphic model of the software, wherein you can articulate things like “the software is making up packages”, or “the software mistakenly thinks these packages ought to exist”, is the right level of abstraction for usefully reasoning about software like this. Using that model, you can make predictions about what will happen when you run the software, and you can take actions that will lead to the outcomes you want occurring more often when you run the software.

      If you try to explain what is going on without these concepts, you’re left saying something like “the wrong token is being sampled because the probability of the right one is too low because of several thousand neural network weights being slightly off of where they would have to be to make the right one come out consistently”. Which is true, but not useful.

      The anthropomorphic approach suggests stuff like “yell at the software in all caps to only use python packages that really exist”, and that sort of approach has been found to be effective in practice.
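
Prompting aside, the “only use packages that really exist” constraint can also be checked mechanically after generation. A minimal sketch, assuming you have already extracted the module names an LLM’s code imports (note that a PyPI package name and its import name can differ, so this is a coarse filter, not a guarantee):

```python
from importlib.util import find_spec

def nonexistent_modules(names: list[str]) -> list[str]:
    """Return the names that cannot be resolved in the current environment.

    find_spec() returns None for modules that are not installed, which is
    exactly the failure mode a hallucinated package produces at import time.
    """
    missing = []
    for name in names:
        try:
            if find_spec(name) is None:
                missing.append(name)
        except (ModuleNotFoundError, ValueError):
            # e.g. a submodule of a parent that does not exist
            missing.append(name)
    return missing
```

For example, `nonexistent_modules(["json", "surely_not_a_real_module_xyz"])` returns only the made-up name, which you can then verify by hand before ever running `pip install`.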

  • @anlumo@lemmy.world · 30 points · 1 year ago

    I just want an LLM with a reasonable context window so we can actually write real working packages with it.

    The demos look great, but it’s always just around 100 lines of code, which is beginner level. The only use case right now is fake packages.

    • @VirtualOdour@sh.itjust.works · 2 points · 1 year ago

      I use it for writing functions a lot; tell it the inputs and desired outputs and it’ll normally make what I want. Recently GPT has gotten good at continuing where it left off, too.

      • @anlumo@lemmy.world · 1 point · 1 year ago

        I’m using Codeium for that. Works pretty well as a glorified autocomplete, but not much more. Certainly saves a lot of typing though, but I have to double-check everything it produces, because sometimes it adds subtle errors.

    • @sugar_in_your_tea@sh.itjust.works · 1 point · 1 year ago

      I’m not particularly interested. Some on my team are playing with it, but I honestly don’t see much point since they spend more time fixing the generated code than they would writing it.

      And I don’t think it’ll ever really work well (in the near-ish future) for the most common type of dev work: fixing bugs and making small changes to existing code.

      It would be awesome if there was some kind of super linter instead. I spend far more time reading code than writing it, so if it can catch bugs, that would be interesting.
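
A full “super linter” is an open problem, but narrow static checks are cheap today. A toy sketch using Python’s stdlib `ast` module to catch one classic bug class, mutable default arguments, the kind of thing such a tool could flag while you read:

```python
import ast

def mutable_default_args(source: str) -> list[str]:
    """Return names of functions that use a mutable literal as a default.

    `def f(x=[])` shares one list across all calls -- a classic Python bug.
    """
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # kw_defaults may contain None for kw-only args without defaults;
            # isinstance() on None is simply False, so that is safe here.
            for default in node.args.defaults + node.args.kw_defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    flagged.append(node.name)
                    break
    return flagged
```

Real linters (pylint, ruff) already ship this exact rule; the point is that the bug-catching half of the wish is far more tractable than the code-writing half.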

      • @anlumo@lemmy.world · 1 point · 1 year ago

        In my experience with Codeium, it sometimes works ok for three or four lines of code at once. I’ve actually had a few surprises where it nailed what I was going for where I didn’t expect it. But most of the time, it’s just duplicating code from elsewhere in the same file, which usually doesn’t make sense.

        It’s also pretty good for stuff where I’d usually build some exotic regex to search/replace (or do it manually, because it’d take longer to come up with the expression), like transforming an enum into a switch construct for its members, or mapping said enum to a string of the member’s name.

        This is very far from taking over my job, though. I’d love to be more of a conductor than the guy playing all instruments in the orchestra at once.

        • @sugar_in_your_tea@sh.itjust.works · 1 point · 1 year ago

          To each their own, of course. It just seems like the productivity gains are perceived, not actual.

          For an enum to a switch, I just copy the enum values and run a regex on those copied lines. Both would take me <30s, so it’s a wash. That specific one would be trivial with most IDEs as well, just type “switch (variable) {” and it could autocomplete an exhaustive switch, all without LLMs.
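
The copy-and-regex trick can be sketched in a few lines. Assuming a hypothetical `Color` enum whose members were pasted in as plain text, a multiline `re.sub` turns each member into a match arm:

```python
import re

# Hypothetical enum members, copied straight out of the definition.
enum_lines = """\
RED = 1
GREEN = 2
BLUE = 3
"""

# Rewrite each 'NAME = value' line as an arm of a match statement.
arms = re.sub(
    r"^\s*(\w+)\s*=\s*\S+$",
    r"    case Color.\1:\n        ...",
    enum_lines,
    flags=re.MULTILINE,
)
print("match color:\n" + arms)
```

The same substitution works interactively in vim (`:%s/.../.../`) or any editor with regex replace, which is the point being made: no LLM required for the mechanical part.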

          Then again, I’m pretty old school. I still use vim as my editor (with language server plugins), and I’m really comfortable with those kinds of common tasks. I’m only going to bother learning to use the LLM if it’s really going to help (e.g. automate writing good unit tests).

          • @anlumo@lemmy.world · 1 point · 1 year ago

            Sometimes those things are way more complex, for example when it’s about matching over a string rather than an enum to convert it into an enum. Typing out a regex would take me maybe 10mins or more, and with the LLM I can just describe roughly what I want (since it knows the language, I don’t have to explain it in detail, just something like “make this into a switch statement” is sufficient usually).

            10mins at a time really adds up over a full work day.
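
For the string-to-enum direction in Python specifically, the stdlib makes this shorter than a hand-rolled regex; a minimal sketch with a hypothetical `Color` enum:

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

def parse_color(text: str) -> Color:
    """Map user input onto the enum, ignoring case and surrounding whitespace."""
    try:
        # Enum[name] looks a member up by its name, e.g. Color["RED"].
        return Color[text.strip().upper()]
    except KeyError:
        raise ValueError(f"unknown color: {text!r}") from None
```

`parse_color(" green ")` yields `Color.GREEN`, and unknown strings raise a `ValueError` instead of silently passing through. In languages without that lookup, the regex-generated switch above is the equivalent.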

    • @RatBin@lemmy.world · 1 point · 1 year ago

      I have tried the Copilot integration in Edge out of curiosity, and if you feed the AI the context of the page, the response can be useful. There is a catch, though:

      • when opening a document, the accepted formats are HTML, TXT, and PDF. The documentation of a software package can be summarized, but the source will be the context of the page and not a web search, which is good in this case

      • when generating new information, the model can be far too synthetic, cutting out potentially useful information.

      I still think you need to read the documentation yourself, maybe using the AI integration only when you need a general idea of the document.

      What I do is first read the summary of the document by bullet point, then read the PDF file as a whole. By the time I do so, the LLM has given it enough of a structure to facilitate my reading…

  • Cosmic Cleric · 5 points · 1 year ago

    From the article…

    hallucinated software packages – package names invented by generative AI models, presumably during project development

  • Flying Squid · 3 points · 1 year ago

    It’s 2024. No more quality control, no more double-checking, not in any industry at this point. We’re all alpha testers. Not even beta testers.

    As the old entertainment industry adage goes when anything goes wrong on the set, “we’ll fix it in post.”

  • KillingTimeItself · 0 points · 1 year ago

    daily PSA that something like [insert number of packages] are deprecated on shipment of software.

    Thanks guys, very cool.

  • @Railcar8095@lemm.ee · 0 points · 1 year ago

    I’m honestly starting to get tired of “people confuse an advanced chatbot with Jarvis and bad things happen”.

    Especially when it’s shitty/lazy devs that don’t code review.