[deleted by user]

  • WolfLink@sh.itjust.works · 4 months ago

    I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues — wait, no em dash — it’s just that there are LLMs double-checking other LLMs’ work to try to find those issues. There are still no guarantees, since it’s still all LLMs.

    • skisnow@lemmy.ca · 3 months ago

      I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources when claiming stuff and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something that it doesn’t.

        • skisnow@lemmy.ca · 3 months ago

          How does having a key solve anything? It’s not that the source doesn’t exist, it’s that the source says something different to the LLM’s interpretation of it.

            • skisnow@lemmy.ca · 3 months ago

              The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”
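The mechanism described above — pinning an answer to the exact source bytes it was grounded in — can be sketched with a content hash. This is a minimal illustration, not the project's actual code; the function and sample data are hypothetical:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a stable fingerprint of the exact source bytes."""
    return hashlib.sha256(data).hexdigest()

# At generation time, record the hash of the source alongside the answer...
source = b"The 2019 report lists revenue of $4.2M."
recorded = content_hash(source)

# ...later, anyone can verify the cited source is byte-identical to what
# the answer was grounded in. A mismatch means the source changed; a match
# means any error lies in the model's reading of it, not in which text it saw.
assert content_hash(source) == recorded
```

Note this only proves *which* bytes were used; as the comment says, it cannot prove the model interpreted those bytes correctly.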

              Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for LLM bullshitting is that you have to check all its sources, then you haven’t fixed LLM bullshitting.

              If it does that more than twice, straight in the bin. I have zero chill any more.

              That’s… not how any of this works…

  • FrankLaskey@lemmy.ml · 4 months ago

    This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?

  • sp3ctr4l@lemmy.dbzer0.com · 4 months ago

    This seems astonishingly more useful than the current paradigm, this is genuinely incredible!

    I mean, fellow Autist here, so I guess I am also… biased towards… facts…

    But anyway, … I am currently uh, running on Bazzite.

    I have been using Alpaca so far, and have been successfully running Qwen3 8B through it… your system would address a lot of problems I have had to figure out my own workarounds for.

    I am guessing this is not available as a flatpak, lol.

    I would feel terrible to ask you to do anything more after all of this work, but if anyone does actually set up a podman installable container for this that actually properly grabs all required dependencies, please let me know!

      • sp3ctr4l@lemmy.dbzer0.com · 3 months ago

        Oh I entirely believe you.

        Hell hath no wrath like an annoyed high functioning autist.

        I’ve … had my own six-month blackout periods where I came up with something extremely comprehensive and ‘neat’ before.

        Seriously, bootstrapping all this is incredibly impressive.

        I would… hope that you can find collaborators, to keep this thing alive in the event you get into a car accident (metaphorical or literal), or, you know, are completely burnt out after this.

        … but yeah, it is… yet another immensely ironic aspect of being autistic that we’ve been treated and maligned as robots our whole lives, and then when the normies think they’ve actually built the AI from sci-fi, no, turns out it’s basically extremely talented at making up bullshit and fudging the details and being a hypocrite, which… appalls the normies when they have to look into a hyperpowered mirror of themselves.

        And then, of course, to actually fix this, it’s some random autist no one has ever heard of (apologies if you are famous and I am unaware of this), who is putting in an enormous amount of effort that… most likely, will not be widely recognized.

        … fucking normies man.

  • floquant@lemmy.dbzer0.com · 3 months ago

    Holy shit I’m glad to be on the autistic side of the internet.

    Thank you for proving that fucking JSON text files are all you need and not “just a couple billion more parameters bro”

    Awesome work, all the kudos.

  • itkovian@lemmy.world · 4 months ago

    Based AF. Can anyone more knowledgeable explain how it works? I am not able to understand.

  • termaxima@slrpnk.net · 3 months ago

    Hallucination has been mathematically argued to be unavoidable with LLMs. I don’t deny this may have drastically reduced it, or not, I have no idea.

    But hallucinations will just always be there as long as we use LLMs.

  • recklessengagement@lemmy.world · 3 months ago

    I strongly feel that the best way to improve the usability of LLMs is through better human-written tooling/software. Unfortunately most of the people promoting LLMs are tools themselves and all their software is vibe-coded.

    Thank you for this. I will test it on my local install this weekend.

  • Murdoc@sh.itjust.works · 3 months ago

    I wouldn’t know how to get this going, but I very much enjoyed reading it and your comments and think that it looks like a great project. 👍

    (I mean, as a fellow autist I might be able to hyperfocus on it for a while, but I’m sure that the ADHD would keep me from finishing to go work on something else. 🙃)

  • Terces@lemmy.world · 4 months ago

    Fuck yeah…good job. This is how I would like to see “AI” implemented. Is there some way to attach other data sources? Something like a local hosted wiki?

  • pineapple@lemmy.ml · 3 months ago

    This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.

    Likely accurate jokes aside, this will be a perfect match for my Obsidian vault, as well as for researching things much more quickly.

  • PolarKraken@lemmy.dbzer0.com · 3 months ago

    This sounds really interesting, I’m looking forward to reading the comments here in detail and looking at the project, might even end up incorporating it into my own!

    I’m working on something that addresses the same problem in a different way, the problem of constraining or delineating the specifically non-deterministic behavior one wants to involve in a complex workflow. Your approach is interesting and has a lot of conceptual overlap with mine, regarding things like strictly defining compliance criteria and rejecting noncompliant outputs, and chaining discrete steps into a packaged kind of “super step” that integrates non-deterministic substeps into a somewhat more deterministic output, etc.
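    The pattern described above — strictly defining compliance criteria, rejecting noncompliant outputs, and wrapping the retry loop into a more deterministic “super step” — can be sketched roughly like this. A minimal illustration only; the function names are hypothetical, not from either project:

```python
def run_validated_step(generate, is_compliant, max_retries=3):
    """Wrap a non-deterministic step (e.g. an LLM call) so that outputs
    failing the compliance check are rejected and regenerated. The
    packaged step either returns a compliant output or fails loudly,
    which makes its external behavior far more predictable."""
    for _ in range(max_retries):
        out = generate()          # non-deterministic substep
        if is_compliant(out):     # strict, deterministic acceptance test
            return out
    raise RuntimeError("no compliant output within retry budget")

# Toy usage: the first "generation" fails the check, the second passes.
attempts = iter(["not json", '{"ok": true}'])
result = run_validated_step(
    lambda: next(attempts),
    lambda s: s.strip().startswith("{"),
)
```

The key design point is that all the non-determinism stays inside the wrapper; callers only ever see outputs that passed the compliance gate, or an explicit error.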

    How involved was it to build it to comply with the OpenAI API format? I haven’t looked into that myself but may.
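    For reference, the core of the OpenAI chat format is fairly small: clients POST a `messages` array to `/v1/chat/completions` and expect a `choices` array back. A sketch of the two payload shapes (a subset of fields only; a real compatible server also handles streaming, errors, and model listing):

```python
# Minimal request/response shapes for an OpenAI-compatible
# /v1/chat/completions endpoint (subset of fields only).
request = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
}

response = {
    "id": "chatcmpl-123",            # any unique id
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hi there!"},
            "finish_reason": "stop",
        }
    ],
}

# A compliant server reads request["messages"] and returns this shape.
# Most clients only ever touch response["choices"][0]["message"]["content"].
```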

      • PolarKraken@lemmy.dbzer0.com · 3 months ago

        The very hardest part of designing software, and especially designing abstractions that aim to streamline use of other tools, is deciding exactly where you draw the line(s) between intended flexibility (users should be able, and find it easy, to do what they want) and the opinionated “do it my way here, and I’ll constrain options for doing otherwise”.

        You have very clear and thoughtful lines drawn here, about where the flexibility starts and ends, and where the opinionated “this is the point of the package/approach, so do it this way” parts are, too.

        Sincerely that’s a big compliment and something I see as a strong signal about your software design instincts. Well done! (I haven’t played with it yet, to be clear, lol)

  • Zexks@lemmy.world · edited · 3 months ago

    This is awesome. I’ve been working on something similar. You’re not likely to get much useful from here though; anything AI is by default bad here.