I’ve recently played with the idea of self-hosting an LLM. I’m aware it won’t reach GPT-4 levels, but being able to put confidential data in my prompts without restraint is a very nice capability to have.

Has anyone got experience with this? Any recommendations? I have downloaded the full Reddit dataset, so I could retrain the model on it, since selected communities provide immense value and knowledge (hehe, this is exactly what Reddit, Twitter, etc. are trying to avoid…)

  • @CeeBee@lemmy.world · 2 years ago

    The best/easiest way to get started with a self-hosted LLM is to check out this repo:

    https://github.com/oobabooga/text-generation-webui

    Its goal is to be the Automatic1111 of text generators, and it does a fair job at it.

    A good model that’s said to rival GPT-3.5 is the new Falcon model. The full-sized version is too big to run on a single GPU, but the 7B version “only” needs about 16 GB of VRAM.

    https://huggingface.co/tiiuae/falcon-7b
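    As a rough sanity check on those memory figures, you can estimate the weights' footprint as parameter count times bytes per parameter (a simplified estimate that ignores activations and runtime overhead):

```python
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Rough memory estimate: parameters x bytes per parameter, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Falcon-7B at fp16 (2 bytes per parameter): ~14 GB for the weights alone,
# which is why ~16 GB of VRAM is the usual figure once overhead is added.
print(model_memory_gb(7, 2))  # 14.0
```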

    There’s also the Wizard-uncensored model that is popular.

    https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored

    There are a ton of models out there with new ones popping up every day. You just need to search around. The oobabooga repo has a few models linked in the readme also.

    Edit: there’s also h2ogpt, which seems really promising. I’m going to try it out in the next couple of days.

    https://github.com/h2oai/h2ogpt

  • @h3ndrik@feddit.de · 2 years ago

    KoboldCPP works both with and without a GPU, and it’s quite easy to install and use. I’d recommend something like that for a beginner.

  • @dorkian_gray@lemmy.world · 2 years ago

    If you want an extremely low-code option, I recommend GPT4All. The prebuilt binaries/exes run locally on CPU and give you a choice of model, so you can try a couple to see which you like best. It’s remarkably quick on my Ryzen 7 3700X, and it doesn’t take long to get a little web server running with LangChain if you want to put in a bit more effort, too.
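    A minimal sketch of the “little web server” idea, with the model call stubbed out. The `make_handler`/`generate_fn` names here are hypothetical, not GPT4All’s or LangChain’s actual API; the point is just the shape of the wiring — in practice you’d swap the echo stub for your local model’s generate call:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(generate_fn):
    """Build a request handler that feeds POSTed prompts to generate_fn."""
    class PromptHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            prompt = self.rfile.read(length).decode("utf-8")
            reply = generate_fn(prompt)
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(reply.encode("utf-8"))
    return PromptHandler

def run_server(generate_fn, port=8080):
    """Serve prompts on localhost; blocks until interrupted."""
    HTTPServer(("127.0.0.1", port), make_handler(generate_fn)).serve_forever()
```

    To try it, call e.g. `run_server(lambda prompt: "echo: " + prompt)` and POST a prompt to `http://127.0.0.1:8080/`.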

  • @kozonak@lemmy.world · 2 years ago

    Not sure if you’re asking about already-trained models or whether you want to train your own.

    If you just want to have fun, the small-to-medium models are pretty OK. Things like Wizard-Vicuna 13B or the smaller 7B. You just have to try some of them until you find what’s best for your use case. For example, I have a model running Discord bots (with different personalities), but the same model would work badly with my other projects, especially considering that with some models you can just chat while others need instructions.

    There are also recent models that approach GPT levels. The downside is they are huge in terms of hardware cost (hundreds of GBs of RAM, multiple GPUs). But they won’t necessarily be better than a smaller, more focused model.

    Get oobabooga (the Automatic1111 of chat LLMs) and then search for TheBloke on Hugging Face for models.
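    TheBloke’s uploads are mostly quantized versions of popular models, and the same parameter-count arithmetic shows why quantization matters (again a simplified estimate of the weights alone, ignoring runtime overhead):

```python
def weights_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate size of the model weights alone, in GB."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# A 13B model: fp16 needs ~26 GB, but a 4-bit quantization fits in ~6.5 GB --
# the difference between "multiple GPUs" and a single consumer card.
print(weights_gb(13, 16), weights_gb(13, 4))  # 26.0 6.5
```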

  • @TheDarkKnight@lemmy.world · 2 years ago

    Honestly, all of these are great suggestions for today, but this area is moving so fast that I would almost suggest holding off six months to a year for a better solution to rise to the top. Capabilities grow daily, and you may put in the work to get this set up only to have a much more capable solution appear soon afterwards. Just a thought, though; if it’s mainly for a fun experiment, then try some of these out!

  • @Itrytoblenderrender@lemmy.world · 2 years ago

    There is also runpod.io. You can rent quite powerful machines on an hourly basis, which gives you the option of running the large models. They also have templates, so the machine will be set up and ready to go in minutes. All you have to do is load the model you’d like to try via the oobabooga web interface of your machine.