But that’s still confusing, because we already can. Yeah, you might need a bit more hardware, but… not that crazy. Plus, some simpler models can run on more normal hardware.
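For what it’s worth, here’s a minimal sketch of what “normal hardware” gets you, using llama-cpp-python with a 4-bit quantized model. The GGUF file name is hypothetical; any quantized 7B model downloaded from Hugging Face would work:

```python
# Sketch: running a small quantized model on consumer hardware
# (pip install llama-cpp-python). A 4-bit 7B model fits in ~6 GB of RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window to allocate
    n_threads=8,   # CPU threads to use for inference
)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```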
Might not be easy to set up, that is true.
For large-context models the hardware is prohibitively expensive.
I personally use RunPod. It doesn’t cost much, even for the high-end stuff. Tbh the OpenAI API is easier though, and mostly gives better results.
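For anyone comparing, a minimal sketch of the OpenAI API route mentioned above (assumes the openai Python package and an OPENAI_API_KEY set in the environment; the model name is just one example):

```python
# Sketch: the OpenAI chat completions API (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize Llama 2's context window."}],
)
print(resp.choices[0].message.content)
```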
I specifically said “large context”. How many tokens can you get through before it goes insanely slow?
The max token window is 4k for Llama 2, though there are some fine-tunes that push the context up further. Speed is mostly limited by your budget: you can stack GPUs, and most models are available (including the really expensive ones).
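To make that concrete, a sketch of loading one of those extended-context fine-tunes with llama-cpp-python. The file name and the RoPE scale factor are assumptions; a fine-tune stretched to 8k usually documents its recommended scale:

```python
# Sketch: loading a long-context Llama 2 fine-tune with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-8k.Q4_K_M.gguf",  # hypothetical extended-context fine-tune
    n_ctx=8192,            # allocate the larger window
    rope_freq_scale=0.5,   # halve RoPE frequency to stretch the 4k training context to 8k
    n_gpu_layers=-1,       # offload all layers to GPU(s) if built with CUDA support
)
```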
I’m just letting you know: if you want something easy, just use ChatGPT. I don’t find it overly expensive for what it is.
You can, but things as good as ChatGPT can’t be run on local hardware yet. My main obstacle is language support other than English.