• @CodeInvasion@sh.itjust.works
    2 months ago

    Yah, I’m an AI researcher, and with the weights released for DeepSeek, anybody can run an enterprise-level AI assistant. Running the full model natively does require about $100k in GPUs, but with that hardware it could easily be fine-tuned with something like LoRA for almost any application. The fine-tuned model can then be distilled and quantized to run on gaming GPUs (see the sketch below).
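    For anyone curious what that looks like in practice, here’s a minimal LoRA sketch using Hugging Face’s peft library. The checkpoint name, rank, and target modules are illustrative assumptions, not a recipe:

    ```python
    # Minimal LoRA fine-tuning sketch using Hugging Face peft.
    # Checkpoint name and hyperparameters are illustrative only.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "deepseek-ai/DeepSeek-V3"  # hypothetical checkpoint choice
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
    tokenizer = AutoTokenizer.from_pretrained(base)

    # LoRA trains small low-rank adapter matrices instead of the full
    # weights, so the expensive base model can be adapted cheaply.
    config = LoraConfig(
        r=16,                                 # rank of the adapters
        lora_alpha=32,                        # adapter scaling factor
        target_modules=["q_proj", "v_proj"],  # a common choice of layers
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of total
    ```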

    It’s really not that big of a barrier. Yes, $100k in hardware is a lot of money, but from the perspective of a non-profit entity that’s peanuts.

    Also, adding a vision encoder for images to DeepSeek would not be that difficult in theory, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying they share the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers (a rough sketch of that wiring is below). This is a very recent insight from my team as of last week, so if anyone can disprove it, I would be very interested to know!
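    To make the “first-layer vision encoder” idea concrete, here’s a LLaVA-style sketch of how a vision encoder typically bolts onto an LLM: a small projection maps patch features into the language model’s token-embedding space, and every later layer just sees them as ordinary tokens. The dimensions are made-up placeholders:

    ```python
    import torch
    import torch.nn as nn

    class VisionToLLMAdapter(nn.Module):
        def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
            super().__init__()
            # The only new trainable piece: project patch features into
            # the LLM embedding space so subsequent layers can read them.
            self.proj = nn.Linear(vision_dim, llm_dim)

        def forward(self, patch_feats: torch.Tensor, text_embeds: torch.Tensor):
            image_tokens = self.proj(patch_feats)  # (B, n_patches, llm_dim)
            # Prepend image tokens to the text token embeddings.
            return torch.cat([image_tokens, text_embeds], dim=1)

    # Batch of 2: 256 patch features from the encoder, 16 text tokens.
    adapter = VisionToLLMAdapter()
    fused = adapter(torch.randn(2, 256, 1024), torch.randn(2, 16, 4096))
    print(fused.shape)  # torch.Size([2, 272, 4096])
    ```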

    • @cyd@lemmy.world
      2 months ago

      It’s possible to run the big DeepSeek model locally for around $15k, not $100k. People have done it with 2x M4 Ultras, or the equivalent; the rough memory math below shows why that’s enough.
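      Back-of-envelope check (parameter count and quantization level are approximations):

      ```python
      # Rough memory math for hosting the full model in unified memory.
      params = 671e9   # approximate DeepSeek-V3/R1 parameter count
      bits = 4         # a common local-inference quantization level
      weight_gb = params * bits / 8 / 1e9
      print(f"~{weight_gb:.0f} GB of quantized weights")  # ~336 GB

      # Two 192 GB unified-memory machines give 384 GB, enough for the
      # weights plus KV cache; fp16 weights (~1.3 TB) would not fit.
      ```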

      Personally, though, I don’t think it’s a good use of money, because the hardware requirements are dropping all the time. We’re starting to see some very promising small models that use a fraction of those resources.