I'm sorry, little one (image, sh.itjust.works)
Posted by @RandomlyRight@sh.itjust.works to Lemmy Shitpost@lemmy.world • 10 months ago • 7 comments
@MotoAsh@lemmy.world • 10 months ago
Why use commercial graphics accelerators to run a highly limited "AI"-specific workload? There are cards made specifically to accelerate machine learning that are highly potent with far less power draw than 3090s.

@mergingapples@lemmy.world • 10 months ago
Because those specific cards are fuckloads more expensive.

@d00ery@lemmy.world • 10 months ago
What are you recommending? I'd be interested in something that's similar in price to a 3090.

(unattributed reply)
It's for inference, not training.

@MotoAsh@lemmy.world • 10 months ago
Even better, because those are cheap as hell compared to 3090s.

(unattributed reply)
But can they run Crysis?

@GBU_28@lemm.ee • 10 months ago
Huh? Stuff like llama.cpp really wants a GPU; a 3090 is a great place to start.
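For anyone wondering what "llama.cpp really wants a GPU" looks like in practice, here is a minimal sketch using the llama-cpp-python bindings to llama.cpp. The model path and prompt are hypothetical placeholders, and `n_gpu_layers=-1` assumes a CUDA/ROCm-enabled build so that every layer is offloaded to the card:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model file and prompt below are illustrative placeholders, not from the thread.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # hypothetical GGUF model file
    n_gpu_layers=-1,  # offload all layers to the GPU (requires a CUDA/ROCm build)
    n_ctx=2048,       # context window size
)

out = llm("Q: Why buy a used 3090 for local inference? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The `n_gpu_layers` knob is why VRAM-heavy consumer cards like the 3090 (24 GB) are popular for this: the more layers fit in VRAM, the fewer fall back to the much slower CPU path.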