[deleted by user]
I hope people in the US wake the fuck up and these companies start losing users to foreign ones by the millions
deleted by creator
Probably a shallow response …
But I always figured AI/LLMs are basically apocalyptic for all sorts of individualistic values in computing (including privacy but also independence and diversity).
Whether they’re good or useful etc, I just struggle to see how they will ever be justifiable against these sorts of values.
Sure, local models and our hardware will get better … but better than the state of the art from the big labs and providers? Given that data and training are the big bottlenecks on quality … I struggle to see how AI isn’t a complete feudal capturing of information computing and processing. Not to mention what happens to the pipeline that produces information content if everyone is only consuming it through the models that train on it.
So for me the big question is, what’s our call on a possible (likely even?) future where we are forever stuck using cloud-provided AI along with all of its negatives, in the same way that basically all of us have been and still are stuck using MS Windows, Google and the big-social-media hellscape?
For me, I baulk at this.
deleted by creator
Oh I hear you (and appreciate the response).
For me, I can’t help but think of another alternative, which I’m surprised I haven’t heard of yet …
stripping down one’s personal technological cognitive load to a stack of systems that can fit into one’s brain (like the Python mantra), focusing on learning that stack well building sustainable and stable systems, and then just detoxing from the increasingly polluted digital information stream (protected commons, traditional formats such as books and in person engagement … dunno).
Depends on what the end goal is, but AIs seem to be about using tech more, or just opting out of sovereignty. Something like the above seems to me to be about using tech less (in the end) and pushing tech toward being a secondary tool rather than an end in itself.
deleted by creator
Ha yes … on the other hand, it’s easy to forget how damn expansive non-internet information is: the whole world ran on that shit for millennia.
deleted by creator
IMHO LLM usage isn’t coherent with independence. That being said, I’ve written quite a bit on self-hosting LLMs. There are quite a few tools available, like ollama, itself relying on llama.cpp, that can both work locally and provide an API-compatible replacement for cloud services. As you suggested though, at home one typically doesn’t have the hardware (GPUs with 100+ GB of VRAM) to run the state of the art. There is a middle ground, though, between full cloud (API key, closed source) and open source at home on low-end hardware: running SOTA open models on the cloud. It can be done on any cloud, but it’s much easier to start with dedicated hardware and tooling; HuggingFace is great for that, but there are multiple options.
TL;DR: closed cloud -> open models on cloud -> self-hosted is a path toward more independence, including training.
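To make the "API-compatible replacement" point concrete: ollama serves an OpenAI-style chat endpoint on its default port (11434), so existing tooling can usually just be repointed at localhost. A minimal sketch with only the standard library; the model name `llama3.1` and the URL are assumptions, adjust them to whatever your own setup runs:

```python
import json
import urllib.request

# ollama's default OpenAI-compatible endpoint (assumed default install)
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # get one complete reply instead of a token stream
    }

def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local ollama server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# usage (needs a running ollama server and a pulled model, e.g. `ollama pull llama3.1`):
# print(ask("llama3.1", "Explain llama.cpp in one sentence."))
```

The same payload shape works against cloud providers too, which is exactly why the closed-cloud -> open-cloud -> self-hosted migration path is relatively painless.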
deleted by creator
Being responsible with powerful technology starts with knowing who is using it.
The fuck it does. Claude is already censored: you can’t get a recipe for a poison, schematics for a bomb, or advice on how to hide a body. If you can, then Anthropic’s engineers didn’t do their job.
Knowing who is using it helps either with conditional censorship, or with helping governments track people based on their prompts, or with just plainly lying and using the data for analytics and training. All of these reasons are shit.
And don’t tell me this is to protect the kids again. Let the parents do their job.
Even if the tools are not yet there, “they” want to know exactly who asks for code to things like a DIY radar station or autonomous drone control. We’re well into “first they came” territory.
deleted by creator
I hope I’m wrong too. But I’m pretty pessimistic.
I love the sound of this but can I ask, if the net goes down and you hardly notice, where do you get your ‘net’ from? Or is it that your intranet doesn’t need internet as such and everything is just local?
I might have answered my own question there but I’m interested to understand it a bit more.
Thanks!
deleted by creator
OK, I get you. You don’t need the internet because you have the internet in your terabyte farm. Pretty cool.
Thanks for the detailed reply.
One final question, I’m sure it’s dark at the bottom of the deep rabbit hole you are in: what do you do for batteries for your head torch?! 😀
deleted by creator
That’s weird, I don’t remember having an alt account called SuspiciousCarrot78, but surely you must be me: same project, same neurology… same fixation pattern.
deleted by creator
Yep, definitely talking to myself again
Jokes aside, I haven’t seen you mention anything for media streaming.
I passionately recommend Navidrome for music. It is my absolute favorite and most used self-hosted service.
For acquiring media like film and music: depending on where you live, ripping discs from your local library is in some places arguably protected fair use. (This dates from the era when mp3 players became common and runners would take rented CDs outside in their walkman.) In my experience, a 480p DVD is much higher quality than a 480p internet stream, and the total size is much smaller than what you find in downloads.
ARM can help you automatically rip these as long as you have a drive in your PC. I got ARM running in a proxmox LXC with drive passthrough, but that was honestly a pain to set up, so I’m not sure you should go that exact route. Either way, the moment ARM is functional it’s smooth sailing and your only concern becomes storage space.
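If the proxmox LXC route sounds like too much pain, ARM also ships a Docker image, which sidesteps most of the passthrough headaches. A rough compose sketch from memory; treat the image name, container paths, and port as assumptions to verify against the ARM wiki before using:

```yaml
services:
  arm:
    image: automaticrippingmachine/automatic-ripping-machine  # verify name on Docker Hub
    devices:
      - /dev/sr0:/dev/sr0             # pass the optical drive through to the container
    volumes:
      - ./arm-media:/home/arm/media   # ripped output lands here (path is a guess)
      - ./arm-config:/etc/arm/config  # persistent config (path is a guess)
    ports:
      - "8080:8080"                   # web UI for monitoring rips
    restart: unless-stopped
```

The key bit is the `devices:` passthrough of `/dev/sr0`, which is the same hurdle the LXC route makes you solve by hand.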
deleted by creator
deleted by creator
Yucky… The identity partner looks nasty. I bet all these corps are linked to the Epstein class and their think tanks. The 1% are taking control of their cattle…
Personally, I would like to use AI, but I don’t, due to it being non-local. I know there are local AIs that could do things, but I don’t know which models are the good ones for each task. If someone can give me pointers for it, I’d be grateful, for example a good model for local coding :)
deleted by creator
Thanks for the pointers. For the hardware, I have a 9070 XT with 16 GB of VRAM. For sure it can get very expensive; as I only do this as a hobby, I don’t want to pay that kind of money. I’m okay with having a slow LLM, as it wouldn’t be a tool I’d use often. I prefer to try doing things on my own and use the AI to help with little tasks first, such as checking why one line of code didn’t want to work correctly, or things like that.
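As a rough rule of thumb for what fits in 16 GB of VRAM: a quantized model needs roughly params × bits ÷ 8 bytes for the weights, plus extra for the KV cache and activations. A back-of-envelope sketch; the 20% overhead factor is a crude assumption and real usage grows with context length:

```python
def weight_gb(params_billions: float, bits: int) -> float:
    """Approximate size of the model weights alone, in GB."""
    return params_billions * bits / 8  # 1B params at 8-bit is roughly 1 GB

def fits(params_billions: float, bits: int, vram_gb: float,
         overhead: float = 1.2) -> bool:
    """Crude check: weights plus ~20% for KV cache / activations vs available VRAM."""
    return weight_gb(params_billions, bits) * overhead <= vram_gb

# e.g. a 14B coding model at 4-bit quantization:
print(weight_gb(14, 4))  # 7.0 GB of weights
print(fits(14, 4, 16))   # True: leaves headroom in 16 GB
print(fits(70, 4, 16))   # False: a 70B model won't fit even at 4-bit
```

So on a 16 GB card, 4-bit quants somewhere in the 7B–14B range are the realistic ballpark; anything much bigger spills out of VRAM and gets painfully slow.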