Truer words have never been said.
Idk if it’s the biggest problem, but it’s probably top three.
Other problems could include:
- Power usage
- Adding noise to our communication channels
- AGI fears if you buy that (I don’t personally)
Dead Internet theory has never been a bigger threat. I believe that’s the number one danger - endless quantities of advertising and spam shoved down our throats from every possible direction.
We’re pretty close to it already; most videos on YouTube and websites that exist are there purely so some advertiser can pay that person for a review or recommendation
Power usage probably won’t be a major issue; the main take-home message of the Deepseek brouhaha is that training and inference can be done much more efficiently than we had thought (our estimates had been based on well-funded Western companies that didn’t have to bother with optimization).
AI spam is an annoyance, but it’s not really AI-specific so much as the continuation of a trend; the Internet was already drowning in human-created slop before LLMs came along. At some point, we will probably all have to rely on AI tools to filter it out. This isn’t something that can be unwound, any more than you can undo computers being able to play chess well.
The problem with AI is that it pirates everyone’s work, repackages it as its own, and enriches people who did not create the copyrighted work.
I mean, it’s our work; the result should belong to the people.
This is where “universal basic income” comes into play
More broadly, I would expect UBI to trigger a golden age of invention and artistic creation, because a lot of people would love to spend their time just creating new stuff without the need to monetise it but can’t under the current system. Even if a lot of that would be shit or crazily niche, the more people doing it and the freer they are to do it, the more really special and amazing stuff will be created.
Two intrinsic problems with the current implementations of AI are that they are insanely resource-intensive and require huge training sets. Neither of those is directly a problem of ownership or control, though both favor larger players with more money.
And a third intrinsic problem: the current models have been shown to never approach human language capability even with unlimited training data, per papers written by OpenAI in 2020 and DeepMind in 2022, plus a Stanford paper which proposes that AI simply has no emergent behavior, only convergent behavior.
So yeah. Lots of problems.
While I completely agree with you, that is the one thing that could change with just one thing going right for any one of the groups working on exactly that problem.
It’s what happens after that that’s really scary, probably. Perhaps we all go into some utopian AI-driven future, but I highly doubt that’s even possible.
If gigantic amounts of capital weren’t available, then the focus would be on improving the models so they don’t need GPU farms running off nuclear reactors plus the sum total of all posts on the Internet ever.
Like Sam Altman, who invests in Prospera, a private “Start-up City” in Honduras where the board of directors pick and choose which laws apply to them!
The switch to Techno-Feudalism is progressing far too quickly for my liking.
Same as always. There is no technology capitalism can’t corrupt
AI has a vibrant open source scene and is definitely not owned by a few people.
A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It’s a shame to see so many actually cheering them on.
So long as there are big players releasing open weights models, which is true for the foreseeable future, I don’t think this is a big problem. Once those weights are released, they’re free forever, and anyone can fine-tune based on them, or use them to bootstrap new models by distillation or synthetic RL data generation.
I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, but there is no system in place to make sure that that “value” (I’m not convinced commercial AI has done anything valuable) created by AI is redistributed to the workers that it has displaced.
Welcome to every technological advancement ever applied to the workforce
The system in place is “open weights” models. These AI companies don’t have a huge head start on the publicly available software, and if the value is there for a corporation, most any savvy solo engineer can slap together something similar.
For some reason the megacorps have got LLMs on the brain, and they’re the worst “AI” I’ve seen. There are other types of AI that are actually impressive, but the “writes a thing that looks like it might be the answer” machine is way less useful than they think it is.
Most LLMs for chat, pictures, and clips are magical and amazing. For about 4-8 hours of fiddling, anyway, and then they lose all entertainment value.
As for practical use, the things can’t do math, so they’re useless at work. I write better emails on my own, so I can’t imagine being so lazy and socially inept that I need help writing an email asking for tech support or outlining an audit report. Sometimes the web summaries save me from clicking a result, but I usually do anyway because the things are so prone to very convincing hallucinations. So yeah, utterly useless in their current state.
I usually get some angsty reply when I say this from some techbro-AI-cultist-singularity-head who starts whinging about how it’s reshaped their entire life, but in some deep niche way that is completely irrelevant to the average working adult.
I have also talked to way too many delusional maniacs who are literally planning for the day an Artificial Super Intelligence is created and the whole world becomes like Star Trek and they personally will become wealthy and have all their needs met. They think this is going to happen within the next 5 years.
The delusional maniacs are going to be surprised when they ask the Super AI “how do we solve global warming?” and the answer is “build lots of solar, wind, and storage, and change infrastructure in cities to support walking, biking, and public transportation”.
Which is the answer they will get right before sending the AI back for “repairs.”
As we’ve already seen with Grok several times.
They absolutely adore AI, it makes them feel in-touch with the world and able to feel validated, since all it is is a validation machine. They don’t care if it’s right or accurate or even remotely neutral, they want a biased fantasy crafting system that paints terrible pictures of Donald Trump all ripped and oiled riding on a tank and they want the AI to say “Look what you made! What a good boy! You did SO good!”
The AI business is owned by a tiny group of technobros who have no concern for what they have to do to get the results they want (“fuck the copyright, and especially fuck the natural resources”), who want to be personally seen as the saviours of humanity (despite not being the ones who invented and implemented the actual tech), and who, like all big-wig biz boys, want all the money.
I don’t have problems with AI tech in principle, but I hate the current business direction and what the AI business encourages people to do and use the tech for.
Well, I’m on board for fuck intellectual property. If OpenAI doesn’t publish the weights, then all their datacenters get visited by the killdozer
The government likes concentrated ownership because then it has only a few phone calls to make if it wants its bidding done (be it censorship, manipulation, partisan political chicanery, etc.)
And it’s easier to manage and track a dozen bribe checks rather than several thousand.
The biggest problem with AI is the damage it’s doing to human culture.
While not solving any of the stated goals at the same time.
It’s a diversion. Its purpose is to divert resources and attention from any real progress in computing.
And those people want to use AI to extract money and to lay off people in order to make more money.
That’s “guns don’t kill people” logic.
Yeah, the AI absolutely is a problem, for those reasons, along with it being wrong a lot of the time and its ridiculous energy consumption.
The biggest problem with AI is that it’s the brute-force solution to complex problems.
Instead of trying to figure out the most power-efficient algorithm to do artificial analysis, they just threw more data and power at it.
Besides the fact of how often it’s wrong, by definition it won’t ever be as accurate or efficient as doing actual thinking.
It’s the solution you come up with the last day before the project is due cause you know it will technically pass and you’ll get a C.
It’s moronic. Currently, decision makers don’t really understand what to do with AI and how it will realistically evolve in the coming 10-20 years. So it’s getting pushed even into environments with 0-error policies, leading to horrible results and any time savings are completely annihilated by the ensuing error corrections and general troubleshooting. But maybe the latter will just gradually be dropped and customers will be told to just “deal with it,” in the true spirit of enshittification.
No?
Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.
Training an AI requires very strong hardware, but this is not an impossible hurdle, as the models on Hugging Face show.
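For illustration, here’s a minimal sketch of what running one of those small open models looks like, assuming the Hugging Face `transformers` library is installed; the model name is just an example of a small instruct model from the Hub, not a recommendation:

```python
# Minimal sketch: run a small open-weights model on modest hardware.
# Assumes `transformers` is installed; the model name is only an example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B params, runs on CPU or a small GPU
)

prompt = "Explain in one paragraph why open-weights models matter."
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```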
Yeah, I’m an AI researcher, and with the weights released for Deepseek anybody can run an enterprise-level AI assistant. To run the full model natively it does require around $100k in GPUs, but if you had that hardware it could easily be fine-tuned with something like LoRA for almost any application. That model can then be distilled and quantized to run on gaming GPUs.
It’s really not that big of a barrier. Yes, $100k in hardware is, but from a non-profit entity perspective that is peanuts.
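To give a rough idea of what “fine-tuned with something like LoRA” means in practice, here’s a minimal sketch using the Hugging Face `peft` library; the base model name and target modules are illustrative assumptions, not a recipe for the full Deepseek model:

```python
# Minimal LoRA sketch with Hugging Face `peft`: wrap a base causal LM with
# low-rank adapters so only a small fraction of parameters gets trained.
# The base model name and target_modules below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "deepseek-ai/deepseek-llm-7b-base"  # example; any causal LM on the Hub works
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)  # needed later to tokenize training data

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections that get adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()        # typically well under 1% of all params

# From here you would train with the standard `transformers` Trainer on your
# task data, then optionally distill and quantize the result for gaming GPUs.
```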
Also, adding a vision encoder for images to Deepseek would not be that difficult in theory, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it’s the same first-layer vision encoder and then textual chain-of-thought tokens are read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove that, I would be very interested to know!)
It’s possible to run the big Deepseek model locally for around $15k, not $100k. People have done it with 2x M4 Ultras, or the equivalent.
Though I don’t think it’s a good use of money personally, because the requirements are dropping all the time. We’re starting to see some very promising small models that use a fraction of those resources.
But the people with the money for the hardware are the ones training it to put more money in their pockets. That’s mostly what it’s being trained to do: make rich people richer.
But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used every day to make rich people richer.
Why attack the technology if it’s the rich people you are against and not the technology itself?
It’s not even the people; it’s their actions. If we could figure out how to regulate its use so its profit-generation capacity doesn’t build on itself exponentially at the expense of the fair treatment of others, and we instead actively proliferate the models that help people, then I’m all for it, for the record.
We shouldn’t do anything ever because poors