You can take “justifiable” to mean whatever you feel it means in this context, e.g. morally, artistically, environmentally, etc.
The best use of AI I’ve seen thus far is reading legislative bills. Those monstrosities are so fucking long and filled with earmarks that it’s next to impossible to understand what is in them.
Having an AI not only read the bill but keep watch on it as it goes through Congress is probably the best use of AI because it actually helps citizens.
I am on record saying we need an AI that can track prices of various things and then predict the best time to buy something.
I want an AI bot that saves me money or gets me a good deal or extracts money from the capital class.
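As a rough sketch of what I mean (the price source, the product history, and the “good deal” rule here are all made up), a script that watches prices and flags when something dips well below its recent average:

```python
# Hypothetical price watcher: compares today's price against a rolling average
# and flags a "good deal" when it drops noticeably below it. fetch_price() is
# a stand-in for whatever scraper or price API you actually plug in.
from statistics import mean

def fetch_price(product: str) -> float:
    """Stand-in for a real scraper / price API call."""
    raise NotImplementedError("plug in your own price source here")

def is_good_deal(history: list[float], today: float, discount: float = 0.10) -> bool:
    """Flag a deal when today's price is at least `discount` below the recent average."""
    if not history:
        return False
    return today <= mean(history) * (1 - discount)

# Example with a made-up price history: a drop to 40 gets flagged.
history = [52.0, 49.5, 51.0, 50.5]
print(is_good_deal(history, 40.0))  # True
```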
Except they can screw up at that role.
There’s a lawsuit because DOGE asked ChatGPT to summarize projects’ DEI-ness, and, for example, it declared a grant for fixing air conditioning a DEI initiative.
Asking for quotes and explanations would help, i.e. treating the LLM output as a smart index/table of contents. You’d be able to quickly verify claims.
As long as you follow through and actually source the original, instead of assuming the quotes provided are intact. The point is that, in the case above, DOGE was doing no follow-up, and most people who look to it as a “summary” assistant aren’t wanting to dig deeper.
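For what it’s worth, a minimal sketch of that “quotes as an index” idea, assuming the official OpenAI Python client (the model name and prompt wording are just placeholders):

```python
# Sketch: ask the model for verbatim quotes plus the section each one came
# from, so every claim can be checked against the original bill text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def index_bill(bill_text: str) -> str:
    prompt = (
        "List the major provisions of this bill. For each one, include a short "
        "verbatim quote and the section number it appears in, so the reader can "
        "verify it against the original text.\n\n" + bill_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Usage: print(index_bill(open("hr1234.txt").read()))
```

The output is only a pointer, though; the whole point is that you still go read the quoted sections yourself.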
Hell, even without AI, lawmakers frequently got caught admitting they didn’t read the law they signed; they didn’t have time for that. Now, with AI summaries as an excuse…
That’s just general incompetence; lying with statistics, for example, has been around for a while.
It’s a tool, like everything else. It’s easy to google wrong info. You can get wrong info from an encyclopedia.
You can even from a dictionary: One thing that slightly annoys me is the change in the spelling of “yeah” such that “yea” is a common alternate spelling - thanks to autocorrect. “Yea” was a word - it’s archaic these days. If you see someone say “Yay or nay” that was “yea or nay”. “Yea” is not the same meaning as “yes” or “yeah”, although it is somewhat similar.
I remember someone quoting dictionary definitions to me to try and “prove” that “yea” meant the exact same as “yeah” or “yes”.
They were wrong.
But the point is: The tool is just a tool. AI is a tool.
Yea
Also transcribing small-town council meetings so that reporters can stay up to date without having to listen to 6 hours of mind-numbing nonsense debate about a park bench.
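A rough sketch of how that could look with the open-source openai-whisper package (the filenames and model size are made up; “base” trades accuracy for speed):

```python
# Sketch: transcribe a council meeting recording locally with openai-whisper.
import whisper

model = whisper.load_model("base")
result = model.transcribe("council_meeting.mp3")

# Dump the full transcript so a reporter can skim or search it.
with open("council_meeting.txt", "w") as f:
    f.write(result["text"])
```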
I have autism and ADHD, and have been frustrated throughout my entire life by my inability to realize any of my numerous ideas due to double executive dysfunction. While I see many drawbacks from using these models - the most serious one as it currently stands being their water consumption - I’ve come to consider them a very important support tool for people in a similar position as myself.
I hear you. A lot of times my ideas are just a “vibe”, and starting is the hardest part. I haven’t used AI much at all, but I can see how having a prompt to get you started can get the creative ball rolling.
Do check the vlogbros summary of the AI water issue. TLDR: it’s negligible compared to the real water hog (corn), and being managed.
It’s not going away. The cat is out of the bag.
As with any tool it has its use cases. It’s not a good fit for everything. You can drive a screw with a hammer but a screwdriver works best.
We’re experiencing the capitalist euphoria that happens when something new comes along. This needs to get regulated into submission like all the previous bubbles.
Scientific use on your own massive data sets (think 100s of TB) - Sure
Consumer chatbot uses - May give the illusion of positive results, whereas the long-term outcome is an overall negative effect on the user.
LLMs have their use, there is no doubt about that. I’m in the middle of creating a homebrew campaign for my D&D group, and unfortunately I’m a lousy artist and wanted a few things visualized. Well, I used an image-generating AI to create something that had the visual I wanted. I’m going to use it for my campaign and it will probably just sit on my hard drive after I’m done.
My employer is rolling out AI and is asking us to find places to insert it into our workflows. I am doing that with my team, but none of us are really sure if it will be of any benefit.
The problem right now is we’re at the stage where idiots are convinced it is something that it is not, and they have literally thrown tens of billions of dollars at it. Now… they are staring at the wide abyss between the amount of money they invested and the amount of money people are willing to pay for it.
I’ve seen arguments for and against the presence of an AI bubble… Personally, I think it’s a bubble so large that it will take down several long-established computer industry manufacturers when it pops. Those arguing its absence probably have large investments that they do not want to see fail.
LLMs specifically are great for intermediate use cases. You had a campaign in mind, but needed help with visuals. I was designing a piece of jewelry and had a series of reference images. Fed all those into a VLM and got something closer to my imagination, but still worked with a jeweler to realize the final product.
These tools are best when you have a foundation of knowledge and need a little extra guidance, but fall off when you get to deep expertise. I’ve used them to troubleshoot my server but I already had a basic understanding of how a config should look. I also wouldn’t trust an LLM to properly configure something like crypto for it.
To me, the biggest ethical concerns surround the training and creation of LLMs - stealing artists’ work to train them, energy usage, etc. I suppose in using the models I’m creating ongoing demand for them, so I’m not sure of the answer. The best I’ve seen so far is what Anthropic used to espouse: no new frontier models until we can guarantee safety. And I’d throw in “utility”. Train new models when people are actually using them and clamoring for new use cases, not because a bunch of private equity wants to see the line go up.
For literally everything I’ve vibe coded, the #1 security feature is local-only storage. I trust it naught with security LOL.
No. I want to talk to a living machine mind, not a complexified chatbot controlled entirely by ultrarich techbro overlords.
For sure. You could absolutely create and train a model ethically. It wouldn’t be nearly as useful in many aspects, but it would be gen AI. From an environmental perspective, I guess you could ask yourself the same thing about CPU-intensive gaming. People play games for hours, using as much electricity as a small locally run LLM, often more.
I think it’s gonna fall on its face
Strictly from an environmental perspective, no. This tech generates massive emissions and consumes a large amount of fresh water at a time when both are at critical points. We are going full speed towards a planet inhospitable to human life and the other life we share the planet with.
No, never.
Mostly because it’s illegally trained, a fact that is very often just overlooked. Because, you know, there are no other easy options. Don’t let them keep playing by different rules.
If it truly helps you, I think that might be enough for me. I say truly because you need to use AI responsibly so you don’t ruin yourself. Like, don’t let it think for you. Don’t trust everything it says.
I use it a lot when applying for jobs, something I’ve struggled with on and off for 12 years. I suck at writing cover letters and CVs. It takes me 2-3 days to update a cover letter for a job because it takes so much energy. With AI that is down to 1-2 days.
It’s also great for explaining things in other words, or if you’re trying to look up something that’s hard to search for. I don’t have any examples though.
I used to use it to help me formulate sentences since English isn’t my first language. Now I use Kagi Translate instead.
Yeah, I use it to break up my ADHD mono-sentence paragraphs. I’ll tell it to avoid changing my wording (it can add definitions if it thinks a word is super niche or archaic) but mostly to break things up into more readable sentences and group/reorder sentences as needed for better conceptual flow. It’s actually a pretty good low-level editor.
That’s a great use!
All I ever do with AI is use it to correct my grammar or tone.
I used it this year to write my performance self-review. It successfully turned my usual rambling but valid accomplishments into management friendly synergistic paradigms, saving me the anguish of doing it myself.
Ask programmer bros who work in corporate hell… It’s almost mandatory today if you want to earn money programming.
If you’re in a dev company that doesn’t require AI, it’s just a matter of time.
I think programmers are like 90% responsible for AI’s impact on the environment. I have a friend who works at a big company; they use AI literally everywhere you can imagine, even on Slack to answer other colleagues’ messages. They need to feed huge codebases to the AI for context, and in the end it’s more resource-hungry than generating video or images a few times a day.
It’s not ready for commercial use by the general public.
We see this ALL the time in America - a new disruptive technology emerges. We jump all over the benefits and the profits without regard to consequences or expense. We suffer.
New cheap pesticide? Hell yeah, spray that DDT everywhere, it’s super effective! (Insert other endless examples here, from microplastics to asbestos.)
AI (and information technology in general) has shown itself to be a danger to human beings. Its effects are not felt so much in the short term (5 or 10 years) but generationally. We’ve seen that information technology has already impacted quality of life. It’s used as spyware, as a tool to collect and correlate massive amounts of data. It’s used to shape our media experience, our purchasing, our social circles. There are great things, like online banking. But they seem more and more to be outweighed by a loss of humanity. So much misinformation that I question my own reality some days.
What we call “AI” is the evolution of these obtrusive, coercive practices. It exists purely to replace human thinking skills. I’ve spent a bit of time in r/teachers over the last 15 years, and the stories keep getting worse. The rise of AI means that detecting plagiarism/cheating is exponentially more difficult. But, more importantly, the kids don’t have any stress when it comes to cheating. They don’t have to find a friend or know the bare minimum. They can just…cheat. And they never learn to problem solve or overcome adversity.
None of this matters, though. Ready or not, here we are. A new kind of slavery for a new world order.
You raise many good points, but social media also has benefits and is not all just negative. Same with AI and all tech. We are better off overall with tech despite the downsides which we should be doing a better job of mitigating.
despite the downsides which we should be doing a better job of mitigating.
This is the part where I lose faith. We have failed to mitigate the downsides. In fact, we have encouraged the monetization of the downsides.
No one wants this besides people who lack creativity.
What AI should be doing is learning how to take out my garbage, cut my grass, and do the dishes for me. Not whatever this dystopian bullshit is.