A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.
SellCell only surveyed users with an AI-enabled phone – that’s an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn’t give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users were involved.
Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.
The data listed so far suggests that people just aren’t using AI. Among both iPhone and Galaxy users, fewer than half of those surveyed have even tried AI features – 41.6% for iPhone and 46.9% for Galaxy.
So that’s a majority of users not even bothering with AI in the first place, and a general lack of interest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.
A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.
It’s like the French say: add one drop of wine to a barrel of sewage and you get sewage. Add one drop of sewage to a barrel of wine and you get sewage.
I think it largely depends on what kind of AI we’re talking about. iOS has had models that let you extract subjects from images for a while now, and that’s pretty nifty. Affinity Photo recently got the same feature. Noise cancellation can also be quite useful.
As for LLMs? Fuck off, honestly. My company apparently pays for MS CoPilot, something I only discovered when the garbage popped up the other day. I wrote a few random sentences for it to fix, and the only thing it managed to consistently do was screw the entire text up. Maybe it doesn’t handle Swedish? I don’t know.
One of the examples I sent to a friend is as follows (originally in Swedish):
Microsoft CoPilot is an incredibly poor product. It has a tendency to make up entirely new, nonsensical words, as well as completely mangle the grammar. I really don’t understand why we pay for this. It’s very disappointing.
And CoPilot was like “yeah, let me fix this for you!”
Microsoft CoPilot is a comedy show without a manuscript. It makes up new nonsense words as though were a word-juggler on circus, and the grammar becomes mang like a bulldzer over a lawn. Why do we pay for this? It is buy a ticket to a show where actosorgets their lines. Entredibly disappointing.
Most AIs struggle with languages other than English, unfortunately. I hate how it reinforces the “defaultness” of English.
I guess there’s not much non-English internet to scrape? I’m always surprised how few social media platforms exist outside of the USA. I went looking because I was curious what discourse online would look like without any Americans talking, and the answer was basically “there aren’t any” outside of shit like 2ch.
There are definitely non-American social media platforms and groups and stuff. I’m guessing the same thing keeping you from knowing about them is keeping other Americans from knowing about them.
Maybe but idk what you mean.
I could however use a list if you felt like making one for some rando online.
That’s so beautifully illustrative of what the LLM is actually doing behind the curtain! What a mess.
Yeah, it wonks the tokens up.
I actually really like machine learning. It’s been a fun field to follow and play around with for the past decade or so. It’s the corpo-fascist BS that’s completely tainted it.
99.999% accurate would be pretty useful. There’s plenty of misinformation without AI. Nothing and nobody will be perfect.
Trouble is they range from 0 to 95% accurate depending on the topic and the given context, while being very confident when they’re wrong.
The problem really isn’t the exact percentage, it’s the way it behaves.
It’s trained to never say no. It’s trained to never be unsure. In many cases an answer of “You can’t do that” or “I don’t know how to do that” would be extremely useful. But, instead, it’s like an improv performer always saying “yes, and” then maybe just inventing some bullshit.
I don’t know about you guys, but I frequently end up going down rabbit holes where there are literally zero Google results matching what I need. What I’m looking for is so specialized that nobody has taken the time to write up an indexable web page on how to do it. And that’s fine. So, I have to take a step back and figure it out for myself. No big deal. But Google’s “helpful” AI will helpfully generate some completely believable bullshit. It’s able to take what I’m searching for, match it to something similar, and do some search-and-replace to make it seem like it would work for me.
I’m knowledgeable enough to know that I can just ignore that AI-generated bullshit, but I’m sure there are a lot of other more ~~gullible~~ optimistic people who will take that AI garbage at face value and waste all kinds of time trying to get it working.

To me, the best way to explain LLMs is to say that they’re these absolutely amazing devices that can be used to generate movie props. You’re directing a movie and you want the hero to pull up a legal document submitted to a US federal court? It can generate one in seconds that would take your writers hours. It’s so realistic that you could even have your actors look at it and read from it and it will come across as authentic. It can generate extremely realistic code if you want a hacking scene. It can generate something that looks like a lost Shakespeare play, or an intercept from an alien broadcast, or medical charts that look exactly like what you’d see in a hospital.
But, just like you’d never take a movie prop and try to use it in real life, you should never actually take LLM output at face value. And that’s hard, because it’s so convincing.
I hate that I can no longer trust what comes out of my phone camera to be an accurate representation of reality. I turn off all the AI enhancement stuff, but who knows what kind of fuckery is baked into the firmware.
NO, I don’t want fake AI depth of field. NO, I do not want fake AI “makeup” fixing my ugly face. NO, I do not want AI deleting tourists in the background of my picture of the Eiffel Tower.
NO, I do not want AI curating my memories and reality. Sure, my vacation photos have shitty lighting and bad composition. But they are MY photos and MY memories of something I experienced personally. AI should not be “fixing” that for me.
“PLEASE use our hilariously power-inefficient wrongness machine.”
It is absolutely useless for simple everyday tasks, I find.
Who the fuck needs AI to SUMMARIZE an EMAIL, GOOGLE?
IT’S FIVE LINES
Get out of my face, Gemini!
Or the shitty notification summary. If someone wrote something to me, then it’s important enough for me to read it. I don’t need 3 bullet points with distorted info from AI.
Yahoo was using their shitty AI tool to summarize emails THEN REPLACE THE FUCKING SUBJECT LINES WITH THE SUMMARY!
It immediately hallucinated raffle winners for a sneaker company and iirc they started getting death threats.
Who the fuck needs AI to SUMMARIZE an EMAIL, GOOGLE?
The executives who don’t do any real work, pretend they do (chiefly to themselves), and make ALL of the purchasing decisions despite again not doing any real work.
AI is useless, and I block it any way I can.
This is what happens when companies prioritize hype over privacy and try to monetize every innovation. Why pay €1,500 for a phone only to have basic AI features? AI should solve real problems, not be a cash grab.
Imagine if AI actually worked for users:
- Show me all settings to block data sharing and maximize privacy.
- Explain how you optimized my battery last week and how much time it saved.
- Automatically silence spam calls without selling my data to third parties.
- Detect and block apps that secretly drain data or access my microphone.
- Automatically organize my photos by topic without uploading them to the cloud.
- Do everything I could do with Tasker just by saying it in plain words.
Do everything I could do with Tasker just by saying it in plain words.
Stop, I can only get so hard.
“Stop trying to make ~~fetch~~ AI happen. It’s not going to happen.”

AI is worse than adding no value; it is an actual detriment.
I feel like I’m in those years of “You really want a 3D TV, right? Right? 3D is what you’ve been waiting for, right?” all over again, but with a different technology.
It will be VR’s turn again next.
I admit I’m really rooting for affordable, real-world, daily-use AR though.
I like 3D; too bad it barely had any content, even back in its day.
I wonder if it has anything to do with the fact that it’s useless.
I don’t think it’s meant to be useful… for us, that is. Just another tool to control and brainwash people. I already see a segment of the population trusting corporate AI as an authority figure in their lives. Now imagine kids growing up with AI and never knowing a world without it. Thinking of people who have memories of the time before the internet is a good way to relate/empathize, at least I think so.
How could it not be this way? Algorithms trained people. They’re trained to be fed info from the rich and never seek anything out on their own. I’m not really sure if the corps did it on purpose or not, at least at first; it was just the pursuit of money until powerful realizations were made. I look at the declining quality of Google/YouTube search results. It’s as if they’re discouraging seeking out information on your own, subtly pushing the path of least resistance back to the algorithm, or now perhaps to a potentially much more sinister “AI” LLM chatbot. Or I’m fucking crazy, you tell me.
Like, we say “dead internet,” except… nothing is actually stopping us from ditching corporate websites and just going back to smaller, privately owned or donation-run forums.
That’s a big part of why I’m happy to be here on the newfangled fediverse. Even if it hasn’t exploded in popularity, at least it has like-minded people, or you wouldn’t be here.
Check out debate boards. They’re full of morons using ChatGPT to speak for them, and they’ll both openly admit it and get mad at you for calling it dehumanizing and disrespectful.
/tinfoil hat
Edit to add more “old man yells at clouds(ervers)” detail, apologies. Kinda chewing through these complex ideas on the fly.
Good point…
AI is a waste of time for me; I don’t want it on my phone, I don’t want it on my computer, and I block it every time I have the chance. But I might be old-fashioned, in that I don’t like algorithms recommending anything to me either. I never cared what the all-seeing machine has to say.
I do not need it, and I hate how it’s constantly forced upon me.
Current AI feels like the Metaverse. There’s no demand or need for it, yet they’re trying their damnedest to shove it into anything and everything, like it’s a new miracle answer to every problem, even problems that don’t exist yet.
And all I see it doing is making things worse. People use it to write essays in school; that just makes them dumber, because they don’t have to show they understand the topic they’re writing about. And considering AI doesn’t exactly have a flawless record when it comes to accuracy, relying on it for anything is just not a good idea currently.
If they write essays with it and the teacher is not checking their actual knowledge, the teacher is at fault, not the AI. AI is literally just a tool, like a pen or a ruler in school. Except much, much bigger and much, much more useful.
It is extremely important to teach children how to handle AI properly and responsibly, or else they will be fucked in the future.
I agree it is a tool, and they should be taught how to use it properly, but I disagree that it is like a pen or a ruler. It’s more like a GPS or a Roomba. Yes, they are tools that can make your life easier, but it’s better to learn how to read a map and operate a vacuum or a broom than to be taught to rely on the tool doing the hard work for you.
You are sincerely advocating for teaching how to read a physical map? When will you ever need that, outside of a zombie apocalypse?
It might be good to teach them this skill additionally, for the sake of brain development. But we should stay in reality and not replace real tools with obsolete ones in education, because children should be prepared for the real world and not for some world that does not exist (anymore).
It’s the same reason I find it ridiculous how much children are cushioned to the brim and denied a view of the real world for 17 years and ~355 days in the US system. As soon as they turn 18, they start to see the real world, and they are not at all prepared for the surprise.
You are sincerely advocating for teaching how to read a physical map? When will you ever need that, outside of a zombie apocalypse?
I strongly advocate it; it’s a basic skill. Like simple math, reading and writing, balancing a budget, and cooking, being able to read a map is a necessary basic skill.
Maps aren’t obsolete. GPS literally works off of the existence of maps. Trying to claim maps are obsolete is like saying that cooking food at home is obsolete because you can order delivery.
My kid’s school just did a survey, and part of it included questions about teaching technology, with a big focus on the use of AI. My response was “No,” full stop. They need to learn how to do traditional research first so that they can spot-check the error-ridden results generated by AI. Damn it, school, get off the bandwagon.
I say this as an education major and former teacher. That being said, please keep fighting your PTA on this.
We didn’t get actually useful information in high school, partially because our parents didn’t think there was anything wrong with the curriculum.
I’m absolutely certain that there are multiple subjects you might not have skipped out on if you’d had any idea that civics, shop, home economics, and maybe accounting were going to be the closest classes to “real-world skills that all non-college-educated people still need to know.”
I regret not taking shop and home economics. Filing taxes and balancing checkbooks would be good skills to learn also.
I suppose that’s exactly what they should be teaching.
And what exactly is the difference between researching shit sources on the plain internet and getting the same shit via an AI, except that manually it takes 6 hours and with AI it takes 2 minutes?
I think the fact someone would need to explain this to you makes it pointless to try and explain it to you. I can’t tell whether you’re honestly asking a question or just searching for a debate to attempt to justify your viewpoint.
You’re implying there are trusted sources; I am saying there are no trusted sources whatsoever, and you should doubt every source equally. So, who’s the one not understanding some principle?
A damning result for AI pump and dump scammers.
Every NVDA earnings call, lol. Old man Jensen had a (chip) farm, AI-AI-O! The guy literally said “AI” almost 100 times in one call.
Sounds like corporate right now. Had a meeting earlier and it wasn’t even focused on AI, but I heard it enough times to make my ears bleed.
Do I use Gen AI extensively?…
No. But do I find it useful?…
Also no.
lol
Unless it can be a legit personal assistant, I’m not actually interested. Companies hyped AI way too much.
Seems like they hype it to themselves more than to the customers they tried to force-feed it to.
I hate that nowadays AI == LLM/chatbot.
I love the AI classifiers that keep me safe from spam or help me categorise pictures. I love the AI-based translators that allow me to write in virtually any language almost like a native speaker.
What I hate are these super-advanced stochastic parrots that manage to pass the Turing test, so people assume they think.
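To illustrate the difference: the spam filters and photo categorisers mentioned above are tiny, task-specific models rather than chatbots. Here is a minimal sketch of that kind of classifier, with toy messages and labels invented purely for illustration, using scikit-learn’s TfidfVectorizer and MultinomialNB:

```python
# Minimal sketch of a task-specific "AI": a naive Bayes spam classifier.
# The training data below is invented for illustration; a real filter would
# be trained on thousands of labelled messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "WIN a FREE iPhone now, click here!!!",
    "Your invoice for March is attached.",
    "Congratulations, you have been selected for a prize.",
    "Meeting moved to 14:00 tomorrow.",
]
labels = ["spam", "ham", "spam", "ham"]

# Bag-of-words features plus naive Bayes: no text generation, no chat,
# just an estimate of whether a message belongs to the "spam" class.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["Click here to claim your free prize"]))  # likely ['spam']
```

Unlike an LLM, a model like this can only ever answer the one narrow question it was trained on, which is exactly why it is easy to trust within that scope.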
I am pretty sure that if they asked specifically about LLMs/chatbots, the percentage of people not caring would be even higher.
The AI present on Apple and Samsung phones is indeed useless.
They have small language models that summarise notifications and rewrite your messages and emails. Those are pretty useless.
Image-editing AI that removes unwanted people from your photos has some use.
However, top AI tools like deep research and Cursor, which millions of developers are using to assist with coding, are objectively very useful.