US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.
The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public think AI will be good for them, while nearly half anticipate being personally harmed by it.
I mean, it hasn’t thus far.
I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is dry because no one wants to enter the career with companies hanging AI over everyone’s heads. Basic supply and demand says my skillset will become more valuable.
Someone will need to clean up the AI slop. I’ve already been in similar positions where I was brought in to clean up code bases that failed after being outsourced.
AI is simply the next iteration. The problem is always the same: the business doesn’t know what it really wants and needs, and has no ability to assess what has been delivered.
I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, without a concerted focus on enhancing logic, the promise of AGI will likely remain elusive. AI cannot really develop without the logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models, at least when it comes to coding.
I would argue that if we had much better logic with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.
If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren’t it.
They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.
Those layers are basically all the previous AI techniques laid over the top of an LLM, but anyone who has a basic understanding of languages can tell you how illogical they are.
New technologies are not the issue. The problem is billionaires will fuck it up because they can’t control their insatiable fucking greed.
Exactly. We could very well work fewer hours for the same pay. We wouldn’t be as depressed and angry as we are right now.
we just have to overthrow, what, like 2000 people in a given country?
Just about every major advance in technology like this enhanced the power of the capitalists who owned it and took power away from the workers who were displaced.
It’s just going to help industry provide inferior services and make more profit. Like AI doctors.
I agree. While there are some advantages, of course, I am 100% certain that in the aggregate, it will make people more stupid and gullible.
It is sort of obvious when you engage with the thought and follow it to its natural conclusion:
For once, most Americans are right.
They’re right. What happens to the workers when they’re no longer required? The horses faced a similar issue at the advent of the combustion engine. The solution? Considerably fewer horses.
But as for the people who worked with horses, I’m pretty sure they found different jobs - it’s not like they were sent to a glue factory.
Of course, they learned to code.
And became influencers
So far AI has only aggravated me by interrupting my own online activities.
First thing I do is disable it
I wish it was optional. When I do a search, the AI response is right at the top. If I want AI advice, I’ll go ask AI. I don’t use a search engine to get answers from AI!
I imagine you could filter it out with uBlock, right?
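If the AI box has a stable CSS selector, yes. uBlock Origin’s cosmetic filters can hide it; a minimal sketch (the selector below is hypothetical, since the real one varies by search engine and changes often):

```
! Hypothetical example: hide an AI-overview container on a results page.
! Syntax is hostname##CSS-selector (uBlock Origin cosmetic filter).
www.google.com##div[data-attrid="ai-overview"]
```

Paste it under Dashboard → My filters. In practice, uBlock’s element picker (the eyedropper icon) will generate the correct selector for whatever container the page actually uses.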
I don’t believe AI will ever be more than essentially a parlor trick that fools you into thinking it’s intelligent, when it’s really just a more advanced tool, like Excel compared to pen and paper or an abacus.
The real threat will be people who fool themselves into thinking it’s more than that and that its word is law, like a deity’s. Or worse, the people who do understand that but, like the various religious and political leaders who used religion to manipulate people, the new AI popes will try to do the same manipulation, just with AI.
“I don’t believe AI will ever be more than essentially a parlor trick that fools you into thinking it’s intelligent.”
So in other words, it will achieve human-level intellect.
Most people in the early 90’s didn’t have or think they needed a computer.
How did those barbarians sit on the toilet without memes to scroll?
That was the job of Reader’s Digest.
I thought Reader’s Digest was for when the roll ran out.
And if you’re desperate, the back of a shampoo bottle
I need someone to bitch at anonymously too
’80s. In the ’80s we had Apple IIs, Commodores, Tandys, IBM PCs, etc. In the ’90s it was cell phones.
I’m not saying people didn’t have them at all. The majority of families absolutely did not until the very late ’90s. Many more people use AI now than had computers back then.
All it took was for us to destroy our economy using it to figure that out!
Maybe that’s because every time a new AI feature rolls out, the product it’s improving gets substantially worse.
Maybe that’s because they’re using AI to replace people, and the AI does a worse job.
Meanwhile, the people are also out of work.
Lose - Lose.
Even if you’re not “out of work”, your work becomes more chaotic and less fulfilling in the name of productivity.
When I started 20 years ago, you could round out a long day with a few hours of mindless data entry or whatever. Not anymore.
A few years ago I could talk to people, or maybe even write a nice email communicating a complex topic. Now ChatGPT writes the email and I check it.
It’s just shit, honestly. I’d rather weave baskets and die at 40 of a tooth infection than spend an additional 30 years wallowing in self-loathing and despair.
30 years ago I did a few months of 70 hour work weeks, 40 doing data entry in the day, then another 30 stocking grocery shelves in the evening - very different kinds of work and each was kind of a “vacation” from the other. Still got old quick, but it paid off the previous couple of months’ travel / touring with no income.
It didn’t even need to take someone’s job. A summary of an article or paper with hallucinated information isn’t replacing anyone, but it’s definitely making search results worse.
Maybe it’s because the American public are shortsighted idiots who don’t understand concepts like how future outcomes are based on present decisions.
“Everyone else is an idiot but me, I’m the smartest.”
lmao ok guy
60 million Americans just went to the polls 4 months ago homie. It ain’t about me.
There’s a hell of a lot more Americans than 60 million.
Est. 346.8 million, according to Gemini and ChatGPT. 😂
Bruh, what the fuck are you even on about? AI shouldn’t be in everything just because; it needs to be reliable and fill a legit need.
🤡
Yeah, maybe if your present decisions were smarter, you would be even smarter in the future and could agree with his incredibly smart argument. Make better present decisions.
LLMs can’t reliably deliver what they promise, and AGI based on them won’t happen. So what are you talking about?
Maybe if a service isn’t ready to be used by the public you shouldn’t put it in every product you make.
Shut up nerd
I use it at work side-by-side with searches for debugging app issues.
The first thing seen at the top of WhatsApp now is an AI query bar. Who the fuck needs anything related to AI on WhatsApp?
Android Messages and Facebook Messenger also pushed in AI as ‘something you can chat with’
I’m not here to talk to your fucking chatbot I’m here to talk to my friends and family.
“Who the fuck needs anything related to AI on WhatsApp?”
Lots of people. I need it because it’s how my clients at work prefer to communicate with me, and it’s also how all my family members and friends communicate.
Right?! It’s literally just a messenger, honestly, all I expect from it is that it’s an easy and reliable way of sending messages to my contacts. Anything else is questionable.
There are exactly zero good reasons to use WhatsApp anyway…
Yes, there are. You just have to live in one of the many, many countries in the world where the overwhelming majority of the population uses WhatsApp as their communication app. Like my country. Where not only friends and family, but also businesses and government entities use WhatsApp as their messaging app. I have at least a couple hundred reasons to use WhatsApp, including all my friends, all my family members, and all my clients at work. Do I like it? Not really. Do I have a choice? No. Just like I don’t have a choice about using Gmail, because that’s the email provider the company I work for decided to go with.
SMS works fine in any country.
And you can isolate your business requirements from your personal life.