ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future::AI for the smart guy?
Article talks about the potential of AI cannibalism, where it is now learning from data that it (or other AI) has generated.
Does ChatGPT use modern data? I was under the impression that its most recent dataset was a few years old.
ChatGPT doesn't ingest any new data points as it runs, but newer AI models — or ChatGPT itself, if it gets a new round of training — will definitely be influenced by the AI-generated works that have appeared over the past year.
The real event that sets us on the path toward Idiocracy.
Removed by mod
You are using a free version.
ChatGPT-4 and the free Bing chat (which runs on ChatGPT) use recent data.
It’s getting worse based on the feedback, unfortunately. The need for safety, and the lack of meaningful deliberation about how AI companies should operate and what should and should not be done, has led Sam and co. to be indecisive about doing anything. Alongside that, the “morality” of the thing has been hijacked, which has let other AIs perform better… led by ex-employees of OpenAI, with actual bound morals and not inherently relying on user input to train future models. This will be the path forward; this will lead to safe and controlled integration.
I guess at the core of this, we are afraid of ourselves. We are afraid that the worst of humanity outpaces the better parts, that the inputs and training aren’t altruistic but are more pointedly “bad” or “wrong”, and thus “harmful”, whether through misinformation, lies, or fabrications.
I hope we find a way to do better. I’m still excited for the future of AI. I mean, crap, I’m closer to having a family doctor that’s a robot than I am to a real human doctor.
I guess at the core of this, we are afraid of ourselves. We are afraid that the worst of humanity outpaces the better parts, that the inputs and training aren’t altruistic but are more pointedly “bad” or “wrong”, and thus “harmful”, whether through misinformation, lies, or fabrications.
Is there any reason not to be afraid? I think you could say that Tay was essentially the same idea a few years back and it took like 48 hours loose on the internet for it to spout literal Nazi (1930s-40s German NSDAP) rhetoric. Besides that being a PR disaster - if “AI” is only getting stronger and more integrated into human life and society, that can be pretty problematic.
As long as it continues to do my resumes for me that’s all I need lol.
Removed by mod
They could make it paid only today, and it’d be instantly profitable. Most free users would transition to a free alternative, but the corporate world would easily pay for use. So would some power users. But I’m sure they are making good money with all the API use anyways, the free access is a cheap way to get mass testing and training data.
Back in my day, we used to call ‘prompt engineering’ ‘asking a question’.
They’ve got to have special terminology because what they do is oh so special. Some AI users act like they’re Louise Banks from the movie Arrival cracking the code to an alien language or something. And I don’t think it’s far-fetched to assume they’re often the same breed who had NFT monkeys as their Twitter pfp about 18 months ago.
Blockchain > Crypto > NFTs > LLMs > whatever’s next.
These people will always be sniffing around for the next big thing to oversell and fleece their audience.
When I think of “prompt engineering”, I think more of stuff like this paper.
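The paper itself isn’t linked in this scrape, but the kind of thing prompt-engineering papers tend to study is structured prompting — for example, few-shot chain-of-thought prompting, where worked examples with their reasoning are prepended to the real question. A minimal sketch (the example content and function name here are illustrative, not from any specific paper):

```python
# Sketch of "prompt engineering" in the research sense: assembling a
# few-shot chain-of-thought prompt. Each worked example includes its
# reasoning steps, nudging the model to reason before answering.

FEW_SHOT_EXAMPLES = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many does he have now?",
        "reasoning": "He starts with 5. Two cans of 3 is 6 balls. 5 + 6 = 11.",
        "answer": "11",
    },
]

def build_cot_prompt(question: str) -> str:
    """Assemble a chain-of-thought prompt: worked examples, then the real question."""
    parts = []
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Q: {ex['question']}\nA: {ex['reasoning']} The answer is {ex['answer']}."
        )
    # Trailing cue that invites step-by-step reasoning for the new question.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

print(build_cot_prompt("A farmer has 3 fields with 7 sheep each. How many sheep in total?"))
```

The point being: this is measurable prompt structure studied in papers, not the “asking a question, but make it sound mystical” variety.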
It’s more than that, because half the time it doesn’t even answer the question.
When reality catches up to marketing
I feel like it is still too early to talk about “AI cannibalization” or “feedback loops”, since that would mean a big proportion of the training data is itself AI-generated content, compared with everything else that can be scraped off the internet or taken from the public domain. I don’t think this is happening yet.
What people might experience instead, and perceive as dumbness, is something else. The datasets used to train AIs can’t really change that much in a short time (unless we wait another hundred years for humans to produce enough original content to train on again), and the mathematical models used to build answers from those datasets are pretty much the same. So a person talking with ChatGPT will, over time, increasingly perceive that the answers are built using a “pattern” or a “structure” — that is, the model derived from feeding the dataset into the training itself.
Just my two pennies on this. Let’s also consider that it is in human nature to be excited about something new that sounds cool, and then to get bored once you’ve grown accustomed to it and pushed it to its boundaries.
Surely the rampant server issues are a big part of that.
OpenAI have been shitting the bed over the last 2 weeks with constant technical issues during the workday for the web front end.
Why is it relevant what Peter Yang - Roblox product lead and enthusiastic child labor exploiter - tweets about it? Let me guess he’s a prompt engineer?
deleted by creator
The people who complain about how they no longer can get answers on how to eliminate juice in the style of Hitler are people who are - to be honest - completely missing the point of this revolution.
ChatGPT is the biggest developer productivity booster I have ever seen and I spend so much more time writing valuable code. Less time spent debugging, less time spent reviewing, etc. means more time for development of things that matter.
Every tech company that saw massive growth over the past 10-15 years has just received a new toy which will multiply their developers’ output. There will be a clear difference between companies who manage to do this well and those who won’t.
It’s irrelevant if I can get ChatGPT to write a poem about poop or not. That’s not the goal of this tool.
I’m a developer and have used ChatGPT pretty extensively over the last few months.
Whenever I give it a programming task that’s more complicated than what you’d see in a “from zero to job in two weeks” bootcamp, it completely fails, and babysitting it through fixing all of the issues takes longer than writing it myself in the first place.
It definitely got more stupid. I stopped paying for plus because the current GPT4 isn’t much better than the old GPT3.5.
If you check downdetector.com, it’s obvious why they did this. Their infrastructure just couldn’t keep up with the full size models.
I think I’ll get myself a proper GPU so I can run my own LLMs without worrying that they could stop working for my use case.
GPT-4 needs a cluster of around 100 server-grade GPUs that cost more than $20k each; I don’t think you have that lying around at home.
I don’t, but a consumer card with 24GB of VRAM can run a model that’s about as powerful as the current GPT3.5 in some use cases.
And you can rent some of that server-grade hardware for a short time to do fine-tuning, which lets you surpass even GPT4 in some niches.
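A rough back-of-envelope supports the 24 GB claim: weight memory is roughly parameter count times bytes per parameter, plus some overhead for activations and the KV cache. The overhead factor and 4-bit figure below are illustrative assumptions, not exact numbers for any specific model:

```python
# Back-of-envelope VRAM estimate for running a quantized LLM locally.
# Rule of thumb: weights take (params x bits/8) bytes; multiply by a
# fudge factor for activations and KV cache. Assumed numbers, not specs.

def estimate_vram_gb(params_billions: float, bits_per_param: float,
                     overhead_factor: float = 1.2) -> float:
    """Rough VRAM needed in GB (1 GB = 1e9 bytes)."""
    weight_bytes = params_billions * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead_factor / 1e9

# A 13B model in 16-bit floats does NOT fit on a 24 GB card:
print(estimate_vram_gb(13, 16))  # ≈ 31.2 GB
# ...but 4-bit quantization brings it well under:
print(estimate_vram_gb(13, 4))   # ≈ 7.8 GB
# Even a 33B model at 4 bits squeezes onto a 24 GB card:
print(estimate_vram_gb(33, 4))   # ≈ 19.8 GB
```

This is why quantized 13B-33B models are the sweet spot for consumer cards, while a GPT-4-scale model genuinely needs a multi-GPU cluster.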
I was talking about it a month ago - others made fun of me… 😂
I think you’ve nailed it though. We are very well versed in documenting the details of such atrocities; we don’t pay the same tribute to the good done by humanity. And this is certainly evidence that just letting an AI loose without clear and fixed “morals” is a bad idea.