Twitter enforces strict restrictions against external parties using its data for AI training, yet it freely utilizes data created by others for similar purposes.
What if they're using the OpenAI API and don't have a model of their own?
Elon be like: “What if I got OpenAI’s chatbot and disguised it as mine?”
Delightfully ~~devilish~~ disruptive, Elon.
The irony of calling it Grok.
Just scambots eating scams scammed by other scammers, all the way down.
Mistral Dolphin 2.1 said the same to me once. They use GPT-4 for the reinforcement so they don't have to pay humans, and that sentence must slip in there more often than they bother to check for.
I can buy that this was accidental, because that answer is way less direct and relevant than what ChatGPT would provide. The guy asked for malicious code, and Grok described how to avoid getting malicious code.
And then he asks if there's a policy preventing Grok from doing that, and Grok answers with the policy that prevents ChatGPT from providing malicious code. Seems pretty consistently wrong.
AI is looking like the biggest bubble in tech history and stuff like this really ain’t helping.
I think you’re underestimating how much AI is already used in enterprise. It’s got enormous potential and any tech company ignoring it is just shooting themselves in the foot. ChatGPT isn’t the only type of AI.
ChatGPT is the kind of AI that is hyped now. Other kinds of AI (formerly statistics) have been used for decades. Oh you can learn the parameters of some function from the data? Must be AI.
I don't think I am.
The internet had a ton of legitimate uses and potential too, but that didn't prevent the dot-com bubble from bursting.
Not only is AI built on a shaky house of cards of stolen IP and unlicensed writing, artwork, music, and other data, but there are also way too many players in the space, and the amount of investment, in my opinion, goes way beyond the reality of what AI can achieve.
Whether AI is a bubble or not has more to do with the hype economy around it than the technology itself.
E-discovery and market simulation tools have basically been using these sorts of models for a long time. I think "AI" is a misnomer and more of a branding/marketing term, reserved for the latest iteration of these tools. What used to be called "AI" gets a generic term describing its use, and the new thing becomes "AI" until the next significant improvement comes along and it happens again.
The way people think these new language models are going to become a "real" artificial intelligence basically confirms this; it's almost a religious belief. The mythology around this iteration of "AI" is creating hype beyond what it's technically capable of.
The internet is important and useful, but that didn't stop the dot-com bubble from being a thing.