350,000 servers? Jesus, what a waste of resources.
just capitalist markets allocating resources efficiently where they’re needed
It’s a brand new, highly competitive technology, and ChatGPT has first-mover status with a trailer load of capital behind it. They are going to burn a lot of resources right now to innovate quickly, reduce latency, etc. If they reach a successful product-market fit, getting costs down will eventually be critical to it actually being a viable product. I imagine they will pipe this back into ChatGPT for some sort of AI-driven scaling solution for their infrastructure.
TL;DR - It’s kind of like how a car uses most of its resources going from 0-60 and then efficiencies kick in at highway speeds.
Regardless, I don’t think they will have to worry about being profitable for a while. With the competition heating up, I don’t think there is any way they don’t secure another round of funding.
Facebook is trying to burn the forest around OpenAI and other closed models by releasing its own models freely to the community, removing the market for models as standalone products. A lot of money is already pivoting towards companies trying to build products that use the AI instead of the AI itself. Unless OpenAI pivots to something more substantial than just providing multimodal prompt completion, they’re gonna find themselves without a lot of runway left.
TL;DR - It’s kind of like how a car
Yes. It’s an inefficient and unsustainable con that’s literally destroying the planet.
Are you 14?
If they run out of money (unlikely), they still have a recent history with Microsoft.
Sounds like we’re going to get some killer deals on used hardware in a year or so
Totally not a bubble though.
PLEASE!!!
I do expect them to receive more funding, but I also expect that to be tied to price increases. And I feel like that could break their neck.
In my team, we’re doing lots of GenAI use-cases and far too often, it’s a matter of slapping a chatbot interface onto a normal SQL database query, just so we can tell our customers and their bosses that we did something with GenAI, because that’s what they’re receiving funding for. Apart from these user interfaces, we’re hardly solving problems with GenAI.
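For the curious, here is roughly what that pattern boils down to. This is a minimal sketch, assuming an OpenAI-style client and a read-only SQLite database; the schema, model name, and file path are all hypothetical:

```python
# Hypothetical sketch of the "chatbot on a SQL query" pattern: the LLM
# only translates the user's question into SQL; the actual answer comes
# from a plain database query.
import sqlite3
from openai import OpenAI

SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, placed_at TEXT);"
client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask(question: str) -> list:
    prompt = (
        f"Schema:\n{SCHEMA}\n"
        f"Write one read-only SQLite SELECT statement answering: {question}\n"
        "Return only the SQL."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    sql = resp.choices[0].message.content.strip().strip("`")
    conn = sqlite3.connect("file:shop.db?mode=ro", uri=True)  # read-only for safety
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```

Which is also why the cost question is so uncomfortable: a fixed, parameterized SELECT behind a regular form would answer the same questions without paying per token.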
If the operating costs go up and management starts asking what the pricing for a non-GenAI solution would look like, I expect the answer to be rather devastating for most use-cases.
Like, there’s maybe still a decent niche, in that developing a chatbot interface is likely cheaper than building a traditional interface, so maybe new projects might start out with a chatbot interface and later get a regular GUI to reduce operating costs. And of course, there is the niche of actual language processing, for which LLMs are genuinely a good tool. But yeah, it’s going to be interesting to see how many real-world use-cases remain once the hype dies down.
It’s also worth noting that smaller models work fine for these types of use cases, so it might just make sense to run a local model at that point.
Good. It’s fake crap tech that no one needs.
I hope so! I am so sick and tired of AI this and AI that at work.
AI stands for artificial income.
The start(-up?)[sic] generates up to $2 billion annually from ChatGPT and an additional $ 1 billion from LLM access fees, translating to an approximate total revenue of between $3.5 billion and $4.5 billion annually.
I hope their reporting is better than their math…
Maybe they also added 500M for stuff like Dall-E?
Good point - I guess it could have easily fallen out while being edited, too
I see Scott Steiner has a hold of their calculator…
Last time a batch of these popped up it was saying they’d be bankrupt in 2024 so I guess they’ve made it to 2025 now. I wonder if we’ll see similar articles again next year.
For anyone doing a serious project, it’s much more cost effective to rent a node and run your own models on it. You can spin them up and down as needed, cache often-used queries, etc.
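For the caching part, something like this is often enough. A rough sketch, assuming the rented node runs an OpenAI-compatible server such as the one llama.cpp ships; the endpoint URL and model name are made up:

```python
# Hypothetical sketch: cache repeated prompts in front of a self-hosted,
# OpenAI-compatible endpoint so often-used queries cost nothing extra.
import hashlib
import requests

ENDPOINT = "http://my-rented-node:8080/v1/chat/completions"  # hypothetical
_cache: dict[str, str] = {}

def complete(prompt: str, temperature: float = 0.0) -> str:
    # Deterministic settings (temperature 0) make caching meaningful.
    key = hashlib.sha256(f"{temperature}:{prompt}".encode()).hexdigest()
    if key not in _cache:
        resp = requests.post(ENDPOINT, json={
            "model": "local-model",  # hypothetical
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,
        }, timeout=120)
        resp.raise_for_status()
        _cache[key] = resp.json()["choices"][0]["message"]["content"]
    return _cache[key]
```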
For sure, and in a lot of use cases you don’t even need a really big model. There are a few niche scenarios where you require a large context that’s not practical to run on your own infrastructure, but in most cases I agree.
I hope not, I use it a lot for quick programming answers and prototypes, and for theory in my actuarial science MBA.
I find you can just run local models for that. For example, I’ve been using gpt4all with the phind model and it works reasonably well
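In case it helps anyone, this is roughly all it takes with the gpt4all Python bindings. The GGUF filename is a placeholder for whatever Phind-style model you have downloaded:

```python
# Minimal gpt4all usage; the model filename is hypothetical and should
# point at a GGUF file you already have (or one gpt4all can download).
from gpt4all import GPT4All

model = GPT4All("phind-codellama-34b-v2.Q4_0.gguf")
with model.chat_session():
    print(model.generate("Write a Python function that reverses a string.",
                         max_tokens=200))
```

It runs on CPU too, just slowly; if gpt4all supports your GPU you can pass device="gpu" to the constructor.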
How much computing power do they need? My PC is pretty old :/
If you have a GPU it should work, though it will likely take longer to generate answers than an online service.
I use it all the time for work especially for long documents and formatting technical documentation. It’s all but eliminated my removed work. A lot of people are sour on AI because “it’s not going to deliver on generative AI etc etc” but it doesn’t matter. It’s super useful and we’ve really only scratched the surface of what it can be used for.
I also think we just need to find the use cases where it works.
While it will not solve everything, it did solve some things. Like you have found, I have used it for generating simple artwork for internal documents that would never get design funding (and even if they did, I would have spent much more time dealing with a designer), rewriting sentences so they sound better, grammar checking, as a quick search engine and encyclopedia, copywriting some unimportant texts…
I would pay a few bucks per month if it wasn’t free. I gave that much to Grammarly and I barely use it.
So I guess the next step is just reducing the cost of running those models, which is not that hard, as we can see from the open-source space.
Is 1) the fact that an LLM can be indistinguishable from your original thought and 2) an MBA (lmfao) supposed to be impressive?
I don’t think that person is bragging, just saying why it’s useful to them