According to OpenAI CEO Sam Altman, the company’s latest model, GPT-4.5, has to be released in phases since it is “out of GPUs.”
In a post on X, Altman described GPT-4.5 as “giant” and “expensive,” saying the company will need “tens of thousands” more GPUs before more ChatGPT users can access the model. ChatGPT Pro subscribers will get GPT-4.5 first, beginning Thursday, with ChatGPT Plus users following the week after.
GPT-4.5 is quite costly, possibly because of its massive size. OpenAI charges $75 for every million tokens (around 750,000 words) fed into the model and $150 for every million tokens it produces. That is 30 times the input cost and 15 times the output cost of OpenAI’s flagship GPT-4o model.
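Those multiples follow directly from the per-token rates. A minimal sketch of the arithmetic, assuming GPT-4o’s published API rates of $2.50 per million input tokens and $10 per million output tokens (the GPT-4.5 rates are the ones reported above):

```python
# GPT-4.5 rates as reported in the article (USD per 1M tokens)
GPT45_INPUT = 75.0
GPT45_OUTPUT = 150.0

# Assumed GPT-4o rates at the time (USD per 1M tokens)
GPT4O_INPUT = 2.50
GPT4O_OUTPUT = 10.0

input_ratio = GPT45_INPUT / GPT4O_INPUT      # how many times pricier per input token
output_ratio = GPT45_OUTPUT / GPT4O_OUTPUT   # how many times pricier per output token

print(f"Input: {input_ratio:.0f}x, Output: {output_ratio:.0f}x")
# → Input: 30x, Output: 15x
```

The asymmetry matters in practice: a prompt-heavy workload (long documents in, short answers out) is hit hardest, since the input-side markup is double the output-side one.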
“We’ve been growing a lot and are out of GPUs,” Altman wrote. “We will add tens of thousands of GPUs next week and roll it out to the Plus tier then … This isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.”
Altman has previously said that a shortage of computing capacity has delayed the company’s products. OpenAI intends to address this in the coming years by designing its own AI chips and building out a vast network of data centers.