Introduction to the GPU Shortage
The ongoing GPU shortage has significantly impacted OpenAI’s plans for the rollout of its latest model, GPT-4.5. Graphics Processing Units (GPUs) are essential for training language models, processing large amounts of data, and executing complex calculations efficiently. Unlike Central Processing Units (CPUs), GPUs excel at parallel processing, which makes them far better suited to AI workloads.
Understanding OpenAI’s Current Challenges
OpenAI CEO Sam Altman recently posted that hundreds of thousands of new GPUs are on the way, promising they will be put to use quickly. For now, GPT-4.5 is available only to subscribers of the $200-per-month ChatGPT Pro tier. Once GPU capacity is replenished, however, ChatGPT Plus subscribers paying $20 a month will gain access as well.
Long-Term Solutions and Model Costs
Dealing with the GPU shortage isn’t just a matter of acquiring chips. OpenAI is also exploring developing its own processors to reduce its dependence on suppliers such as Nvidia and their fluctuating inventory. Running GPT-4.5 is notably expensive: API pricing is $75 per million input tokens and $150 per million output tokens, a stark contrast to GPT-4, whose rates are significantly lower. Altman emphasizes that while GPT-4.5 is not a reasoning model, it offers a distinct kind of intelligence that may change how users interact with it.
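To put those per-million-token prices in concrete terms, here is a minimal sketch of the cost arithmetic. The rates are the ones cited above; the token counts in the example are hypothetical, and `estimate_cost` is an illustrative helper, not part of any OpenAI SDK.

```python
# Rough cost estimate for GPT-4.5 API usage at the rates cited above.
# Prices are dollars per one million tokens.
INPUT_PRICE_PER_M = 75.00    # $75 per 1M input tokens (as cited)
OUTPUT_PRICE_PER_M = 150.00  # $150 per 1M output tokens (as cited)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical example: a 2,000-token prompt with a 1,000-token reply
print(f"${estimate_cost(2_000, 1_000):.2f}")  # → $0.30
```

Even a modest exchange costs tens of cents, which helps explain why access is gated behind the priciest subscription tier while GPUs remain scarce.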