Let's analyze OpenAI's claims about ChatGPT's energy use

Sam Altman has just shared concrete figures for the electricity and water use of ChatGPT queries. According to his blog post, each ChatGPT query consumes about 0.34 Wh of electricity (0.00034 kWh) and about 0.000085 gallons of water. That is roughly what a high-efficiency lightbulb uses in a couple of minutes, and about one fifteenth of a teaspoon of water.
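As a quick sanity check on those comparisons, here is the arithmetic spelled out. The 10 W bulb wattage and the teaspoon volume are my own illustrative assumptions, not numbers from OpenAI:

```python
# Unit check on the per-query figures (bulb wattage and teaspoon
# volume are illustrative assumptions, not from OpenAI).
WH_PER_QUERY = 0.34
LED_BULB_W = 10                            # typical high-efficiency LED bulb
minutes = WH_PER_QUERY / LED_BULB_W * 60   # ≈ 2 minutes of light

GALLONS_PER_QUERY = 0.000085
ML_PER_GALLON = 3785.41
TEASPOON_ML = 4.93
teaspoon_fraction = GALLONS_PER_QUERY * ML_PER_GALLON / TEASPOON_ML
print(f"{minutes:.1f} min of light, 1/{round(1 / teaspoon_fraction)} of a teaspoon")
```

Both comparisons check out against the disclosed figures.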
This is the first time OpenAI has shared such a figure publicly, and it adds an important data point to ongoing discussions about the environmental impact of large AI models. The announcement sparked a broad debate, both supportive and skeptical. In this post I analyze the claim and the reactions to it on social media, looking at the arguments on both sides.
What supports the 0.34 Wh claim?
Let's look at the arguments that lend credibility to OpenAI's figure.
1. Independent estimates align with OpenAI's number
An important reason some consider the figure credible is that it aligns closely with previous estimates. In early 2025, the research institute Epoch AI estimated that one GPT-4o query consumes about 0.0003 kWh, closely matching OpenAI's own figure. That estimate assumed GPT-4o uses on the order of 100 billion active parameters and an average response length of 500 tokens. However, Epoch only accounted for the power drawn by the GPU servers themselves and did not include overheads such as cooling and other data center infrastructure.
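For illustration, here is a rough sketch of how an estimate in this style can be built from first principles. All hardware numbers below (GPU model, utilization, power draw) are my own assumptions, not Epoch AI's exact inputs:

```python
# Back-of-envelope inference energy estimate in the style of Epoch AI.
# Every constant here is an assumption for illustration.
ACTIVE_PARAMS = 100e9          # assumed active parameters for GPT-4o
OUTPUT_TOKENS = 500            # assumed average response length
FLOPS_PER_TOKEN = 2 * ACTIVE_PARAMS   # ~2 FLOPs per parameter per token
GPU_PEAK_FLOPS = 989e12        # Nvidia H100 dense BF16 peak throughput
UTILIZATION = 0.10             # assumed effective utilization during decoding
GPU_POWER_W = 700              # H100 board power

seconds = OUTPUT_TOKENS * FLOPS_PER_TOKEN / (GPU_PEAK_FLOPS * UTILIZATION)
wh = GPU_POWER_W * seconds / 3600
print(f"~{wh:.2f} Wh per query")   # same order of magnitude as 0.3 Wh
```

The point is not the exact output, but that plausible hardware assumptions land in the same ballpark as both Epoch's and OpenAI's numbers.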
A recent study by Jegham et al. (2025) estimated that GPT-4.1 nano uses 0.000454 kWh per query, while o3 uses about 0.030 kWh for long prompts (approximately 7,000 words of input).
The agreement between these independent estimates and OpenAI's number suggests that OpenAI's figure falls within a reasonable range, at least when focusing on the stage where the model generates responses (known as "inference").
2. OpenAI's number looks plausible at the hardware level
It is reported that ChatGPT serves around 1 billion queries a day. Let's look at the math behind how ChatGPT could handle that volume. If that is true, and one query consumes 0.34 Wh, then daily power consumption would be about 340 megawatt-hours. According to one sector expert, this would mean OpenAI could support ChatGPT with about 3,200 servers (assuming Nvidia DGX A100). If 3,200 servers are to handle 1 billion queries, each server would need to process around 4.5 queries per second. If we assume that one instance of ChatGPT's LLM runs on each server, and that responses average 500 tokens (approximately 375 words), each server would need to generate around 2,250 tokens per second. Is that feasible?
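The back-of-envelope math above can be written out as follows, using the numbers cited in the post:

```python
# Daily energy implied by the disclosed per-query figure.
DAILY_QUERIES = 1_000_000_000
WH_PER_QUERY = 0.34
daily_mwh = DAILY_QUERIES * WH_PER_QUERY / 1e6   # megawatt-hours per day

# Per-server throughput implied by the cited fleet size.
QPS_PER_SERVER = 4.5        # queries per second per server, as cited
TOKENS_PER_RESPONSE = 500   # ≈ 375 words
tokens_per_server_per_s = QPS_PER_SERVER * TOKENS_PER_RESPONSE
print(f"{daily_mwh:.0f} MWh/day, {tokens_per_server_per_s:.0f} tokens/s per server")
```

These are the two numbers the feasibility question hinges on.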
Stojkovic et al. (2024) were able to achieve a throughput of 6,000 tokens per second from Llama-2-70B on an Nvidia server with 8 H100 GPUs.
On the other hand, Jegham et al. (2025) found that three different OpenAI models produced 75 to 200 tokens per second. However, it is not clear how they arrived at these figures.
It therefore seems we cannot dismiss the idea that 3,200 servers could handle 1 billion daily queries.
Why some experts are skeptical
Despite the supportive evidence, many remain cautious about or critical of the 0.34 Wh figure, raising several important concerns. Let's look at those.
1. OpenAI's number may leave out large parts of the system
I suspect the number only includes the power used by the GPU servers themselves, not the surrounding infrastructure: data storage, cooling systems, firewalls, or backup systems. This is a common limitation in tech companies' reporting.
Meta, for example, has also reported only GPU power figures in the past. But in real data centers, GPU power is only part of the full picture.
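To illustrate what that omission could mean, here is a sketch applying power usage effectiveness (PUE) multipliers, the standard ratio of total facility energy to IT equipment energy, to the GPU-only figure. The PUE values are generic industry figures of my choosing, not OpenAI's:

```python
# Scale the GPU-only figure by assumed facility-level PUE values.
GPU_WH_PER_QUERY = 0.34
PUE_SCENARIOS = (1.1, 1.3, 1.6)   # efficient hyperscaler -> older facility

site_wh = [round(GPU_WH_PER_QUERY * pue, 2) for pue in PUE_SCENARIOS]
for pue, wh in zip(PUE_SCENARIOS, site_wh):
    print(f"PUE {pue}: {wh} Wh per query at the facility level")
```

Even under these modest multipliers, the facility-level figure would be noticeably higher than 0.34 Wh.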
2. The server estimate appears low compared with industry reports
Some critics argue that 3,200 GPU servers seems too low to support all of ChatGPT's users, especially considering the heaviest use cases, such as code generation or image analysis.
Some reports suggest that OpenAI uses tens of thousands, or even hundreds of thousands, of GPUs to serve inference. If that is true, total energy use per query could be far higher than 0.34 Wh.
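Assuming per-server power draw and query volume stay fixed, per-query energy scales linearly with fleet size, so the implication of a larger fleet can be sketched like this (the larger fleet sizes are hypothetical):

```python
# Per-query energy if the serving fleet were larger than the 3,200-server
# estimate, holding query volume and per-server power constant (my assumption).
BASE_SERVERS = 3_200
BASE_WH_PER_QUERY = 0.34

for servers in (3_200, 32_000, 320_000):
    wh = round(BASE_WH_PER_QUERY * servers / BASE_SERVERS, 2)
    print(f"{servers:>7} servers -> {wh} Wh per query")
```

A fleet ten times larger would imply roughly ten times the per-query energy, which is why the fleet-size question matters so much.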
3. The lack of detail raises questions
Critics, e.g. David Mytton, have pointed out that OpenAI's statement lacks basic context. For example:
- What is an "average" query? A single question, or a full conversation?
- Does the figure apply to just one model (e.g. GPT-3.5 or GPT-4o), or is it an average across models?
- Does it include newer, more complex tasks such as multimodal inputs (e.g., analyzing PDFs or images)?
- Does the water figure cover direct use (water used for cooling the data centers), or also indirect use (from electricity sources such as hydro power)?
- What about carbon emissions? Those depend heavily on the region and the energy mix.
Without answers to these questions, it is difficult to know how much to rely on the figure, or how to compare it with other AI systems.
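On the carbon point, the emissions implied by 0.34 Wh depend entirely on the grid the data center runs on. A sketch with illustrative grid carbon intensities (the gCO2e/kWh values are rough assumed figures, not measurements):

```python
# Per-query emissions under assumed grid carbon intensities (gCO2e/kWh).
KWH_PER_QUERY = 0.00034

GRID_INTENSITY = {
    "low-carbon grid": 50,    # e.g. mostly hydro/nuclear (assumed)
    "world average": 480,     # rough global average (assumed)
    "coal-heavy grid": 800,   # fossil-dominated mix (assumed)
}

for region, g_per_kwh in GRID_INTENSITY.items():
    mg = round(KWH_PER_QUERY * g_per_kwh * 1000, 1)  # milligrams CO2e
    print(f"{region}: ~{mg} mg CO2e per query")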
Reflections
Did Big Tech finally hear our prayers?
OpenAI's disclosure comes shortly after a data release regarding the environmental footprint of GPUs, and Google's blog post about the life cycle of their TPU hardware. This could suggest that companies are finally responding to the many calls for greater transparency. Are we witnessing the dawn of a new era? Or is Sam Altman playing tricks on us, since it is in his financial interest to downplay his company's climate impact? I will leave that question as an exercise for the reader.
Inference vs. training
Historically, the reported numbers we have seen on AI's energy use have mostly concerned the power needed to train AI models. And while the training phase can be hugely energy-intensive, over time, serving billions of queries can actually consume more power than training the model in the first place. My own estimates suggest that training GPT-4 may have used approximately 50-60 GWh of electricity. At 0.34 Wh per query and 1 billion daily queries, the energy used to answer user queries would exceed that of the training phase after 150-200 days. This underscores that inference energy is well worth measuring.
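The break-even arithmetic above works out as follows:

```python
# Days until cumulative inference energy exceeds estimated training energy.
TRAINING_GWH_LOW, TRAINING_GWH_HIGH = 50, 60     # estimated GPT-4 training energy
DAILY_MWH = 1_000_000_000 * 0.34 / 1e6           # 340 MWh/day at 1B queries

low_days = TRAINING_GWH_LOW * 1000 / DAILY_MWH    # GWh -> MWh, then divide
high_days = TRAINING_GWH_HIGH * 1000 / DAILY_MWH
print(f"break-even after ~{low_days:.0f} to ~{high_days:.0f} days")
```

So in well under a year of operation at this scale, inference would overtake training.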
Conclusion: a first step towards transparency, but far from the full picture
Just as we thought the debate about OpenAI's energy use had gone stale, a famously secretive company revived it by disclosing this figure. Many welcome the fact that OpenAI is now more forthcoming about the energy and water use of their products, hoping this is a first step towards greater transparency about Big Tech's resource consumption and climate impact. On the other hand, many are questioning OpenAI's number, and for good reason: it was disclosed as a parenthesis in a blog post on an entirely different topic, and none of the contextual information discussed above was provided.
Whether or not we are witnessing the dawn of a more transparent era, we need more details from OpenAI before we can verify their 0.34 Wh figure. Until then, it should be taken not with a grain of salt, but with a few.
That's all! I hope you enjoyed this story. Let me know what you think!
Follow me for more on AI and sustainability, and feel free to connect with me on LinkedIn.