Sam Altman: ChatGPT queries consume 0.34 watt-hours of electricity and 0.000085 gallons of water

Small amounts on an individual basis, but something that adds up at scale


OpenAI CEO Sam Altman has provided a glimpse into the individual power and water usage of ChatGPT, as the company builds out gigawatts of compute to support its generative AI.


The figures, which have not been peer reviewed, are the company's first official data points on the environmental impact of the wildly successful chatbot.


"The average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes," Altman said in a blog post. "It also uses about 0.000085 gallons of water; roughly one-fifteenth of a teaspoon."


There are, of course, caveats. What counts as an 'average query' is unclear, and it is not known whether the more intensive deep research requests are excluded. The figure also omits the impact of the large-scale training required to build the model in the first place, and it may not represent what the rapidly evolving model will consume in just a few months.


OpenAI also serves ChatGPT from multiple data centers. Currently, those are Microsoft Azure facilities, a fleet that varies in age, cooling type, and location (and, therefore, in ambient temperature and other considerations).


With Stargate and deals with Oracle and Google, that variety of facilities will expand even further. Similarly, within the data centers, the exact hardware will change as newer GPUs come out while older ones remain in use. All of these factors will affect the statistics.


Currently, hundreds of millions of users interact with ChatGPT every day, Altman said. How many queries they make on average is unclear.
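As a rough illustration of how those small per-query numbers compound, the sketch below scales them up under two loudly hypothetical inputs: 300 million daily users (a stand-in for "hundreds of millions") and 10 queries per user per day, a figure OpenAI has not disclosed:

```python
# Illustrative scale-up of the per-query figures. DAILY_USERS and
# QUERIES_PER_USER are assumptions for illustration only; Altman gave
# "hundreds of millions" of daily users and no average query rate.

QUERY_WH = 0.34           # watt-hours per query (Altman's figure)
QUERY_GALLONS = 0.000085  # gallons of water per query (Altman's figure)

DAILY_USERS = 300e6       # assumption
QUERIES_PER_USER = 10     # assumption

daily_queries = DAILY_USERS * QUERIES_PER_USER
daily_gwh = daily_queries * QUERY_WH / 1e9  # Wh -> GWh
average_mw = daily_gwh * 1_000 / 24         # GWh per day -> average megawatts
daily_gallons = daily_queries * QUERY_GALLONS

print(f"Queries per day: {daily_queries:,.0f}")
print(f"Energy per day:  {daily_gwh:.2f} GWh (~{average_mw:.0f} MW continuous)")
print(f"Water per day:   {daily_gallons:,.0f} gallons")
```

Under those assumed inputs, the per-query figure implies roughly a gigawatt-hour of electricity and a quarter-million gallons of water per day, with the totals scaling linearly with whatever query volume is actually the case.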


The broader rush to deploy AI training and inference hardware for OpenAI and its rivals has strained the grid and led companies to embrace gas and other fossil fuels.


In his post, Altman repeated his refrain that, over a longer period, the cost of intelligence should eventually converge toward the cost of electricity as building data centers and compute becomes easier.


For this to happen, he envisions robots making more robots that can themselves deploy data centers.


"If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain - digging and refining minerals, driving trucks, running factories, etc. - to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different," he said.
