OpenAI says its compute increased 15x since 2024, company used 200k GPUs for GPT-5

As the company releases its latest generative AI model


OpenAI has shared some details about its growing compute infrastructure alongside the launch of its latest model, GPT-5.


Compute and infrastructure staffer Anuj Saharan said that the company has increased its compute 15-fold since 2024.


On LinkedIn, he added: "[In the] last 60 days: built out 60+ clusters, a backbone moving more traffic than entire continents, and 200k+ GPUs to launch GPT-5 to 700m people - all while designing the next 4.5GW toward superintelligence."


Saharan then pointed to roles at OpenAI's Stargate data center venture: "We're a small team building planetary scale AI infrastructure at unprecedented pace; hiring across data, energy, data centers, capacity planning, biz dev, finance and more. Join us for the next 100x scale-up."


The company is hiring aggressively for the Stargate team, while primarily relying on Oracle for compute.


Last month, the two companies confirmed a 4.5GW deal for Stargate data center capacity in the US, expected to cost some $30 billion a year.


OpenAI is still planning to self-build its own data centers, the company's director of physical infrastructure told DCD in the latest issue of the magazine.


It also plans to develop a Stargate data center campus in the United Arab Emirates with Oracle, Nvidia, Cisco, SoftBank, and G42 involved. Over in Europe, it is working with Nscale on a data center in Norway.


The company is also searching for sites for further projects around the world.
