Alongside heavy Nvidia usage and in-house chip development
OpenAI will deploy AMD's upcoming GPUs as part of its growing compute portfolio.
The generative AI company's CEO, Sam Altman, joined AMD boss Dr. Lisa Su during the chip company's annual AI conference to announce the partnership.
AMD this week announced that its Instinct MI350 series GPUs, consisting of both Instinct MI350X and MI355X offerings, are in production.
The chip designer also previewed Helios, a rack-scale system based on the company’s forthcoming MI400 series of GPUs, the successor to the MI350 series, set to be released in 2026.
“When you first started telling me about the specs, I was like, there’s no way, that just sounds totally crazy,” Altman said in prepared remarks to Su. “It’s gonna be an amazing thing.”
OpenAI said that it has been giving AMD feedback on its MI400 roadmap. AMD plans to significantly undercut Nvidia on prices as it hopes to gain market share from the dominant rival.
The scale of the partnership is unclear, with companies sometimes deploying AMD GPUs in an effort to get more leverage in negotiations with Nvidia. OpenAI has used a number of AMD GPUs for inference on Microsoft Azure since at least last year.
Also this week, Oracle said that it would deploy a cluster of more than 130,000 AMD MI355X GPUs. OpenAI is an Oracle cloud customer, but the end user of the cluster was not disclosed.
OpenAI is one of the largest users of Nvidia GPUs, primarily bought by its compute providers Microsoft, Oracle, and CoreWeave.
Microsoft is developing its own AI chip, Maia, and OpenAI previously said that it was giving feedback on that project.
At the same time, OpenAI is building its own AI chips in partnership with Broadcom, and has hired one of the leads of Google's TPUs to head the effort.