Nvidia and Infineon Partner to Overhaul AI Data Center Power with High-Voltage Solution

October 16, 2025


In a significant move to address the escalating energy demands of artificial intelligence, Nvidia and Infineon’s Power & Sensor Systems division are partnering to overhaul the outdated power architecture of AI data centers. The collaboration aims to replace the inefficient "spaghetti cluster" of power cables with a centralized, high-voltage DC power setup, offering a single, robust high-voltage cable as a modern alternative.


The urgency for this upgrade stems from the explosive growth in power consumption. With GPUs now drawing more than 1 kW per chip, the power required by a single rack has skyrocketed. According to Infineon, racks have jumped from an average of 120 kilowatts to 500 kilowatts in just a few years and are expected to demand more than one megawatt before 2030. This growing power burden has led to an increasing rate of power failures, pushing existing infrastructure to its limits. The current fix of adding numerous power supplies to a single rack only compounds the issue: it consumes valuable space, generates excessive heat, and multiplies the potential points of failure.
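To put those figures in perspective, the short Python sketch below works through simple Ohm's-law arithmetic (I = P / V) using the rack power levels cited above and the legacy 54-volt in-rack distribution described later in the article; it is an illustration of scale, not vendor data.

```python
# Back-of-the-envelope arithmetic only: bus current I = P / V.
# Rack power levels are those cited in the article; 54 V is the legacy
# in-rack distribution voltage it describes.

LEGACY_RACK_VOLTAGE_V = 54.0

for rack_kw in (120, 500, 1_000):  # recent average, current high end, ~2030 projection
    current_a = rack_kw * 1_000 / LEGACY_RACK_VOLTAGE_V
    print(f"{rack_kw:>5} kW rack at {LEGACY_RACK_VOLTAGE_V:.0f} V "
          f"-> roughly {current_a:,.0f} A of bus current")
```

Currents in the thousands of amps are what drive the proliferation of power supplies, thick busbars, and parallel cable runs described above.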


The proposed solution, announced by Nvidia at Computex 2025, is an 800-volt direct current (800 VDC) power architecture. This new backbone is designed as a much-needed replacement for the overwhelmed 54-volt systems currently in use, which are increasingly prone to failure under the strain of advanced AI processors. The approach involves converting power directly at the GPU on the server board. Infineon states this should squeeze more reliability and efficiency out of the system while better managing the immense heat generated.


Industry experts see the logic in this power overhaul. "This makes sense with the power needs of AI and how it is growing," said Alvin Nguyen, a senior analyst with Forrester Research. "This helps mitigate power losses seen from lower voltage and AC systems, reduces the need for materials like copper for wiring and bus bars, and offers better reliability and serviceability." Infineon confirms that a shift to a centralized 800 VDC architecture allows for reduced power losses, higher efficiency, and improved reliability, though it does require new power conversion solutions and enhanced safety mechanisms to prevent potential hazards and costly server downtimes.
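The loss argument can be made concrete with a brief, illustrative Python sketch: for a fixed rack power and a fixed conductor resistance, resistive loss scales with the square of the current, so raising the distribution voltage from 54 V to 800 V cuts the current roughly 15-fold and the conduction loss by more than two orders of magnitude. The resistance value below is an arbitrary assumption for comparison, not a figure from Nvidia or Infineon.

```python
# Illustrative comparison, not vendor data: conduction loss P_loss = I^2 * R,
# with current I = P / V for a fixed rack power P.

RACK_POWER_W = 1_000_000          # ~1 MW rack, the pre-2030 projection cited above
CONDUCTOR_RESISTANCE_OHM = 1e-4   # hypothetical fixed feed resistance, for comparison only

for voltage_v in (54.0, 800.0):
    current_a = RACK_POWER_W / voltage_v
    loss_w = current_a ** 2 * CONDUCTOR_RESISTANCE_OHM
    print(f"{voltage_v:>5.0f} V feed: {current_a:>9,.0f} A, "
          f"conduction loss about {loss_w:,.0f} W")

# Same power, same conductor: the 800 V feed draws ~15x less current than 54 V
# and dissipates ~220x less in the conductor, which is also why less copper
# cross-section is needed for a given loss budget.
```

The same square-law relationship is what allows a single high-voltage cable to replace the bundles of low-voltage conductors feeding today's racks.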


Adam White, division president of Power & Sensor Systems at Infineon Technologies, emphasized the foundational role of power in AI, stating, "There is no AI without power. That’s why we are working with Nvidia on intelligent power systems to meet the power demands of future AI data centers while providing a serviceable architecture that reduces system downtimes to a minimum."


The push for this new standard is gaining rapid industry momentum. Nvidia is making a full-court press, with more than 50 MGX partners gearing up for the transition, including ecosystem support for Nvidia Kyber, a rack architecture that connects 576 Rubin Ultra GPUs and is built to support rising inference demands. Furthermore, at the recent OCP Global Summit in Germany, over 20 industry partners showcased new silicon, components, and power systems designed for the gigawatt-era data centers that will support the 800-volt direct current future.


SOURCE: Network World
