Switch Integrates Nvidia Omniverse DSX Blueprint into Its EVO AI Data Center Platform
March 16, 2026
In a significant move to streamline the deployment and management of next-generation artificial intelligence infrastructure, data center operator Switch has announced the integration of the Nvidia Omniverse DSX Blueprint into its EVO AI Factory architecture and its proprietary LDC EVO operating system. This collaboration marks a critical step in addressing the immense complexity and scale required by modern AI workloads, which demand unprecedented power densities and sophisticated operational orchestration.
The integration centers on Switch's EVO platform, a data center design unveiled last year that supports power densities of up to 2 megawatts per rack. At the core of the platform is the LDC EVO operating system, which functions as an advanced data center infrastructure management (DCIM) solution. The company states that LDC EVO automates every system within a facility in near real time, powered by a continuously updated 3D digital twin.
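The core idea of a DCIM-style digital twin is a live software model that is continuously synchronized with facility telemetry. The sketch below is a toy illustration of that pattern only, not Switch's actual implementation; the `RackState` and `FacilityTwin` names and the telemetry schema are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class RackState:
    """Digital-twin record for one rack (hypothetical schema)."""
    rack_id: str
    power_kw: float = 0.0
    inlet_temp_c: float = 0.0

@dataclass
class FacilityTwin:
    """Toy digital twin: a live model kept in sync with telemetry."""
    racks: dict = field(default_factory=dict)

    def ingest(self, reading: dict) -> None:
        # Each telemetry reading updates the matching rack's state,
        # keeping the software model aligned with the physical facility.
        rack = self.racks.setdefault(reading["rack_id"],
                                     RackState(reading["rack_id"]))
        rack.power_kw = reading["power_kw"]
        rack.inlet_temp_c = reading["inlet_temp_c"]

    def total_power_mw(self) -> float:
        return sum(r.power_kw for r in self.racks.values()) / 1000.0

twin = FacilityTwin()
twin.ingest({"rack_id": "A01", "power_kw": 1800.0, "inlet_temp_c": 24.5})
twin.ingest({"rack_id": "A02", "power_kw": 1950.0, "inlet_temp_c": 25.1})
print(f"{twin.total_power_mw():.2f} MW")  # 3.75 MW across two ~2 MW-class racks
```

In a production system the same loop would run against streaming sensor data and feed automated control decisions rather than a simple aggregate.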
“LDC EVO is the operating system for Switch’s EVO AI Factory, orchestrating the modular and configurable campus architecture that enables hybrid cooling and supports extreme AI densities,” explained Zia Syed, Chief Technology Officer at Switch. “It’s built to operate every generation of Nvidia reference design, including the Rubin DSX architecture. Leveraging Nvidia Omniverse libraries and OpenUSD for digital twins, we’ve layered in automation workflows and operational intelligence to unify deployments.”
The Nvidia Omniverse DSX Blueprint, announced last year and detailed in September as a solution that scales to gigawatt-class AI data centers, provides a framework for creating comprehensive digital twins. It aggregates detailed 3D and simulation data representing every aspect of a data center into a single, unified model. This lets operators design, simulate, and optimize environments for high-density hardware such as Nvidia's DGX systems before physical deployment, with the aim of significantly accelerating rollout timelines and ensuring compatibility.
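The value of merging per-subsystem data into one model is that design conflicts surface in software before hardware ships. The following is a deliberately simplified sketch of that idea, not the blueprint's actual data model; the rack IDs, capacity figures, and field names are all hypothetical.

```python
# Toy "unified model": each subsystem contributes its own view of the
# facility, and the merged model is validated before physical build-out.
power_model = {"A01": {"feed_kw": 2000}, "A02": {"feed_kw": 2000}}
cooling_model = {"A01": {"cooling_kw": 2100}, "A02": {"cooling_kw": 1900}}
it_plan = {"A01": {"load_kw": 1800}, "A02": {"load_kw": 1950}}

unified = {
    rack: {**power_model[rack], **cooling_model[rack], **it_plan[rack]}
    for rack in power_model
}

# Pre-deployment check: planned load must fit both power and cooling.
issues = [
    rack for rack, m in unified.items()
    if m["load_kw"] > min(m["feed_kw"], m["cooling_kw"])
]
print(issues)  # ['A02'] -- cooling shortfall caught before hardware ships
```

A real DSX-style model would carry full 3D geometry and physics simulation via OpenUSD rather than flat capacity numbers, but the validate-before-deploy workflow is the same in spirit.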
Nvidia emphasized the necessity of such advanced operational layers for the future of AI infrastructure. “Gigawatt-scale AI factories require a shift toward autonomous, telemetry-driven infrastructure capable of orchestrating extreme power and cooling densities in real time,” said Vladimir Troy, Vice President of AI Infrastructure at Nvidia. “The integration of the Nvidia Omniverse DSX blueprint into the Switch LDC EVO operating system provides the high-fidelity simulation and operational intelligence necessary to optimize the deployment of next-generation Nvidia AI infrastructure.”
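"Orchestrating extreme power and cooling densities in real time" implies closed-loop control policies that map telemetry to actuation. As a minimal, hypothetical illustration only (the linear policy, function name, and thresholds below are invented, not Nvidia's or Switch's method):

```python
def cooling_setpoint(power_kw: float, max_kw: float = 2000.0) -> float:
    """Map a rack's power draw to a coolant-flow fraction (toy policy).

    Assumption: flow scales linearly with load, floored at 20% to keep
    pumps primed; real orchestration would use far richer control logic.
    """
    utilization = min(power_kw / max_kw, 1.0)
    return max(0.2, utilization)

# A telemetry-driven loop would apply a policy like this per rack on
# every update cycle:
for power in (400.0, 1500.0, 2200.0):
    print(f"{power:>6.0f} kW -> flow {cooling_setpoint(power):.2f}")
```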
The partnership signifies a deepening convergence between physical data center design and digital management platforms. By embedding Nvidia's simulation blueprint directly into its operational system, Switch aims to provide customers with a unified toolset for managing the entire lifecycle of AI-optimized data centers—from initial design and capacity planning to real-time, autonomous operations at massive scale.
Source: datacenterdynamics