Supermicro Data Center Building Block Solutions Simplify AI Factory Deployment

Solution provides critical infrastructure components for liquid-cooled AI factories.
Supermicro unveiled its Data Center Building Block Solutions (DCBBS) this week. According to the company, DCBBS can help companies overcome the complexities of outfitting liquid-cooled AI factories with critical infrastructure components, including servers, storage, networking, racks, liquid cooling, software, services, and support. DCBBS provides a standardized, flexible solution architecture for AI training and inference workloads, enabling easier data center planning, buildout, and operation, as well as reduced costs.


“Supermicro's DCBBS enables clients to easily construct data center infrastructure with the fastest time-to-market and time-to-online advantage, deploying as quickly as three months,” said Charles Liang, president and CEO of Supermicro. “With our total solution coverage, including designing data center layouts and network topologies, power and battery backup units, DCBBS simplifies and accelerates AI data center buildouts, leading to reduced costs and improved quality.”


DCBBS offers packages of pre-validated, data center-level scalable units, including a 256-node AI Factory DCBBS scalable unit, designed to alleviate the burden of prolonged data center design by providing a streamlined package of floor plans, rack elevations, bills of materials, and more. Supermicro provides comprehensive first-party services, from consultation to on-site deployment and continued on-site support. DCBBS is customizable at the system, rack cluster, and data center levels.


Combined with Supermicro's DLC-2 technology, DCBBS can reduce power consumption by up to 40%, data center footprint by 60%, and water consumption by 40%, leading to a 20% lower total cost of ownership (TCO), the company says.


Solutions from Supermicro include up to 256 liquid-cooled 4U Supermicro NVIDIA HGX system nodes, each equipped with eight NVIDIA Blackwell GPUs (2,048 GPUs in total), interconnected with up to 800Gb/s NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-X Ethernet networking. The compute fabric is supported by elastically scalable tiered storage with high-performance PCIe Gen5 NVMe, TCO-optimized data lake nodes, and resilient management nodes for continuous operation.
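As a rough scale check on those figures, the short sketch below multiplies out the node and GPU counts and, under the assumption of one 800Gb/s link per GPU (a detail the announcement does not specify), estimates the aggregate fabric injection bandwidth of a full 256-node scalable unit.

```python
# Back-of-the-envelope scale check for the 256-node DCBBS scalable unit.
# The per-GPU link count is an assumption for illustration; the announcement
# states "up to 800Gb/s" networking but not the NIC topology.

NODES = 256            # liquid-cooled 4U NVIDIA HGX systems
GPUS_PER_NODE = 8      # NVIDIA Blackwell GPUs per system
LINK_GBPS = 800        # Quantum-X800 InfiniBand / Spectrum-X Ethernet
LINKS_PER_GPU = 1      # assumed: one 800Gb/s port per GPU

total_gpus = NODES * GPUS_PER_NODE
aggregate_tbps = total_gpus * LINKS_PER_GPU * LINK_GBPS / 1000

print(f"Total GPUs: {total_gpus}")                                  # 2048
print(f"Aggregate injection bandwidth: {aggregate_tbps:.0f} Tb/s")  # ~1638 Tb/s
```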


The solutions follow a modular building-block approach composed of three hierarchical levels: system, rack, and data center. This gives customers multiple design options in determining a system-level bill of materials, down to selecting individual components, including CPUs, GPUs, DIMMs, drives, and NICs. System-level customization allows DCBBS to meet specialized hardware requirements for particular data center workloads and applications and permits granular fine-tuning of data center resources.
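To make the building-block idea concrete, here is a minimal, purely illustrative sketch of how a system-level bill of materials might be modeled as data. The field names, component descriptions, and quantities are hypothetical and do not reflect Supermicro's actual configuration tooling.

```python
from dataclasses import dataclass, field

# Hypothetical model of a system-level bill of materials (BOM).
# Component names and counts below are illustrative only.

@dataclass
class SystemBOM:
    cpu: str
    cpu_count: int
    gpu: str
    gpu_count: int
    dimms: list[str] = field(default_factory=list)
    drives: list[str] = field(default_factory=list)
    nics: list[str] = field(default_factory=list)

node = SystemBOM(
    cpu="x86 server CPU (customer-selected)",
    cpu_count=2,
    gpu="NVIDIA Blackwell (HGX 8-GPU board)",
    gpu_count=8,
    dimms=["64GB DDR5"] * 32,
    drives=["PCIe Gen5 NVMe 7.68TB"] * 8,
    nics=["800Gb/s InfiniBand/Ethernet adapter"] * 8,
)

print(f"{node.gpu_count}x {node.gpu}, {len(node.drives)} NVMe drives per node")
```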


Supermicro aids in designing rack enclosure elevation layouts optimized for thermals and cabling, and gives customers the ability to select the type of rack enclosure, including 42U, 48U, and 52U configurations.
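As an illustration of the elevation arithmetic involved, the sketch below estimates how many 4U HGX nodes fit in each rack height once some rack units are set aside for switches, management, and cooling manifolds; the 10U allowance is an assumption for the example, not a Supermicro figure.

```python
# Illustrative rack-elevation arithmetic. The 10U reserved for networking,
# management, and liquid-cooling manifolds is an assumed allowance.

NODE_HEIGHT_U = 4
RESERVED_U = 10

for rack_u in (42, 48, 52):
    usable_u = rack_u - RESERVED_U
    nodes = usable_u // NODE_HEIGHT_U
    print(f"{rack_u}U rack: ~{nodes} x 4U nodes ({usable_u}U usable)")
```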


After the initial consultation with the customer, Supermicro delivers a project proposal tailored to a given data center power budget, performance target, or other requirements.


Supermicro's SuperCloud Composer provides a suite of infrastructure management capabilities, with rich analytics that manage compute, storage, and network building blocks at cloud scale.


In addition to services, Supermicro has broad expertise in data center application integration, including AI training, AI inferencing, cluster management, and workload orchestration. This includes supporting customers deploying the NVIDIA AI Enterprise software platform. Supermicro provides full services for software provisioning and validation based on the customer's software stack.


DataVolt Partnership for Green AI Campuses


Supermicro also announced a strategic partnership with DataVolt to build hyperscale AI campuses, initially in Saudi Arabia.


“Supermicro is thrilled to work together in this important effort to deliver significantly enhanced computing power for the next generation of AI infrastructure,” Liang said. “We are excited to collaborate with DataVolt to bring our advanced AI systems featuring the latest direct liquid cooling technology (DLC-2) powered by local renewable, sustainable, and net-zero green technology.”


According to the company, this collaboration will fast-track delivery of Supermicro’s ultra-dense GPU platforms, storage, and plug-and-play rack systems for DataVolt’s hyperscale, gigawatt-class, renewable, net-zero green AI campuses. Supermicro’s liquid cooling solutions reduce power costs by up to 40%, accelerate time-to-deployment and time-to-online, and allow data centers to run more efficiently with lower power usage effectiveness (PUE), according to the company.
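For readers unfamiliar with the metric, PUE is simply total facility power divided by IT equipment power, so a lower PUE means less overhead spent on cooling and power distribution. The sketch below works through that relationship with assumed example wattages rather than figures from either company.

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The example wattages are assumptions for illustration only.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

it_load_kw = 1000.0  # assumed IT load

air_cooled_pue = pue(total_facility_kw=1500.0, it_load_kw=it_load_kw)     # 1.50
liquid_cooled_pue = pue(total_facility_kw=1150.0, it_load_kw=it_load_kw)  # 1.15

overhead_saved_kw = (air_cooled_pue - liquid_cooled_pue) * it_load_kw
print(f"Air-cooled PUE:    {air_cooled_pue:.2f}")
print(f"Liquid-cooled PUE: {liquid_cooled_pue:.2f}")
print(f"Facility overhead saved: {overhead_saved_kw:.0f} kW")
```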

The collaboration is subject to negotiation and completion of one or more definitive agreements between the parties. The estimated minimum market value of the products contemplated in the transaction is $20 billion.
