Inspur Releases HBM2 AI Accelerator Card F37X with Xilinx FPGA

On November 14 (CST), Inspur and Xilinx announced the launch of Inspur’s F37X, an FPGA AI accelerator card featuring integrated on-chip HBM2. Delivering 28.1 INT8 TOPS and 460GB/s of memory bandwidth at less than 75W for typical AI applications, the F37X enables high-performance, high-bandwidth, low-latency and low-power AI acceleration.

The F37X is a cutting-edge FPGA accelerator card designed by Inspur for the extreme performance demanded by AI workloads. Powered by the Xilinx Virtex UltraScale+ architecture, the F37X provides 28.1 INT8 TOPS, and its 8GB of integrated on-chip HBM2 offers 460GB/s of memory bandwidth. For typical AI applications, the card consumes only 75W, delivering up to 375 GOPS/W (28.1 TOPS at 75W). Performance data shows that in real-time image-recognition inference with the GoogLeNet deep learning model at a batch size of 1, the F37X handles 8,600 images per second, 40 times the throughput of a CPU.

The F37X supports the SDAccel development environment with three mainstream programming languages, C/C++, OpenCL and RTL, providing a powerful and extensive set of developer tools. SDAccel gives developers application-level programmability for common AI scenarios including machine learning inference, video and image processing, and database analytics, as well as workloads in the finance and security sectors. With the F37X, users can develop and migrate customized AI algorithms and applications with high flexibility, dramatically improving software development productivity and efficiency.
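As an illustration of this programming model, the sketch below shows a minimal OpenCL C kernel of the kind the SDAccel tool chain compiles into FPGA logic. The kernel name, arguments and work-group attribute are illustrative assumptions for a generic SDAccel-style flow, not code published by Inspur or Xilinx for the F37X.

// Minimal OpenCL C kernel sketch for an SDAccel-style flow.
// The kernel name ("vadd") and its arguments are hypothetical
// examples, not taken from Inspur's or Xilinx's F37X material.
__kernel __attribute__((reqd_work_group_size(1, 1, 1)))
void vadd(__global const int *a,
          __global const int *b,
          __global int *out,
          const int n)
{
    // Element-wise addition; the tool chain synthesizes this loop
    // into pipelined FPGA logic that streams data to and from
    // device memory (for example, the card's on-chip HBM2).
    for (int i = 0; i < n; i++) {
        out[i] = a[i] + b[i];
    }
}

In a typical flow, a kernel like this would be compiled by the SDAccel tool chain into an FPGA binary and then launched from host code through the standard OpenCL runtime API.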

Freddy Engineer, corporate vice president of global data center sales at Xilinx, noted that the Alveo U200 and U250 accelerator cards have been tested and certified on Inspur’s NF5280M5, NF5468M5, GX4 and other AI servers. With innovative designs in board density, interfaces and other aspects, these servers meet the demands of scenarios such as video transcoding, image recognition, speech recognition, natural language processing, genome sequencing analysis, NFV, and big data analytics and query.

Wilson Guo, senior technology director of Inspur, said: “Xilinx is the world’s leading provider of FPGA, programmable SoC and ACAP solutions, and Inspur has long been committed to innovative FPGA hardware and software technologies. The two companies have reached a consensus on driving the adoption of FPGA technology and accelerating AI computing. Inspur will continue to work with Xilinx to focus on customer demands and deepen FPGA technical cooperation and innovation, delivering the ultimate computing acceleration experience to FPGA and AI users worldwide.”

As the world’s leading AI computing provider, Inspur is fully engaged in developing AI infrastructure across four layers: computing platforms, management and performance suites, optimized deep learning frameworks, and application acceleration, delivering end-to-end, agile, cost-efficient and optimized AI solutions for its industry customers. According to IDC’s 2017 China AI Infrastructure Market Survey Report, Inspur ranks first in the AI server market with a 57% share. Committed to offering state-of-the-art computing to global customers through innovative design, Inspur has become a business partner of many of the world’s leading companies.
