MemryX Charts MX4 Path to Data Center with 3D Hybrid-Bonded AI Accelerator

MemryX Unveils Roadmap for Next-Gen AI Accelerator Targeting Data Center Memory Bottlenecks

December 26, 2025

As artificial intelligence models grow more complex, the industry's focus is shifting from raw compute power to overcoming the critical "memory wall"—the limitations in capacity, bandwidth, and energy efficiency that now constrain advanced AI workloads. In response, AI hardware startup MemryX Inc. has announced its strategic roadmap for the MX4, a next-generation accelerator designed to bring its efficient "at-memory" architecture from the edge into the data center.

The company revealed plans to develop the MX4 around a 3D hybrid-bonded memory architecture, a move aimed at directly addressing the memory bottleneck that plagues current accelerator designs. MemryX, which is already in production with its MX3 chip claiming over 20 times better performance per watt than mainstream GPUs for targeted inference tasks, is now extending this foundation to data center-scale challenges. The announcement aligns with significant industry momentum, highlighted by multibillion-dollar deals like NVIDIA's recent $20 billion agreement with Groq, underscoring the strategic value placed on efficient inference solutions.

A key step in the MX4 development is a dedicated test chip program slated for 2026, in partnership with an undisclosed next-generation 3D memory provider. This program aims to validate a targeted ~5-micron-class hybrid-bonded interface, enabling memory to be integrated directly with compute tiles.

The MX4 represents a fundamental architectural shift from synchronous designs, utilizing an asynchronous, data-driven model in which tiles operate independently. This approach, combined with the direct 3D memory interface, is engineered to eliminate centralized memory controllers and the clocking complexities of scaled designs, thereby managing power and thermal challenges more effectively.
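To see why the "memory wall" dominates inference rather than raw compute, a back-of-envelope estimate helps: during autoregressive decoding, roughly every model parameter must be read from memory for each generated token. The sketch below uses illustrative numbers (a hypothetical 70-billion-parameter model, 8-bit weights, 20 tokens/s); none of these figures are MemryX specifications.

```python
# Back-of-envelope: why memory bandwidth, not compute, often limits
# LLM inference. All figures are illustrative assumptions, not
# MemryX specifications.

def inference_bandwidth_gb_s(params_billion, bytes_per_param, tokens_per_s):
    """Rough lower bound on memory traffic for decoding: every parameter
    is read once per generated token (ignores batching and caching)."""
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    return bytes_per_token * tokens_per_s / 1e9  # GB/s

# A hypothetical 70B-parameter model in 8-bit precision at 20 tokens/s
# for a single stream:
print(inference_bandwidth_gb_s(70, 1, 20))  # 1400.0 GB/s
```

At roughly 1.4 TB/s of sustained traffic for one stream, the bottleneck is clearly data movement, which is the case for placing memory directly on top of compute via hybrid bonding.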
The company plans to leverage its mature MX3 software stack, including compiler and runtime, to accelerate adoption of the MX4, even as it supports larger memory footprints. CEO Keith Kressin stated, "By combining our production-proven architecture—including an asynchronous flow model—with 3D hybrid bonding, we are removing the physical barriers to power-efficient trillion-parameter scalability. We aren't just building a faster chip; we are building a more practical roadmap for the future of AI."

The roadmap targets first customer sampling in 2027, with a production release in 2028. The MX4 is designed to scale from single-chip systems to multi-chip arrays supporting memory configurations exceeding 1 terabyte, targeting frontier workloads such as Large Action Models (LAMs), high-resolution multimodal vision, and real-time recommendation engines beyond traditional LLMs.

MemryX, a fabless semiconductor company backed by $44 million in Series B funding, is headquartered in Ann Arbor, Michigan.

Source: PRNewswire
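The >1 TB target lines up with simple weight-storage arithmetic for trillion-parameter models. The sketch below estimates the footprint of the weights alone at common numeric precisions; these precisions are standard industry choices, not MemryX-confirmed configurations.

```python
# Why multi-chip arrays exceeding 1 TB matter for trillion-parameter
# models: a quick weight-footprint estimate. The precisions below are
# common industry choices, not MemryX-confirmed configurations.

def model_footprint_tb(params_trillion, bytes_per_param):
    """Memory needed just to hold the weights
    (excludes activations and KV cache)."""
    return params_trillion * 1e12 * bytes_per_param / 1e12  # terabytes

for precision, nbytes in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"1T params @ {precision}: {model_footprint_tb(1, nbytes)} TB")
# 1T params @ FP16: 2.0 TB
# 1T params @ INT8: 1.0 TB
# 1T params @ INT4: 0.5 TB
```

Even at aggressive 4-bit quantization, a trillion-parameter model occupies half a terabyte before counting activations, so single-chip memory capacities fall short and multi-chip configurations become necessary.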

