Deliver trusted AI with complete flexibility, from rack-scale AI factories to edge and enterprise deployment
TAIPEI, March 17, 2026 /PRNewswire/ — ASUS today unveiled its fully liquid-cooled AI infrastructure at NVIDIA GTC 2026 (booth #421), offering a complete end-to-end solution powered by the NVIDIA Vera Rubin platform. Under the theme "Reliable AI, total flexibility," this customizable framework — spanning rack-scale AI factories, desktop AI supercomputers, edge AI solutions and enterprise AI solutions — enables enterprises and cloud providers to build high-performance, energy-efficient, large-scale AI clusters with unparalleled efficiency and significantly reduced PUE and TCO.
As a supplier of NVIDIA GB300 NVL72 and NVIDIA HGX B300 systems, ASUS presents its flagship offering, the ASUS AI BRIDGE built on the NVIDIA Vera Rubin platform: a rack-scale, liquid-cooled powerhouse designed for massive AI workloads. Through strategic partnerships with leading cooling component and system suppliers, ASUS offers diverse cooling modalities, tailored thermal solutions and redundancy to meet all business requirements. Proven by global customer successes, ASUS provides expert consulting, a broad portfolio of AI and storage solutions, seamless infrastructure deployment, application integration and continuous services, combining scalability and sustainability to drive value and business intelligence.
From infrastructure to implementation: ASUS AI Factory in action
In the foreground is the flagship XA VR721-E3, built on NVIDIA Vera Rubin NVL72: a 100% liquid-cooled rackmount system. It supports a TDP of up to 227 kW (MaxP) or 187 kW (MaxQ), delivers up to 10x higher performance per watt, and is specifically designed for models with billions of parameters, providing massive AI performance for large-scale AI factories. In partnership with Vertiv, a global leader in critical digital infrastructure, as well as Schneider Electric and other leading suppliers, ASUS delivers comprehensive power and cooling infrastructure designed for performance without limitations — from standard deployments to advanced liquid cooling — ensuring redundancy for every specific need.
Meeting the rigorous requirements of data centers, ASUS is also introducing its latest server series based on NVIDIA HGX Rubin NVL8 systems, featuring eight NVIDIA Rubin GPUs connected via sixth-generation NVIDIA NVLink with an integrated bandwidth of 800G per GPU. To facilitate a seamless and cost-effective transition to liquid cooling, ASUS offers two distinct solutions: the XA NR1I-E12L, an innovative hybrid cooling option, and the XA NR1I-E12LR, a 100% liquid-cooled system. The hybrid-cooled XA NR1I-E12L combines direct-to-chip (D2C) liquid cooling for the NVIDIA HGX Rubin NVL8 motherboard with air cooling for dual Intel® Xeon® 6 processors.
The portfolio is further strengthened by high-performance scalable servers: the XA NB3I-E12, built on NVIDIA HGX B300 systems to ensure a solution for every demanding AI workload; the ESC8000A-E13X, based on NVIDIA MGX and integrated with NVIDIA ConnectX-8 SuperNICs for extreme GPU-to-GPU connectivity; and the ESC8000A-E13P, accelerated by NVIDIA RTX PRO 4500 Blackwell Server Edition or NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, delivering breakthrough performance for demanding data processing, AI, video and visual computing workloads in an energy-efficient design.
The tangible impact of the complete ASUS AI Factory concept is already demonstrated through several successful customer deployments. In one, the ASUS ESC8000 series powered a production-line digital twin built on NVIDIA Omniverse libraries and integrated with the customizable NVIDIA multi-camera tracking workflow, enabling remote simulation and significantly reducing deployment risks, with ASUS managing the entire process for seamless deployment and maximum value from day one.
To support these powerful systems and democratize AI development, ASUS has also established a robust data ecosystem by partnering with NVIDIA-certified storage providers, including IBM, DDN, WEKA and VAST Data, to provide scalable and resilient solutions for memory-intensive AI. A full spectrum of storage solutions — block storage (VS320D-RS12), JBOD (VS320D-RS12J), object storage (OJ340A-RS60) and software-defined systems — ensures flexibility from edge to cloud, and from enterprise applications to AI and HPC workloads.
Realizing physical AI: full-stack edge AI supercomputer, from development to deployment
As an expert in the field of full-stack edge AI, ASUS establishes a complete ecosystem for physical AI, providing the critical computing power required from initial development to final deployment. The journey begins at the developer's office with the ASUS ExpertCenter Pro ET900N G3, a desktop supercomputer powered by the NVIDIA Grace Blackwell Ultra platform. Featuring the NVIDIA NVLink-C2C interconnect and 784 GB of coherent unified memory, it handles the heavy lifting of training massive models. At its side, the ultra-small ASUS Ascent GX10, powered by the NVIDIA Grace Blackwell Superchip, delivers agile, petaflop-scale performance, ideal for rapid model iteration and scalable edge configurations.
This development work transitions smoothly to the ASUS PE3000N, a robust inference engine powered by NVIDIA Jetson Thor. Delivering an impressive 2,070 TFLOPS, the PE3000N provides the real-time computing needed for sensor fusion and autonomous navigation. Together, these systems form a unified workflow in which open models such as NVIDIA Cosmos and the NVIDIA Metropolis vision AI libraries can perceive, reason, and act effectively in the physical world.
Secure and scalable agentic AI development
To further enhance these capabilities, the ASUS Ascent GX10 and ASUS ExpertCenter Pro ET900N G3 enable agentic AI development with NVIDIA NemoClaw. This integration establishes an agent-ready platform for developers to build secure, long-running autonomous agents locally. Leveraging isolated sandbox environments, governed access control, and private on-device inference, it ensures secure, scalable agent workflows for the most demanding enterprise AI applications.
Enterprise AI: ASUS AI Hub with real-time business intelligence
To accelerate enterprise AI, ASUS introduces the ASUS AI Hub, a turnkey on-premises AI platform built on ESC8000 series servers and powered by open-source LLMs such as NVIDIA Nemotron and Gemma, enabling businesses to create personalized AI assistants, implement RAG-enhanced document intelligence, and maintain complete data sovereignty for security and compliance.
Proven internally by more than 10,000 employees, with peak loads exceeding 600 queries per hour, OCR accuracy above 80% and efficiency gains above 30%, the platform includes domain-specific modules for various applications — including the new ASUS Agent internal business intelligence platform — that enable senior leaders to instantly access critical information on costs, sales, gross margins, factory operations and other key metrics via simple questions and answers in natural language, transforming complex data into immediate, actionable executive decision-making power.
ASUS and NVIDIA are also working together on NVIDIA NemoClaw, an open-source stack that makes it simpler and more secure to run persistent OpenClaw assistants with a single command. It installs the NVIDIA OpenShell runtime, a secure environment for running standalone agents, along with open models such as NVIDIA Nemotron.
Green computing and sustainability at the heart
Sustainability is a fundamental pillar of the ASUS design philosophy, with green IT innovations integrated into both hardware and software to minimize total cost of ownership and environmental impact. At the hardware level, ASUS servers feature Thermal Radar 2.0, which uses up to 56 sensors to intelligently optimize fan performance, reducing power consumption by up to 36% and saving approximately $29,000 per year in a 1,000-node cluster. This commitment extends to software with the ASUS Control Center (ACC) Data Center Edition, a unified management platform that improves security and includes automated carbon-emissions tracking, providing companies with the tools needed to achieve their critical ESG goals.
ASUS, the AI supercomputing domain expert, provides comprehensive solutions and services, from consultation and deployment to user training and seamless integration via open, AI-enabled APIs. As businesses navigate the AI era, ASUS flexibly provides a cost-effective, secure, powerful and sustainable path to innovation and intelligent management.
AVAILABILITY AND PRICES
ASUS servers are available worldwide. However, availability of certain other ASUS products is subject to local regulatory requirements. For availability and specific product offerings in your region, please contact your local ASUS representative.
SOURCE ASUS