
Jiwei.com News: the 2022 Taipei International Computer Exhibition (COMPUTEX 2022) opened on the 24th. COMPUTEX TAIPEI serves as a stage for global flagship manufacturers, including Nvidia, to unveil new products and technologies. During the exhibition, Nvidia announced new achievements in CPUs, GPUs, and data center servers.

Liquid-Cooled GPUs

Liquid cooling technology was born in the mainframe era and has matured in the AI era. Today it is widely used in high-speed supercomputers around the world in the form of direct-to-chip cooling. Nvidia estimates that switching all CPU servers running AI and HPC workloads worldwide to GPU-accelerated systems could save up to 11 trillion watt-hours of energy per year, enough to power more than 1.5 million homes for a year.
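As a back-of-the-envelope check using only the two figures quoted above, dividing the claimed savings by the number of homes gives the implied per-home annual consumption:

```python
# Sanity check on the article's figures:
# 11 trillion Wh of annual savings spread over 1.5 million homes.
savings_wh = 11e12   # 11 trillion watt-hours per year (claimed savings)
homes = 1.5e6        # homes the savings could power for a year

per_home_kwh = savings_wh / homes / 1_000  # convert Wh to kWh
print(f"{per_home_kwh:,.0f} kWh per home per year")  # prints "7,333 kWh per home per year"
```

Roughly 7,300 kWh per household per year is in the ballpark of typical residential electricity consumption, so the two figures are internally consistent.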


At COMPUTEX TAIPEI, Nvidia released the first data center PCIe GPU to use direct-to-chip liquid cooling.

Equinix, a global service provider that manages more than 240 data centers, is currently verifying the NVIDIA A100 80GB PCIe liquid-cooled GPU for use in its facilities. The GPU is now in trials and is expected to be officially released this summer.

In separate tests, Equinix and Nvidia both found that liquid-cooled data centers could run the same workloads as air-cooled facilities while consuming about 30% less energy. Nvidia estimates that the power usage effectiveness (PUE, an industry metric measuring how much of the energy used in a data center goes directly to computing tasks) of a liquid-cooled data center could reach 1.15, far below the 1.6 typical of air cooling. In the same floor space, a liquid-cooled data center can deliver twice the compute.
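PUE is defined as total facility energy divided by the energy delivered to IT equipment, so a lower PUE means less cooling and power-delivery overhead for the same compute. A minimal sketch, using the article's two PUE figures and a hypothetical IT load, shows how a saving close to the reported 30% falls out:

```python
def total_energy(it_load_kwh: float, pue: float) -> float:
    """Total facility energy implied by PUE = total energy / IT energy."""
    return it_load_kwh * pue

it_load = 1_000_000  # hypothetical IT load in kWh; any value cancels out
air_cooled = total_energy(it_load, 1.60)     # air-cooled PUE cited above
liquid_cooled = total_energy(it_load, 1.15)  # liquid-cooled PUE estimate

savings = (air_cooled - liquid_cooled) / air_cooled
print(f"{savings:.1%}")  # prints "28.1%", close to the ~30% measured in tests
```

Because the IT load cancels out of the ratio, the savings fraction depends only on the two PUE values.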

In Asia, Europe and the United States, energy efficiency standards for data centers are still being drawn up, which has pushed banks and other large data center operators to begin evaluating liquid cooling. The technology's reach is not limited to data centers: automobiles and other systems also need it to cool high-performance electronics in enclosed spaces.

At least a dozen system manufacturers plan to use liquid-cooled GPUs in their products later this year, including ASUS, ASRock Rack, Foxconn Industrial Internet, GIGABYTE, H3C, Inspur, Inventec, Nettrix, Quanta Cloud Technology (QCT), Supermicro, Wiwynn and xFusion.

Nvidia plans to follow the A100 PCIe card next year with a card based on the H100 Tensor Core GPU, which uses the NVIDIA Hopper architecture. In the near future, NVIDIA also plans to apply liquid cooling to its own high-performance data center GPUs and NVIDIA HGX platforms.

Grace CPUs Adopted by Several Leading Manufacturers

Nvidia announced that several leading computer manufacturers will release the first systems based on the NVIDIA Grace CPU Superchip and Grace Hopper Superchip, targeting workloads such as digital twins, AI, HPC, cloud graphics and gaming.

Starting in the first half of 2023, ASUS, Foxconn Industrial Internet, GIGABYTE, Quanta Cloud Technology (QCT), Supermicro and Wiwynn are expected to launch dozens of servers. Grace-based systems will sit alongside x86 and other Arm-based servers, giving customers a wide range of choices for achieving high performance and high efficiency in their data centers.

NVIDIA's vice president of hyperscale and HPC said: "A new type of data center is emerging: AI factories that produce intelligence by processing and refining massive amounts of data. NVIDIA is working with partners to create the systems that will drive this transformation. New systems based on NVIDIA Grace Superchips will bring accelerated computing to new markets and industries around the world."

It is understood that the upcoming servers are based on four new system designs using the Grace CPU Superchip and Grace Hopper Superchip. The two superchips support a range of compute-intensive workloads across a variety of system architectures.

Jetson AGX Orin Servers and Devices

More than 30 leading technology partners worldwide released the first NVIDIA Jetson AGX Orin production systems at COMPUTEX. The new products come from more than a dozen camera, sensor and hardware suppliers in Taiwan and will be used in edge AI, AIoT, robotics and embedded applications.

The manufacturers releasing products this time include Taiwanese companies in Nvidia's partner network, such as AVA, ADLINK, Advantech, Aetina, Xinpuluo, AVerMedia, Axiomtek, Huiyou, Chenyao Technology, Yiyang Technology and Vecow.

The NVIDIA Jetson AGX Orin developer kit has been available worldwide since the GTC conference in March. The kit delivers 275 trillion operations per second (TOPS) of computing performance.

Nvidia says that more than 1 million developers and more than 6,000 companies are currently building commercial products on the NVIDIA Jetson edge AI and robotics computing platform, creating and deploying autonomous machines and edge AI applications. The Jetson partner ecosystem has more than 150 members, spanning AI software, hardware and application design services, cameras, sensors and peripherals, developer tools, and development systems, so it can provide end-to-end support.

The new Jetson AGX Orin production modules bring server-class performance to edge AI. The modules will be available in July, with the Orin NX modules following in September.

(Proofreading/Sharon)

