
2025/06/26 08:17:38 hotcomm

A few days ago, Nvidia released its first-quarter financial report for fiscal year 2023, setting a quarterly revenue record of US$8.29 billion, up 46% from the same period last year and up 8% from the previous quarter. Both data center and gaming revenue set quarterly records.

At the recent COMPUTEX 2022, Nvidia demonstrated its latest innovations and ecosystem progress in data centers, robotics, edge computing, and more. Its "three-chip" layout of CPU, GPU, and DPU is about to take shape.

What will power the data centers of the future?

At its two most recent GTC conferences, Nvidia unveiled the Grace CPU and the Grace Hopper superchip to support compute-intensive workloads across a variety of system architectures. With the launch of these two superchips in the first half of 2023, Nvidia's CPU+GPU+DPU product portfolio will be fully formed, which will undoubtedly further consolidate Nvidia's position in accelerated computing.


At the same time, global competition in data centers will become even more interesting. AMD is accelerating the build-out of its heterogeneous computing platform, while Intel's more powerful CPUs, discrete GPUs, and new process technologies will arrive over the next few years. The competition looks set to grow more complex and intense. How should we view the data center chip market after 2023, and how can Nvidia keep building its competitive advantage?

Jensen Huang said that these rivals are all very good, but Nvidia will focus on a very distinctive approach. People usually think of NVIDIA as a chip company, he noted, but NVIDIA is in many respects a vertically integrated artificial intelligence company: it builds the complete stack of software, chips, systems, system software, and AI algorithms, and offers them to the ecosystem in an open way. Whether a user wants Nvidia's chips, its systems, its system software, or its AI solutions, Nvidia can serve them.

He emphasized that the world of accelerated computing is completely different from that of CPUs. In the CPU world there is an incredible piece of magic called x86: every piece of x86 software runs on every x86 CPU. There is no equivalent of x86 for accelerated computing.

"So for every accelerated computing system, chip, or architecture, including NVIDIA's, you have to build your own market," he said. "The applications running on NVIDIA's accelerated computing platform are hard-won, because we work very hard to accelerate molecular dynamics, quantum chemistry, reinforcement learning, and networks such as RNNs, CNNs, Transformers, and LSTMs. That covers many different algorithms, and we have to make each of them run well on NVIDIA's platform. Our approach is to be a full-stack company, bring value to customers, and keep the platform open, so that users can integrate our chips and technologies however they want."


NVIDIA founder and CEO Jensen Huang, interviewed during COMPUTEX 2022

Nvidia launches its first liquid-cooled GPU to build green data centers

To curb climate change and build high-performance, energy-efficient data centers, operators hope to eliminate the chillers that cool data center air, because they evaporate millions of gallons of water each year. With liquid cooling, a system only needs to recirculate a small amount of liquid in a closed loop, and the cooling can be focused on the main hot spots.

Liquid cooling was born in the mainframe era and is maturing rapidly in the AI era. Today, it is widely used in high-speed supercomputers around the world in the form of direct-to-chip cooling.

Recently, Nvidia released the A100 80GB PCIe GPU with direct-to-chip liquid cooling, its first data center GPU to adopt the technology and a first among mainstream server GPUs, helping to achieve sustainable and efficient computing.

Nvidia GPUs are 20x more energy-efficient than CPUs for AI inference and high-performance computing. Data shows that if all the CPU servers running AI and HPC workloads worldwide were switched to GPU-accelerated systems, up to 11 trillion watt-hours of energy could be saved every year, enough to power more than 1.5 million homes for a year.
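The homes-powered figure is simple arithmetic on the claimed savings. A rough sanity check (a sketch only; the per-home consumption value below is an assumption, not from the article):

```python
# Check the claim: 11 trillion Wh/year of savings vs. 1.5 million homes.
saved_wh_per_year = 11e12      # claimed savings: 11 trillion watt-hours/year
home_kwh_per_year = 7_300      # assumed average household use, kWh/year

homes_powered = saved_wh_per_year / (home_kwh_per_year * 1e3)
print(f"{homes_powered / 1e6:.2f} million homes")  # prints "1.51 million homes"
```

With an assumed ~7,300 kWh per household per year, the claimed savings indeed cover roughly 1.5 million homes, consistent with the article's figure.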

Equinix is a global service provider that manages more than 240 data centers and is committed to being the first in the industry to achieve climate neutrality. Equinix is qualifying the A100 80GB PCIe liquid-cooled GPU for use in its data centers as part of a comprehensive approach to sustainable cooling and heat capture. The GPU is now in trials and is expected to be generally available this summer.

“This is the first liquid-cooled GPU introduced in our lab and we are excited because our customers are eager to leverage AI in a sustainable way,” said Zac Smith, Equinix Head of Edge Infrastructure.

Power Usage Effectiveness (PUE) measures how much of a data center's energy goes directly to computing tasks: the ratio of total facility energy to IT equipment energy, with an ideal value approaching 1.0. Data center operators are working to push PUE toward that ideal. Equinix facilities currently average a PUE of 1.48, while its newest data centers can reach below 1.2.

In separate tests, Equinix and Nvidia both found that a liquid-cooled data center could handle the same workloads as an air-cooled facility while consuming about 30% less energy. Nvidia estimates that liquid-cooled data centers can reach a PUE of 1.15, far below the 1.6 of air cooling.
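The PUE arithmetic behind these figures is simple enough to sketch. The 1.6 and 1.15 values come from the article; the IT load below is purely illustrative:

```python
def pue(total_facility_energy: float, it_equipment_energy: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (ideal ~1.0)."""
    return total_facility_energy / it_equipment_energy

it_load = 1_000_000.0                 # Wh of useful IT work (illustrative)
air_cooled_total = it_load * 1.6      # facility energy at PUE 1.6 (air-cooled)
liquid_cooled_total = it_load * 1.15  # facility energy at PUE 1.15 (liquid-cooled)

saving = 1 - liquid_cooled_total / air_cooled_total
print(f"air PUE={pue(air_cooled_total, it_load):.2f}, "
      f"liquid PUE={pue(liquid_cooled_total, it_load):.2f}, "
      f"facility energy saved: {saving:.1%}")  # facility energy saved: 28.1%
```

Going from PUE 1.6 to 1.15 cuts total facility energy for the same IT load by about 28%, in line with the roughly 30% saving the tests reported.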

In the same floor space, a liquid-cooled data center can deliver double the compute, because the liquid-cooled A100 GPU occupies only one PCIe slot while the air-cooled A100 requires two.
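The density claim follows directly from slot counts. A minimal sketch, assuming a hypothetical 8-slot chassis (the slot total is an assumption; the one-slot vs. two-slot widths are from the article):

```python
slots_per_server = 8        # hypothetical PCIe slots in one chassis
air_cooled_width = 2        # air-cooled A100 occupies two slots
liquid_cooled_width = 1     # liquid-cooled A100 occupies one slot

air_gpus = slots_per_server // air_cooled_width        # 4 GPUs per chassis
liquid_gpus = slots_per_server // liquid_cooled_width  # 8 GPUs per chassis
print(f"density gain: {liquid_gpus / air_gpus:.0f}x")  # prints "density gain: 2x"
```

Halving the per-GPU slot width doubles GPUs per chassis, which is where the "double the compute in the same space" figure comes from.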


NVIDIA uses liquid cooling to save power and increase density

It is understood that Nvidia plans to follow the A100 PCIe card next year with a liquid-cooled card featuring the H100 Tensor Core GPU, based on the Hopper architecture. In the near future, NVIDIA plans to apply liquid cooling to its own high-performance data center GPUs and NVIDIA HGX platforms.

Data centers are an important carrier of digital infrastructure. Against the backdrop of China's "dual carbon" goals, reducing carbon emissions and saving energy are top priorities. This will accelerate the adoption of liquid cooling in data centers and drive demand for liquid-cooled data center solutions.

In addition, the uses of liquid cooling extend beyond data centers: automobiles and other systems also need it to cool high-performance electronics in enclosed spaces. Liquid-cooled GPUs maintain performance while reducing energy consumption. Judging from current market progress, some system makers have already begun deployment plans, and larger-scale adoption is expected to follow quickly.

It is understood that at least a dozen system makers plan to use liquid-cooled GPUs in their products later this year, including ASUS, ASRock Rack, Foxconn Industrial Internet, GIGABYTE, H3C, Inspur, Inventec, Nettrix, Quanta Cloud Technology (QCT), Supermicro, Wiwynn, and xFusion.

Multiple manufacturers release the first system designs based on Grace superchips

A new class of data center demand is emerging: "AI factories" that process and mine massive amounts of data. New systems based on NVIDIA Grace superchips will bring accelerated computing to new global markets and industries. Several computer makers have announced they will release the first systems based on the Grace CPU superchip and the Grace Hopper superchip, targeting workloads such as digital twins, AI, HPC, cloud graphics, and gaming.

Dozens of these servers are expected to launch starting in the first half of 2023, from ASUS, Foxconn Industrial Internet, GIGABYTE, Quanta Cloud Technology, Supermicro, and Wiwynn. Grace-based systems will join x86 and other Arm-based servers to give customers a wide range of choices for achieving high performance and high efficiency in their data centers.

The upcoming servers based on the Grace CPU and Grace Hopper superchips span four system designs:

NVIDIA HGX Grace Hopper system: for AI training, inference, and HPC; equipped with the Grace Hopper superchip and the BlueField-3 DPU.

NVIDIA HGX Grace system: for HPC and supercomputing; a CPU-only design equipped with the Grace CPU superchip and BlueField-3.

NVIDIA OVX system: for digital twin and collaboration workloads; equipped with the Grace CPU superchip, BlueField-3, and NVIDIA GPUs.

NVIDIA CGX system: for cloud graphics and gaming; equipped with the Grace CPU superchip, BlueField-3, and NVIDIA A16 GPUs.

Grace CPU and Grace Hopper superchip server designs come in one-, two-, and four-way single-baseboard configurations, and server makers can customize these systems for the four workloads above according to customer needs.

NVIDIA is also expanding its NVIDIA-Certified Systems program to cover servers with Grace CPUs and Grace Hopper superchips alongside x86 CPUs. The first OEM server certifications are expected shortly after partner systems ship.

AI sets off a new robotics revolution

Robots are becoming a major new application of AI. According to NVIDIA, more than 30 partners worldwide will launch the first production systems based on NVIDIA Jetson AGX Orin, and more than a dozen Taiwanese camera, sensor, and hardware suppliers will launch new products for edge AI, AIoT, robotics, and embedded applications.

Jetson Orin features an NVIDIA Ampere architecture GPU, Arm Cortex-A78AE CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, and faster memory bandwidth, and it supports multimodal sensors that can feed multiple concurrent AI application pipelines.

The new Jetson AGX Orin production module brings server-class performance to edge AI. The module will be available in July, and the Orin NX module in September. The NVIDIA Jetson AGX Orin developer kit has been available worldwide since the GTC conference in March. It delivers 275 trillion operations per second, and with the same pin compatibility and form factor, its processing power is 8x that of the previous-generation NVIDIA AGX Xavier.

Written at the end

Against a challenging global macro environment, Nvidia's latest quarterly results further underline its ambitions in accelerated computing. Notably, even as gaming set a quarterly record, data center finally became Nvidia's largest business. This shows that the utility of deep learning for intelligent automation is driving large-scale adoption of Nvidia's products for AI computing in data centers.

Nvidia is preparing its next large-scale product launch, with new GPUs, CPUs, DPUs, and robotics processors arriving in the second half of the year. As these new chips and systems enter the market, they will further advance AI, graphics, Omniverse, autonomous driving, and robotics, and drive innovation across related industries.
