Arm continues to expand its dedicated chip product line and bets on machine learning.

Machine Heart reported on October 23 that Arm held its annual technical seminar in Beijing and launched three new processor IP designs: the Ethos-N57 and Ethos-N37 NPUs, the Mali-G57 GPU, and the Mali-D37, Arm's smallest-area DPU.

All three IPs adopt the Arm AI platform and the latest machine learning technology, targeting the graphics and display needs of digital TVs as well as mainstream and entry-level mobile devices.

In the past year, Arm has launched several new solutions spanning network endpoints to the cloud, including Arm Project Trillium, Arm Neoverse, two new Automotive Enhanced processors with safety capabilities, and the Pelion IoT platform designed to securely manage IoT devices.

The Ethos series is a family of NPUs designed specifically to improve AI computing performance. The Ethos-N77 serves as a dedicated machine learning processor to guarantee the performance of key AI applications, while the Ethos-N57 and Ethos-N37 are positioned as inference processors for vision, voice, and similar workloads, suited to home and mobile scenarios. In addition, Arm has also released the accompanying Arm NN development software.

The Arm Mali-G57 GPU mainly targets high-efficiency computing and complex machine learning needs in gaming. It improves energy efficiency by 30%, performance by 30%, and machine learning performance by 60%.

The Mali-D37 DPU is defined by Arm as its smallest-area DPU and is the first to adopt the Komeda architecture. It can drive 2K and Full HD displays in an area of less than 1 mm², while reducing system power consumption and memory management requirements by 30%.

In May this year, Arm launched its flagship IP solutions intended to define high-end smartphone performance in 2020 and deliver a new generation of artificial intelligence experiences, including:

1) Cortex-A77 CPU for the mobile market;

2) Mali-G77 GPU for high-performance, high-definition gaming;

3) Mali-D77 DPU for high-definition display;

4) Arm ML processor.

Meanwhile, Arm is also promoting early adoption of its latest advanced mobile processors, including the Arm Cortex-A77 CPU and Mali-G77 GPU, by strengthening its collaboration with Synopsys.

Synopsys' solutions, including the Synopsys Fusion Design Platform, the Verification Continuum platform, and DesignWare interface IP, support optimized design of smartphones, laptops and other mobile devices, as well as 5G, augmented reality (AR), and machine learning (ML) products that use Arm's latest processors.

The Arm Cortex-A77 CPU delivers 20% higher IPC than the Cortex-A76, bringing advanced ML and AR/VR experiences.

The Mali-G77 GPU meets the demands of high-performance gaming with the new Valhall architecture, improving performance by nearly 40% over the previous-generation Mali-G76 GPU used in current devices. The Mali-G77 also strengthens key parts of the microarchitecture, including the execution engines, texture pipes, and load/store caches, and improves energy efficiency and performance density by 30% each.

The Arm ML processor corresponds to Project Trillium, a heterogeneous ML computing platform that includes the Arm ML processor and the open-source Arm NN software framework, currently deployed on more than 250 million Android devices. As machine learning use cases become increasingly demanding, developers are increasingly eager for dedicated neural processing units (NPUs).

Since announcing Project Trillium last year, Arm has strengthened the ML processor: power efficiency has more than doubled, reaching 5 TOPS/W, memory compression has improved threefold, and next-generation performance scales to up to eight cores and up to 32 TOPS.
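Taken at face value, those figures imply roughly 4 TOPS per core and a peak power draw in the region of 6 to 7 W. The short back-of-envelope sketch below only reuses the numbers quoted above; everything derived from them is an approximation that assumes peak efficiency is sustained.

```c
/* Back-of-envelope check on the figures quoted above (illustrative only;
 * the inputs are the quoted numbers, the outputs are rough derivations). */
#include <stdio.h>

int main(void)
{
    const double total_tops = 32.0; /* quoted peak throughput    */
    const int    max_cores  = 8;    /* quoted maximum core count */
    const double tops_per_w = 5.0;  /* quoted power efficiency   */

    printf("per-core throughput: %.1f TOPS\n", total_tops / max_cores);  /* 4.0 */
    printf("implied peak power : %.1f W\n", total_tops / tops_per_w);    /* 6.4 */
    return 0;
}
```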

1 Armv8 is not restricted by the Entity List

Regarding the restrictions placed on Huawei by the US Entity List, Rene Haas, president of Arm's IP Products Group, which is responsible for chip licensing, had already made clear that Huawei and HiSilicon are long-term partners of Arm, and that subsequent chip architectures can continue to be licensed to Huawei's HiSilicon.

At present, the United States has not removed Huawei from the Entity List, but a number of technology companies around the world have announced that they will continue to cooperate with Huawei.

At today's event, Arm China CEO Wu Xiong'ang added that, after legal review and related adjustments, Armv8 and subsequent architectures will continue to be supplied to Chinese partners in compliance with regulations and without restriction.

Arm released the v8 architecture in 2011 and, based on it, has released CPU cores such as the Cortex-A76 and Cortex-A77. Chip design companies such as Huawei then design their final mobile phone chips based on these cores.

According to the latest figures, Arm has more than 200 partners in China, its Chinese partners have shipped more than 16 billion Arm-based chips, and 95% of domestically designed SoCs are based on the Arm architecture. Wu Xiong'ang emphasized that Arm is the only mainstream computing architecture that does not originate from the United States.

2 The era of Total Compute is here!

At this year's International Computer Show (COMPUTEX 2019), Jem Davies, Arm Fellow and Vice President and General Manager of the Machine Learning Group, explained Arm's views and strategy on the development of the ML market, emphasizing that Arm is the only supplier in the market with a broad portfolio of CPU, GPU, and NPU products, as well as strong ecosystem support.

By adopting Total Compute, Arm will be able to provide the best integrated solutions to meet today’s challenges and realize the huge potential of ML applications.

At this month's Arm TechCon 2019, Arm announced a partnership with Unity to ensure 3D applications run smoothly on Arm-based hardware; as part of the Total Compute approach, developers can easily access compute cores beyond the CPU.

Arm believes that Arm Total Compute represents a new approach to IP design, focusing on use-case-driven, system-level optimization.

With this approach, developers will write their software against a software development kit that determines the best way to run a given workload on the CPU, GPU, or machine learning (ML) hardware within a given power budget, and how to optimize it for the best rendering and performance, said Paul Williamson, a vice president at Arm.
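As a rough illustration of that idea (this is not Arm's SDK; the function, types, and thresholds below are invented for the sketch), such a runtime might choose an execution target from the workload type and the available power budget:

```c
/* Hypothetical sketch of the dispatch decision a Total Compute-style SDK
 * might make: pick CPU, GPU, or NPU for a workload given a power budget.
 * All names and thresholds are illustrative and not part of any Arm API. */
#include <stdio.h>

typedef enum { BACKEND_CPU, BACKEND_GPU, BACKEND_NPU } backend_t;

static backend_t select_backend(int is_ml_workload, double power_budget_w)
{
    if (is_ml_workload && power_budget_w >= 2.0)
        return BACKEND_NPU;   /* dedicated ML hardware when the budget allows */
    if (!is_ml_workload && power_budget_w >= 3.0)
        return BACKEND_GPU;   /* e.g. rendering-heavy work                    */
    return BACKEND_CPU;       /* otherwise fall back to the CPU               */
}

int main(void)
{
    static const char *names[] = { "CPU", "GPU", "NPU" };
    printf("ML inference @ 4.0 W -> %s\n", names[select_backend(1, 4.0)]);
    printf("3D rendering @ 1.0 W -> %s\n", names[select_backend(0, 1.0)]);
    return 0;
}
```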

As part of Total Compute, Arm and Unity Technologies are expanding their strategic partnership to further improve performance.

Foreseeably, whether for VR headsets or wearables, smartphones or DTVs, Total Compute will play an important role: a comprehensive computing approach that simplifies security, improves performance and efficiency, and gives developers more ways to tap performance across the Arm ecosystem, ultimately enabling truly immersive digital experiences.

3 The next generation bets on machine learning

Arm also announced Matterhorn, the code name for its next-generation Cortex-A core. Simon Segars, CEO of Arm, has said that general matrix multiply (MatMul) support will be added to Matterhorn, doubling its machine learning performance compared with the Cortex-A77.
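The workload such instructions target is the widened-accumulation matrix multiply at the heart of neural-network inference. The minimal C sketch below (plain scalar reference code, not Arm's implementation, with arbitrary dimensions and values) shows the inner loop that MatMul hardware is designed to accelerate.

```c
/* Minimal sketch of an int8 matrix multiply with int32 accumulation, the
 * kind of kernel that MatMul-style instructions accelerate. Illustrative
 * scalar code only. */
#include <stdint.h>
#include <stdio.h>

static void gemm_int8(const int8_t *a, const int8_t *b, int32_t *c,
                      int m, int n, int k)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++) {
            int32_t acc = 0;                       /* widened accumulator */
            for (int p = 0; p < k; p++)
                acc += (int32_t)a[i * k + p] * (int32_t)b[p * n + j];
            c[i * n + j] = acc;
        }
}

int main(void)
{
    const int8_t a[2 * 3] = { 1, 2, 3, 4, 5, 6 };
    const int8_t b[3 * 2] = { 7, 8, 9, 10, 11, 12 };
    int32_t c[2 * 2];

    gemm_int8(a, b, c, 2, 2, 3);
    printf("%d %d\n%d %d\n", c[0], c[1], c[2], c[3]);  /* 58 64 / 139 154 */
    return 0;
}
```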

Arm will also add new security measures throughout the CPU core and caches. These security extensions will provide control over pointer authentication, along with branch target identification and memory tagging extensions. Arm also plans to offer EL2 support that complies with the Platform Security Architecture (PSA) alongside these new features.
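For application developers, these features largely surface as toolchain options rather than source changes. The sketch below shows how AArch64 code might opt in when built with GCC or Clang; the exact flags and the compiler and architecture versions required are assumptions to check against the toolchain documentation, not an official Arm recipe.

```c
/* Illustrative only: building AArch64 code with the security features above.
 * Flag availability depends on compiler and target; verify before relying
 * on these commands.
 *
 *   # pointer authentication + branch target identification
 *   cc -mbranch-protection=standard -o app app.c
 *
 *   # memory tagging needs an Armv8.5-A (or later) target with MTE
 *   cc -march=armv8.5-a+memtag -o app app.c
 */
#include <stdio.h>

int main(void)
{
    puts("hello from a hardened build");
    return 0;
}
```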

Since the Cortex-A73, Arm has gradually improved the machine learning (ML) performance of its CPUs, hoping to significantly expand the use of machine learning on the CPU.

4 Expanding edge computing power, emphasizing security needs

Arm also released Project Cassini, which aims to ensure a cloud-native computing experience across a diverse and secure edge ecosystem, including open platform standards and reference system designs, a cloud-ecosystem software stack, and a PSA-based design for edge security infrastructure.

Arm believes the key to successfully deploying applications that leverage the AI edge is to provide diverse solutions that cover a wide range of power and performance needs; no single-vendor solution can meet them all. In addition to being AI-centric, the AI edge must be cloud-native and virtualized (VMs or containers) while supporting multiple users. Most importantly, it must be secure.

The solutions that currently make up the infrastructure edge come from an extremely diverse ecosystem that is also changing rapidly to meet these newly emerging needs.

To cope with these changes at the AI edge, Arm announced the launch of Project Cassini: an industry initiative focused on ensuring cloud-native experiences across a diverse and secure edge ecosystem.

Project Cassini will focus on the infrastructure edge, developing platform standards and reference systems on which cloud-native software stacks can be deployed seamlessly within the Platform Security Architecture (PSA) framework, which has been extended to the infrastructure edge.

Two years ago, Arm launched PSA, allowing companies to design security features against a common set of requirements and reducing the cost, time, and risk of creating product-grade IoT security. Project Cassini now extends PSA to the infrastructure edge, with the goal of standardizing the most fundamental security requirements.