For autonomous driving built mainly on deep learning, the training dataset is the most critical element: the algorithms are broadly similar, many are open source, and the algorithm alone cannot set one system apart. Deep learning is a black box, so some joke that it is alchemy. Although deep learning is not interpretable, the dataset correlates with the final result; like the raw material for alchemy, it is what separates the strong from the weak. The wider a training dataset's coverage, the more detailed its labeling, and the more accurate and numerous its classes, the better the final autonomous driving performance.

Image source: Internet

Because the training dataset is so important, most car companies organize their work around it; the dataset is the source of power. The picture above shows the autonomous driving development workflow of Amazon Web Services (AWS), whose key link is the collection and processing of data. Large car companies keep their own separate training datasets, and their hardware investment goes mainly into expensive data centers, which can cost hundreds of millions of dollars and carry substantial maintenance and operating costs as well.

Image source: Internet

For those who do not want to build their own data center, companies such as Amazon, Microsoft and Dell provide cloud data-center services. The picture above shows Amazon's cloud solution for autonomous driving datasets.

The world's first training dataset, KITTI, was jointly developed by the Karlsruhe Institute of Technology in Germany and the Toyota Technological Institute at Chicago. It remains the earliest and most authoritative benchmark recognized worldwide in the field of autonomous driving. The dataset is used to evaluate the in-vehicle performance of computer vision technologies such as stereo vision, optical flow, visual odometry, 3D object detection and 3D tracking. KITTI contains real image data collected from urban, rural and highway scenes, with up to 15 vehicles and 30 pedestrians per image and various degrees of occlusion and truncation. The whole dataset consists of 389 pairs of stereo and optical flow images, 39.2 km of visual odometry sequences, and over 200k 3D-labeled objects, sampled and synchronized at 10 Hz.

Overall, the raw data is classified into Road, City, Residential, Campus and Person scenes. For 3D object detection, labels are subdivided into car, van, truck, pedestrian, pedestrian (sitting), cyclist, tram and miscellaneous. The stereo camera baseline of the acquisition car is 54 cm, and the onboard computer is an Intel Xeon X5650 CPU with a 4 TB RAID 5 array. Collection took place in late September and early October 2011, over roughly five days.

Many autonomous driving companies have disclosed parts of their training and validation datasets: Argoverse from Argo AI (the Volkswagen and Ford joint venture), Waymo Open, Baidu's ApolloScape, Audi's A2D2, Mercedes-Benz's Cityscapes, Nvidia's PilotNet, Honda's H3D, Aptiv's nuScenes, Lyft's Level 5, and Uber's. A number of well-known universities have done the same, including MIT, Cambridge, Oxford, Berkeley, Caltech, CMU, the University of Sydney, Michigan, Germany's Ruhr (traffic lights), York University in Canada (JAAD), Stanford, and Xi'an Jiaotong University. The most influential of these are KITTI, Waymo Open, and Aptiv's nuScenes.

Strictly speaking, these public datasets are benchmarks; they are not the self-use training datasets that enterprises run for commercial purposes. Those datasets are an enterprise's most core assets and will not be disclosed. A benchmark is usually divided into three parts: about 70% for training, 20% for testing, and 10% for validation. The training data is like a textbook, the test set like a final exam, and the validation set like a practice quiz. Apart from not being enterprise training sets, the main difference is scale: the enterprises' own datasets are much larger. Enterprises disclose these benchmarks and let third parties use them mainly to surface higher-performance deep learning models, and secondarily to improve or correct their own internal training datasets.
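As an illustration of that 70/20/10 split (a minimal sketch with a made-up sample count, not any dataset's official protocol):

```python
import numpy as np

rng = np.random.default_rng(42)
indices = rng.permutation(10_000)            # pretend we have 10,000 samples
n_train = int(0.7 * len(indices))            # ~70%: the "textbook"
n_test = int(0.2 * len(indices))             # ~20%: the "final exam"
train_idx = indices[:n_train]
test_idx = indices[n_train:n_train + n_test]
val_idx = indices[n_train + n_test:]         # remaining ~10%: the "practice quiz"
print(len(train_idx), len(test_idx), len(val_idx))  # 7000 2000 1000
```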

What I want to introduce today is Huawei's ONCE (One millioN sCenEs). Paper: arXiv:2106.11037v3; dataset: http://www.once-for-auto-driving.com or https://github.com/once-for-auto-driving/. ONCE was jointly created by The Chinese University of Hong Kong, Huawei's Noah's Ark Laboratory, the Vehicle Cloud Service Department of Huawei's Intelligent Automotive Solutions business unit, Sun Yat-sen University, and the Swiss Federal Institute of Technology.

Image source: Internet

ONCE has the most scenes, up to 1 million, with 144 hours of driving, 210 square kilometers covered, and 7 million synchronized images, yet only 417k 3D boxes. Waymo, by contrast, has as many as 12 million 3D boxes, which is indeed unmatched. One reason ONCE's annotation count is so small is that Huawei believes training on unlabeled data is also valuable; another is that more labels mean higher cost. A few more words are needed here about annotations and tags:

  • Annotation means the sample is supplied with ground truth, which can only be produced manually with the help of lidar or stereo cameras: the distance measured by lidar or a stereo pair is a physical measurement rather than a probabilistic inference, so it can serve as a truth value, and lidar's 3D coordinates, likewise physically measured, can serve as ground truth as well. If a machine could already predict at ground-truth level, there would be no need to train it.
  • Tag means label, and the term is sometimes mixed up with annotation. A tag may be just one element of an annotation: a picture of a cat may carry the tag "cat", but without a bounding box it cannot be considered ground truth.

Many places mention machine or automatic labeling, but in practice this means an annotation tool such as LabelMe, and manual work is still required. Labeling a single image reportedly costs 0.8 to 1 yuan. Because labeling is manual, errors inevitably occur, so a sample is usually labeled multiple times, that is, several people annotate the same sample, to minimize errors. A typical example is the University of Toronto's Boreas dataset: a 128-line lidar running at 5 Hz, whose 7,111 point-cloud frames carry 326,180 3D boxes.
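A minimal sketch of how multiple annotators' class labels might be reconciled by majority vote (the labels and review rule here are hypothetical; real pipelines also adjudicate box geometry):

```python
from collections import Counter

def consensus(labels):
    """Return the majority label and its vote share across annotators."""
    tag, votes = Counter(labels).most_common(1)[0]
    return tag, votes / len(labels)

tag, share = consensus(["car", "car", "van"])
print(tag, round(share, 2))   # car 0.67 -> keep the label, but flag for review
```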

Image source: Internet

ONCE uses labeling software developed by Huawei itself; the interface is shown in the picture above.

Typical ONCE 3D annotation. Image source: Internet

Image source: Internet

The picture above shows the readme.txt of the KITTI annotation files, which is stored in the object development kit (1 MB) package. The readme describes each sub-dataset's sample size, number of label categories, file organization, label format, evaluation method and related details. It also makes clear that the IMU serves mainly to keep data timestamps consistent and to establish a unified coordinate system tying together all sensor and local coordinate systems. ONCE presumably adopts a similar layout, with annotations stored separately in text files.
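For reference, a minimal parser for one line of a KITTI object label file, following the 15-field format documented in that devkit readme (the sample line is illustrative):

```python
def parse_kitti_label(line: str) -> dict:
    f = line.split()
    return {
        "type": f[0],                               # e.g. Car, Pedestrian, Cyclist
        "truncated": float(f[1]),                   # 0..1, fraction leaving image bounds
        "occluded": int(f[2]),                      # 0 visible, 1 partly, 2 largely occluded
        "alpha": float(f[3]),                       # observation angle [-pi, pi]
        "bbox": [float(x) for x in f[4:8]],         # 2D box: left, top, right, bottom (px)
        "dimensions": [float(x) for x in f[8:11]],  # 3D size: height, width, length (m)
        "location": [float(x) for x in f[11:14]],   # x, y, z in camera coordinates (m)
        "rotation_y": float(f[14]),                 # yaw around the camera Y axis
    }

print(parse_kitti_label(
    "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"))
```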

Image source: Internet

The ONCE dataset includes RGB images synchronized with lidar, covering various weather conditions.

Main sensor comparison. Image source: Internet

Any autonomous driving training-data acquisition vehicle must carry a lidar. Lidar usually supplies the distance ground truth, and the 3D coordinates and dimensions of 3D boxes are likewise inseparable from it. Besides the complex sensor and positioning configuration, an acquisition vehicle carries an expensive data acquisition and processing system; the equipment on one vehicle generally costs upwards of 1 million yuan, and the data uploaded each day can reach terabytes. Take Tesla as an example: set aside that Tesla has no 5G module for uploading massive data, and set aside the privacy issues around license plates; the absence of lidar alone disqualifies it, and Tesla's shadow mode is, in this author's view, pure nonsense. The ONCE dataset specifically states that collection stayed within permitted bounds and that any personal or positioning information, especially license plates and faces, was actively deleted. In China, any data collection for commercial purposes must be approved by the state, and the data is not allowed to be sent abroad.

Image source: Internet

Huawei's data acquisition vehicle sensor configuration: inferred from the output point count, the lidar appears to be Velodyne's 128-line unit in three-return mode, while the ranging range and accuracy suggest Hesai's Pandar128, which outputs 6.91 million points per second in dual-return mode. Huawei's ONCE has the highest lidar point density of all autonomous driving training datasets, although its camera resolution is not high.
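A quick sanity check on that figure, assuming the Pandar128's published single-return rate of about 3,456,000 points per second (a spec recalled from the datasheet, not stated in this article):

```latex
2 \times 3{,}456{,}000 \ \text{pts/s (single return)} \approx 6.91 \times 10^{6} \ \text{pts/s (dual return)}
```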

Image source: Internet

Weather coverage of several datasets. Waymo collects mainly in Phoenix, which sits on the edge of the desert and sees almost no rain all year, so Waymo's data is overwhelmingly sunny. Phoenix has another peculiarity: despite a population of 1.56 million, it has only some twenty high-rise buildings, and most people live in detached houses. That makes it very convenient for autonomous driving, with no GPS blockage from high-rises and no large shadowed areas. (Intel, incidentally, has a major campus in the Phoenix area.) BDD100K, a collaboration between UC Berkeley and Cornell University, does well here; its drawback is its small size, only 100k scenes, but it is the only dataset among these covering snow. Those interested can search for "BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning". ONCE is broken out by time of day, with morning the most common, followed by afternoon and night.

Image source: Internet

Dataset mean-average-precision comparison. Each model's quality is judged by its performance on a dataset usually called the validation/test set, measured with statistics such as accuracy, precision and recall. The most commonly used metric for object detection is mean Average Precision (mAP). To understand mAP, first understand IoU, "Intersection over Union", the very intuitive correctness measure for a predicted bounding box: for each category, the overlap between the predicted box and the reference box is the intersection, and the total region spanned by the two boxes is the union. To decide whether a detection is correct, the most common threshold is 0.5: if IoU ≥ 0.5 the detection counts as correct, otherwise as an error. Suppose the dataset has 20 categories. For each category we do the same thing: compute IoU, then precision, then average precision, yielding 20 average precision values. With these it is easy to judge the model's performance for any given category, and their mean is the mAP.
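A minimal IoU check matching that description (boxes as [x1, y1, x2, y2] in pixels; the sample boxes are made up):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, gt = [50, 50, 150, 150], [60, 60, 160, 160]
print(iou(pred, gt))            # ~0.68
print(iou(pred, gt) >= 0.5)     # True -> counted as a correct detection
```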

ONCE's biggest feature is that it caters to self-supervised learning, semi-supervised learning, and unsupervised domain adaptation. Its 417k 3D box annotations are far fewer than Waymo's because annotation is expensive and usually manual. Tesla does have machine auto-labeling, but its accuracy is too low, so manual labeling remains mainstream. Hence the joke that there is only as much intelligence as there is manual labor: so-called artificial intelligence depends specifically on manual annotation. Huawei's labeling took more than three months.

We know that machine learning is generally divided into supervised learning, unsupervised learning and reinforcement learning, with weakly and semi-supervised learning in between.

  • Autonomous driving is basically supervised learning: the data and its one-to-one corresponding labels are known, i.e. the training dataset must be fully labeled.
  • Unsupervised learning: the data carries no annotations; according to certain preferences, an algorithm is trained to map the data onto multiple different labels.
  • Reinforcement learning: an algorithm improves its task performance through continuous trial and error, without human guidance.
  • Weakly supervised learning: the data comes with one-to-one weak labels, and an algorithm is trained to map the input to a stronger set of labels. The strength of a label refers to how much information it contains: compared with a segmentation label, for example, a classification label is weak.
  • Semi-supervised learning: labels are known for only part of the data; an algorithm is trained on both the labeled and the unlabeled data to map inputs to labels. Semi-supervision suits data that is very difficult to label: hospital X-ray results, for instance, require a doctor's judgment, so perhaps only a few samples carry a healthy/unhealthy label. A minimal sketch follows this list.
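The sketch below shows semi-supervised self-training on toy data using scikit-learn (an illustration of the general idea, not Huawei's ONCE pipeline): a base classifier is fit on the labeled subset, then confident predictions on unlabeled samples are adopted as pseudo-labels and the model is refit.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
y_partial = y.copy()
rng = np.random.default_rng(0)
y_partial[rng.random(len(y)) < 0.9] = -1   # hide 90% of labels; -1 means "unlabeled"

base = LogisticRegression(max_iter=1000)
model = SelfTrainingClassifier(base, threshold=0.95)  # pseudo-label only confident samples
model.fit(X, y_partial)
print("accuracy against the true labels:", model.score(X, y))
```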

Image source: Internet

Huawei ONCE provides benchmarks for six 3D detection models; except for PointPainting, all are lidar-based. SECOND in the picture is Sparsely Embedded Convolutional Detection, effectively an upgraded version of the VoxelNet (2017) paper. PV-RCNN ranked first on the KITTI 3D detection leaderboard in 2021 and shares an author with PointRCNN. PV-RCNN combines the advantages of point-based and voxel-based methods, improving 3D object detection performance: voxel-based operations efficiently encode multi-scale feature representations and generate high-quality 3D proposal boxes, while point-based operations have variable receptive fields and so retain more precise position information. PointPillars has the highest efficiency and is the 3D detection algorithm commonly used in mass-produced cars.
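To make the voxel/pillar idea concrete, here is an illustrative bird's-eye-view "pillar" grouping in the spirit of PointPillars (the grid size and ranges are toy values; a real implementation adds per-point features and a learned per-pillar encoder):

```python
import numpy as np

def pillarize(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0), pillar=0.16):
    """Group lidar points (N, 3) into 0.16 m x 0.16 m bird's-eye-view pillars."""
    nx = round((x_range[1] - x_range[0]) / pillar)   # cells along x
    ny = round((y_range[1] - y_range[0]) / pillar)   # cells along y
    ix = np.floor((points[:, 0] - x_range[0]) / pillar).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / pillar).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    pillars = {}                                     # pillar id -> point indices
    for i, pid in zip(np.flatnonzero(keep), ix[keep] * ny + iy[keep]):
        pillars.setdefault(int(pid), []).append(int(i))
    return pillars

rng = np.random.default_rng(0)
pts = rng.uniform([0, -40, -1], [70, 40, 2], size=(1000, 3))
print(len(pillarize(pts)), "non-empty pillars")
```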

Performance comparison of the models on ONCE. Image source: Internet

CenterPoint performs the best but consumes the most computing resources: on Waymo's dataset, the CenterPoint model manages only about 11 frames per second even on Nvidia's Titan RTX (130 Tensor TFLOPS).

Self-supervised learning results. Image source: Internet

Semi-supervised learning results are obviously better than self-supervised learning. Image source: Internet

It is not hard to see from this that the direction of Huawei's autonomous driving runs from semi-supervised learning toward unsupervised learning. The next article will introduce Waymo's Open dataset and Open Motion dataset.

Statement: This article only represents the author’s personal views.

More Zosi reports

For report ordering and cooperation consultation, please send a private message to the editor.

