According to Electrek, Tesla announced on Tuesday that it will no longer use ultrasonic sensors in its Autopilot sensor suite and will instead rely on its camera-only "Tesla Vision" system.
Last year, Tesla announced its transition to the radar-free "Tesla Vision" system and began producing vehicles without a front-facing radar.
Initially, the Autopilot sensor suite, which Tesla claims includes everything needed to eventually achieve full self-driving capability, consisted of eight cameras, a front-facing radar, and several ultrasonic sensors around the vehicle. The transition to Tesla Vision means the system relies solely on camera-based computer vision rather than input from both cameras and radar.
You might think more data would be better, but Tesla's reasoning is that roads are designed for humans, who navigate using a vision-based system: their eyes and the natural neural networks in their brains. The automaker believes it is best to replicate that approach purely with cameras and artificial neural networks, rather than letting radar data contaminate the system.
This transition has led to some driver-assistance features being restricted on vehicles without radar. For example, as of May this year, Tesla limited the Autopilot speed of vehicles equipped with Tesla Vision to 75 mph.
Now, Tesla has announced that it is going a step further, removing the ultrasonic sensors and replacing them with its Tesla Vision technology:
Today, we are taking the next step in Tesla Vision by removing ultrasonic sensors (USS) from Model 3 and Model Y. We will continue this rollout with Model 3 and Model Y globally over the coming months, followed by Model S and Model X in 2023.
Ultrasonic sensors are mainly used for short-range object detection in applications such as automatic parking and collision warnings.
Tesla explains how its vision-based neural network replaces the USS:
Along with removing the USS, we also launched our vision-based occupancy network, currently used in Full Self-Driving (FSD) Beta, to replace the inputs generated by USS. With today's software, this approach gives Autopilot high-definition spatial positioning, longer-range visibility, and the ability to identify and differentiate between objects. As with many Tesla features, our occupancy network will continue to improve rapidly over time.
Tesla confirmed that the shift to a camera-only approach will again lead to some feature limitations:
Over time, Tesla will release software updates that improve functionality and use its occupancy network to restore the parking assist and automatic parking systems, Summon, and Smart Summon.
Electrek's perspective
While this shift may be seen as another cost-cutting effort by Tesla, since it will no longer have to embed ultrasonic sensors in its body panels, the automaker genuinely believes its vision system is the better approach.
Unfortunately, Tesla has once again decided to roll out the change before it was ready to replace all of the affected features.