"While Waymo is now providing over 150,000 autonomous rides every week, Tesla still has a long way to go until its controversial “Full Self-Driving” software is ready for the EV maker’s competing robotaxi service.
Just this week, a Tesla driver plowed through a deer without even a hint of slowing down with the $8,000 add-on feature turned on, and another smashed into someone else’s car when its owner employed its Summon feature."…
Even worse, LIDAR isn’t even that expensive. Musk just thought Tesla should be able to do without it because “humans do it with just eyes.”
Not a fan of Musk at all, but lidar is quite expensive. A 64-line lidar with 100 m+ range cost about $30k a few years ago (not sure how prices have changed since). The long-range lidar on top of the Waymo car is probably even higher resolution than that. The sensor suite plus compute platform on the Waymo car likely costs far more than the Jaguar base vehicle itself, even though Waymo manufactures its own lidars. I think it would have been impossible to keep Teslas within the general public’s price range if they had gone that route. Of course, deploying a self-driving/L2+ solution without that sensor fidelity is also questionable.
I agree that perception models will not be able to deal with this well for a while; they are just not good enough at estimating depth. That said, a few other companies have also attempted “vision-only” solutions. TuSimple (the autonomous trucking company) argued at one point that lidar didn’t offer enough range for their use case, since semi trucks need much more time to slow down and react to events ahead because of their massive inertia.
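To put rough numbers on that range argument: stopping distance is reaction distance plus v²/2a, so a truck’s lower achievable deceleration pushes the required sensing range well past 100 m. Here’s a quick back-of-the-envelope sketch in Python; every speed and deceleration value is an illustrative assumption of mine, not a figure from TuSimple or anyone else:

```python
# Rough stopping-distance sketch showing why long sensor range matters
# more for heavy trucks. All numbers below are illustrative assumptions.

def stopping_distance_m(speed_kmh: float, reaction_s: float, decel_ms2: float) -> float:
    """Reaction distance plus braking distance (v^2 / 2a), in meters."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * reaction_s + v * v / (2.0 * decel_ms2)

# Assume a passenger car manages ~7 m/s^2 of braking and a loaded semi
# closer to ~3 m/s^2, both at ~105 km/h with 1 s of reaction time.
car = stopping_distance_m(speed_kmh=105, reaction_s=1.0, decel_ms2=7.0)
truck = stopping_distance_m(speed_kmh=105, reaction_s=1.0, decel_ms2=3.0)

print(f"car:   {car:.0f} m")   # ~90 m, inside a 100 m lidar's range
print(f"truck: {truck:.0f} m") # ~170 m, well beyond it
```

Under these assumed numbers the truck needs roughly double the car’s stopping distance at highway speed, which is the gist of the range complaint.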
Yeah, we used to joke that if you wanted to sell a car with high-resolution LiDAR, the LiDAR sensor would cost as much as the car. I think others in this thread are conflating the price of other forms of LiDAR (usually sparse and low-resolution, like that on 3D printers) with that of dense, high-resolution LiDAR. However, the cost has definitely still come down.
I agree that perception models aren’t great at this task yet. IMO monodepth never produces reliable 3D point clouds, even though the depth maps and metrics look reasonable. MVS does better but is still prone to errors. I do wonder whether any companies are considering depth completion with sparse LiDAR instead; the papers I’ve seen on that topic usually produce much more convincing point clouds.
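For anyone unfamiliar, depth completion takes an image-aligned depth map that is empty everywhere except where lidar returns landed and densifies it. Below is a minimal toy sketch using plain interpolation (numpy/scipy) just to show the input/output shapes of the problem; real systems use learned completion networks, and this is entirely my own illustration, not any company’s pipeline:

```python
# Toy depth completion: densify a sparse lidar depth map by interpolating
# over the image grid. Real systems use learned models; this only shows
# the shape of the problem. Requires numpy and scipy.
import numpy as np
from scipy.interpolate import griddata

def complete_depth(sparse_depth: np.ndarray) -> np.ndarray:
    """sparse_depth: HxW array, 0 where there is no lidar return.
    Returns a dense HxW depth map."""
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)  # pixels that have a lidar measurement
    values = sparse_depth[ys, xs]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    dense = griddata((ys, xs), values, (grid_y, grid_x), method="linear")
    # Linear interpolation leaves NaNs outside the convex hull of the
    # samples; fill those with the nearest measured depth.
    nearest = griddata((ys, xs), values, (grid_y, grid_x), method="nearest")
    dense[np.isnan(dense)] = nearest[np.isnan(dense)]
    return dense

# Toy input: ~2% of pixels carry a lidar depth (in meters), rest unknown.
rng = np.random.default_rng(0)
sparse = np.zeros((48, 64))
mask = rng.random(sparse.shape) < 0.02
sparse[mask] = rng.uniform(2.0, 80.0, mask.sum())
dense = complete_depth(sparse)
print(dense.shape, float(dense.min()), float(dense.max()))
```

The interpolation here is obviously far too naive for driving; the point is just that a few thousand trusted range measurements anchor the scale, which is exactly what monodepth lacks.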
Even my cheap Chinese vacuum cleaner has it.