The reason I ask is that I’ve been seeing a lot of news and cases of Tesla’s self-driving acting up and being a point of contention. But back in 2016–17 my ex’s uncle and aunt got a Model X when they first dropped, and it “auto-drove” us like 50 miles without any noticeable issue.
Was I just gambling my life, or has the tech somehow gotten worse?
Tesla’s marketing has consistently lied to you about the state of their “full self driving”.
It’s not. It never was. This is a perfect example of “fake it till you make it.”
The tech never “evolved” in the first place. It was deformed from the start. LIDAR was developed for exactly this reason, but Tesla uses cheap-ass cameras that try to interpret what the car is seeing from visual data alone. I’m guessing, with my layman’s knowledge, that this is why they veer at semi trucks: the technology itself is based on a shitty premise.
In a non-Tesla car I’ve driven, there was an autopilot cruise control mode that just used cameras. In practice it only works out well if you’re driving long distance on a highway with low traffic. It’s still nice to have (much better than having no autopilot cruise mode) but I don’t trust it around multiple lanes of other cars doing unpredictable shit. Also quits working in the rain when the cameras are obscured.
Sounds like normal cruise control functions with fewer unexpected errors.
The first Model X had Autopilot 1, a system designed by Mobileye. Tesla’s relationship with Mobileye fell apart, and they replaced it with an Nvidia-based system in 2017(?). It was really, really bad at the start, as they were essentially starting from scratch. This system also used 8 cameras instead of the original 1.
Then Tesla released AP hardware 3, a custom-built silicon chip designed specifically for self-driving, which also enabled proper navigation of surface streets in addition to just the highway lane-keeping offered in AP1. This broadened scope (actually dealing with turns and traffic from multiple angles) is probably where the reputation for being dangerous comes from.
My HW3-enabled Model 3 does make mistakes, though it’s rarely anything like hitting a pedestrian or running off the road. Most of my issues are navigational errors. If the GPS gets messed up in a tunnel, it’ll suddenly decide to take an exit it isn’t supposed to, or it’ll get in the left lane to pass someone a quarter mile from a right exit.
Just a guess, but it’s probably a combination of two things. First, if we say a self-driving car is going to hit an edge case it can’t resolve once every, say, 100,000 miles, the number of Teslas and other self-driving cars on the roads now means more total miles driven, which means those edge cases are going to occur more frequently. Second, people are becoming over-reliant on self-driving: they are (incorrectly) trusting it more and paying less attention, meaning less chance of human intervention when those edge cases occur. So the self-driving is probably better overall, but the total number of accidents is increasing.
Was I just gambling my life, or has the tech somehow gotten worse?
We can safely assume that the tech has evolved for the better during this time.
But it is still too dangerous. It should not be allowed on public roads yet.
That the tech has evolved to be better is actually an assumption. The novel-data problem hasn’t been meaningfully addressed at all, so mostly we just assume progress has been made, but it’s not meaningful progress. The promises being made about future capability are mostly stale hype that hasn’t changed year to year, with a lot of the targets remaining the same. We are getting more data on where specifically and how it’s failing, which is something, but overall it looks like a plateau of non-linear progress, with some updates actually less safe than earlier ones.
That actually safe self-driving cars might be decades away, however, is antithetical to the hype-driven marketing campaigns working overtime to put up smoke and mirrors around the issue.
There are also many more Teslas on the road, and the “full self driving” incidents are more widely reported since the new ownership likes to overpromise and vastly underdeliver. Other commenters have already addressed the tech side, but a few years ago, Tesla’s FSD was found to be active right up until a split second before some high-profile collisions with emergency vehicles, leading to speculation about liability. Tesla aside, I think it’s just laziness on the part of drivers used to FSD doing the menial tasks of driving.
I appreciate these comments saying the tech hasn’t degraded and has been at a standstill, or that it was never great in the first place, all of which is true, but I’d like to interject my own Model 3 experience. When we first bought the Tesla in 2019, the self-driving functionality on the highway felt safe and functional in nominal conditions. When we sold the Tesla two years ago (2022), the self-driving felt noticeably more finicky. It struggled to switch lanes and to recognize when lanes started and ended, and it had noticeably more issues maintaining proper speed and distance from other cars.
It probably wasn’t significantly more dangerous, but it felt like it was. What was a feature we used for the first year or two without much complaint turned into something we never used. And our driving time went down in that third year, not up, so I don’t think it was just more exposure time.
That car had AP1, which used hardware developed by Mobileye. Tesla doesn’t like licensing software, so they ditched that for a homegrown solution.