Tesla’s Autopilot has been no stranger to news headlines in recent years. On the one hand, Tesla CEO Elon Musk has said fully autonomous self-driving robo-taxis are the future of his company. On the other, the current semi-autonomous Autopilot system found in new Teslas has garnered plenty of attention, not only via reports of lurid crashes, but also via investigations into how and why the technology may have been involved in those collisions.

So far, the cause, severity, and outcomes of many of these crashes have not been made available to the public. However, thanks to a recent investigation by the Wall Street Journal, we now have much more information detailing how these incidents occurred, as well as some of the outcomes. A word of warning for those clicking through to watch the WSJ’s video: some of the imagery is taken directly from the cars involved in the collisions, and it may be disturbing for some viewers.

Essentially, the WSJ’s investigators started with the Autopilot crashes Tesla has been obligated to report to the U.S. National Highway Traffic Safety Administration (NHTSA) — some 1,000-plus of them since 2021. The public-facing information, says the Journal, is unfortunately heavily redacted, with Tesla claiming the blacked-out sections — a copious proportion of the documents available, according to the video — are hidden because they’re proprietary. That redaction also makes it difficult, if not impossible, to ascertain the exact circumstances and cause of any given collision.

What the Journal did, however, was to gather crash data from individual states — presumably unredacted, though the investigation does not specify this — and use it to piece together information on 222 of those 1,000-plus crashes. According to the investigation, “44 of the crashes occurred when Teslas in Autopilot veered suddenly, and 31 occurred when a Tesla in Autopilot failed to stop or yield for an obstacle in front of it.”

One of those obstacles killed Steven Hendrickson, a 35-year-old father of two. He was in his Tesla, with Autopilot engaged, in Fontana, California, in the early morning of May 5, 2021. According to the Journal’s investigation — and this is where the video gets both detailed and lurid — an overturned semi-truck appeared in front of him, and “moments later, he was killed.”

This photo, taken on January 4, 2022, shows a vehicle with Luminar LIDAR-based Proactive Safety pre-collision braking crash-avoidance technology (right) stopping to avoid a child-sized test dummy on a test track, while a Tesla Model Y collides with the dummy (left) at the Las Vegas Convention Center ahead of the Consumer Electronics Show (CES) in Las Vegas, Nevada. Photo by Patrick T. Fallon/Getty

According to the WSJ, Tesla’s reliance on cameras — the approach differs from most of the rest of the semi-autonomous industry, which uses LIDAR (Light Detection and Ranging) technology — “is putting the public at risk.”

Phil Koopman, associate professor of electrical and computer engineering at Carnegie Mellon University, told the Journal’s reporters that machine learning works by training on large numbers of examples. So, while “a person would have said something was in the middle of the road” if presented with an overturned trailer, a computer might never have seen those parameters before. In other words, “it might have no idea what’s going on,” since it might not have been “trained on pictures of an overturned double trailer.”
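To make Koopman’s point concrete, here is a deliberately simplified sketch — the class names and numbers are invented for illustration and have nothing to do with Tesla’s actual software — of how a classifier trained only on familiar object categories handles something it has never seen:

```python
# Illustrative sketch only. It shows why a system trained on a fixed set of
# examples can mishandle an object it was never trained on: the classifier
# must spread its probability over the categories it knows, so an overturned
# trailer gets forced into whichever familiar category looks closest.
import math

KNOWN_CLASSES = ["car", "pedestrian", "upright_truck", "road_clear"]  # hypothetical labels

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Pick the highest-probability known class; there is no 'I don't know' option."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return KNOWN_CLASSES[best], probs[best]

# Invented logits for an unfamiliar scene (say, an overturned double trailer):
# its features happen to resemble "road_clear" more than any obstacle class.
label, confidence = classify([0.2, -1.0, 0.5, 1.3])
print(label, round(confidence, 2))   # road_clear 0.52 -> no reason to brake
```

Because the toy model has no “unknown obstacle” option, the unfamiliar scene simply gets mapped onto whichever familiar category the arithmetic favours — which is, in rough terms, the failure mode Koopman describes.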

For those with a long memory, this smacks of the crash that killed 40-year-old Joshua Brown in May 2016. In that case, the semi-trailer in question wasn’t overturned but, since it was pulling into the road, it was similarly perpendicular to his Tesla Model S’ direction of travel. According to Electrek, Brown’s car “ended up going underneath the trailer, and Autopilot kept driving another significant distance before coming to a stop.” In both cases, Autopilot seems not to have recognized the image of a trailer’s side as a danger.

The WSJ video provides more examples of Teslas (self-)driving badly, including one running off the road at the end of a T-intersection and another driving into a stopped police car attending to a disabled vehicle. None of this information is new, although the WSJ’s collection of video from the Teslas involved in the actual incidents is compelling.

According to the report, Tesla, for its part, says drivers using Autopilot must be ready to take control at all times, and that Hendrickson, for example, was warned no fewer than 19 times to keep his hands on the wheel. Nonetheless, the Journal says Tesla’s reliance on cameras (with some radar backup) remains a limitation for its semi-autonomous systems. Musk has called the LIDAR sensors that most other semi-autonomous systems use both “expensive” and “unnecessary.”

The problem is that while “people think these cars are learning while they’re driving, they’re not.” So says Missy Cummings, director of Duke University’s Humans and Autonomy Lab (HAL), who adds that the systems are in fact only learning “at the rate the companies are deciding to retrain them.” Tesla’s cars may indeed be constantly gathering data, but none of that information can be used until the company’s engineers analyze it and decide how and when to update the software. According to Cummings, “computer vision is such a deeply flawed technology, and we’re not doing enough to determine how to fix its gaps.”
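The distinction Cummings draws — continuous data-gathering versus periodic, engineer-driven retraining — can be sketched in a few lines of illustrative Python. The names and structure below are entirely hypothetical and are not a description of Tesla’s pipeline:

```python
# Hypothetical sketch of "the fleet only learns when engineers retrain it."

class DeployedModel:
    def __init__(self, version: str):
        self.version = version                 # the software actually running in the car

    def predict(self, camera_frame):
        return f"decision from model {self.version}"   # behaviour is frozen per version

fleet_logs = []                                # data the cars upload continuously

def drive_one_frame(model: DeployedModel, frame):
    fleet_logs.append(frame)                   # gathering data changes nothing on the road...
    return model.predict(frame)                # ...the car keeps running the same frozen model

def engineers_retrain(old: DeployedModel) -> DeployedModel:
    # Only this offline step -- run whenever the company decides -- changes behaviour.
    return DeployedModel(version=old.version + ".1")

car = DeployedModel("2021.4")
for frame in ["frame_a", "frame_b"]:
    drive_one_frame(car, frame)                # the logs grow; the decisions don't improve
car = engineers_retrain(car)                   # behaviour changes only after a pushed update
```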

Perhaps the Journal’s greatest revelation, however, is that one of the main issues with Autopilot arises when the car’s multiple onboard cameras don’t agree on what they’re seeing. According to John Bernal, a former Tesla employee who worked on the annotation of Autopilot data, the various onboard cameras will sometimes report different positions for the same obstacle and, in some cases, won’t see the same objects at all: “What looks true in one camera will not be true in another camera.”
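Bernal’s description maps onto a familiar software failure mode. The toy sketch below — hypothetical names and thresholds, not Tesla’s fusion code — shows how an obstacle can be discarded when two cameras report it differently, or when only one of them sees it at all:

```python
# Hedged sketch of the failure mode Bernal describes: two cameras report the
# same scene differently, and a naive fusion step that demands agreement
# ends up discarding the obstacle entirely. Entirely hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str
    distance_m: float                          # estimated distance ahead, in metres

def fuse(cam_a: Optional[Detection], cam_b: Optional[Detection],
         max_disagreement_m: float = 5.0) -> Optional[Detection]:
    """Keep a detection only if both cameras place it at roughly the same spot."""
    if cam_a is None or cam_b is None:
        return None                            # one camera misses it -> obstacle dropped
    if abs(cam_a.distance_m - cam_b.distance_m) > max_disagreement_m:
        return None                            # cameras disagree on position -> dropped
    return Detection(cam_a.label, (cam_a.distance_m + cam_b.distance_m) / 2)

# The main camera sees a crashed pickup ahead; a second camera does not.
print(fuse(Detection("pickup", 60.0), None))   # None -> the planner sees a clear road
```

In that toy world, demanding agreement between cameras turns one camera’s blind spot into the whole system’s blind spot.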

Rescue workers proceed with caution around the spot where a Tesla slammed into a tree in Baarn, Netherlands, on September 7, 2016. Photo by Robin Van Lonkhuijsen/Getty

In one collision for which the WSJ managed to gather the raw data from the car’s computer, one onboard camera could see a crashed pickup in the road ahead, but another could not; as the Tesla got closer, Autopilot simply didn’t recognize the risk the object presented. According to the Journal, the car then “[crashed] at full speed.”

In the end, the issue is how Tesla markets its semi-autonomous driving technology, and whether its hardware-software package is capable of the fully autonomous self-driving Musk says is the company’s future without upgrading to LIDAR sensors. Motor Mouth has long contended this is a mistake. The Wall Street Journal’s report certainly provides compelling video to back up that assertion.

The sad thing is that, as Hendrickson’s widow Janell says, “People are still going to buy Tesla. They’re still going to support Elon Musk. It doesn’t matter how many accidents there are. But at least understand your car and your car’s capability before you just put your entire life behind that.”
