Self-driving cars are no longer fantastical inventions of the future. They are on the roads and already causing accidents.

Marketed as a safer alternative to human drivers, self-driving cars still lack the reasoning and contextual knowledge of humans. With self-driving test cars on the roads throughout the country, this poses a huge safety concern. In fact, two deaths are already connected to the Tesla Model S autopilot.

How Far Are We from Owning Self-Driving Cars?

Self-driving cars are already here. California alone has approved 111 car models for testing.

Google, Uber, Tesla, Ford and others are on a “fast track” to mass manufacturing self-driving cars. A BI Intelligence report predicts that there could be as many as 10 million self-driving cars on the road by 2020.

We may be a few years away from mass production, but drivers are interacting with self-driving technology right now. Since October 2014, Tesla Model S owners have served as beta test drivers for the company’s autopilot software.

Companies like Uber and Google are actively testing cars that will eliminate the need for driver intervention altogether. Uber launched its first fleet of self-driving cars in Pittsburgh and recently purchased Otto, a self-driving truck startup.

Meanwhile, self-driving Google cars can already be seen up and down Silicon Valley’s roads.

Though both companies’ test vehicles still carry licensed drivers who are ready to intervene if necessary, neither company intends for that to be a permanent requirement.

We Aren’t Ready to Ditch Human Drivers Yet

The technology required for fully autonomous cars is extremely complex, which is why human intervention is still necessary. It isn’t enough for the software to detect objects in front of the vehicle; it must also be able to mimic human reasoning and even ethics.

At Stanford, engineers and philosophers are partnering to design algorithms that address these concerns. Some of the decisions they encounter are complex ethical dilemmas: Should a self-driving car prioritize the safety of a child that runs into the street over the safety of its passengers?

When humans are behind the wheel, they simply react to these situations: hitting another car, for example, rather than swerving into a mass of pedestrians. Self-driving cars, by contrast, are being programmed to make these decisions in advance, and in some cases the result could look more like premeditated homicide than a reflexive reaction.

Self-Driving Cars Still Lack Visual Intelligence

In addition to developing “moral compasses” for self-driving cars, engineers must make the software “visually intelligent.”

In May 2016, Tesla’s autopilot feature failed to detect an 18-wheeler because of the truck’s height and a glare. Tesla addressed this deadly mistake in its September software update, which improves obstacle detection by relying not just on the car’s cameras but also on its onboard radar.

Tesla described the difficulty of detecting obstacles, explaining that an object’s material and angle can determine whether something like a soda can is registered as harmless road debris or as an obstacle that requires the car to slam on its brakes.

To improve the “vision” of autonomous cars, researchers at Princeton and Stanford launched ImageNet, a repository of 14 million categorized images. But image recognition is roughly where the software stands today. Experts say it is still far from the visual intelligence of humans, which combines recognition with reasoning and contextual knowledge.
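
To make concrete what “image recognition” means in practice, here is a minimal sketch that labels a single photo using a model pretrained on ImageNet’s categories. The library (torchvision), model (ResNet-50), and file name are illustrative assumptions on our part, not software any automaker has confirmed using; the point is that this kind of system can name what appears in a frame, but naming an object is not the same as reasoning about it.

```python
# Minimal sketch of ImageNet-style image recognition, assuming torchvision
# and a pretrained ResNet-50 (illustrative choices, not any automaker's stack).
import torch
from torchvision import models
from PIL import Image

# Load a model pretrained on ImageNet's object categories.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # standard ImageNet preprocessing

img = Image.open("road_scene.jpg")     # hypothetical input frame
batch = preprocess(img).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.topk(1)
# Prints a label such as "trailer truck" with a confidence score;
# this is recognition only, with no reasoning about what to do next.
print(weights.meta["categories"][top_idx.item()], f"{top_prob.item():.2%}")
```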

This is why self-driving cars generally operate only in specific environments. The Uber cars in Pittsburgh, for example, are limited to a roughly 12-square-mile area downtown (an area for which Uber has extremely detailed maps).

Tesla’s Model S Involved in Two Deadly Accidents

Though still in beta testing, Tesla’s Model S autopilot has been linked to two deaths so far. The first accident occurred in China in January 2016, killing driver Gao Yaning. It isn’t clear whether autopilot was enabled at the time of the crash; even so, a lawsuit has been filed against Tesla on behalf of Gao Yaning’s father.

Four months after that accident, Joshua Brown’s Model S crashed into an 18-wheeler while the autopilot feature was engaged. Tesla stated that the car didn’t detect the truck because of its height and a glare from the bright sky. This is the first confirmed death caused by the Model S autopilot.

Google Argues That Self-Driving Cars Are Safer Than Humans

Accidents aren’t limited to Tesla’s Model S, though. A self-driving Google car caused a crash in Mountain View in February 2016 when it hit a bus. The test driver of the vehicle failed to intervene, assuming that the bus driver was going to stop.

Another Google car was involved in a crash in September 2016, though it was the other party’s fault. A vehicle ran a red light in Mountain View, hitting the Google car in an intersection.

Google uses these incidents to argue that automobile accidents are caused by human error rather than manufacturing errors. This is why Google’s self-driving cars will not feature steering wheels or pedals, removing the option for passengers to intervene. According to the company, this will allow for the safest experience, as 94% of automobile accidents are caused by human error.

In the Future, Hackers May Cause Car Accidents 

Developers of self-driving cars largely market the products based on their ability to drastically reduce accidents. However, this doesn’t mean that automobile accidents will be a thing of the past. Instead, hackers may cause crashes in the future.

Tesla has already experienced this firsthand. In September 2016, Chinese security researchers uncovered vulnerabilities in Tesla’s systems that allowed them to unlock car doors, open sunroofs, and reposition seats. Tesla resolved the issue ten days later with a security update.

Though these exploits seem relatively harmless now, it’s easy to imagine hackers causing cars to drive off the road or crash into other vehicles.

Contact Us

Classaction.com attorneys have extensive experience with automotive litigation, including lawsuits over Takata airbags, GM ignition switches, and Volkswagen emissions fraud.

If you were in an accident involving a self-driving car, we want to hear from you. Contact us for a free, no-obligation case review.