Self-driving cars are no longer fantastical inventions of the future. They are on the roads and already causing accidents.


Marketed as a safer alternative to human drivers, self-driving cars still lack the reasoning and contextual knowledge of humans. With self-driving test cars on roads throughout the country, that gap poses a serious safety concern. In fact, two deaths are already connected to the Tesla Model S autopilot.

If you or a loved one were in an accident involving a self-driving car, contact our attorneys for a free, no-obligation legal review. You may be eligible for a lawsuit.


How Far Are We from Owning Self-Driving Cars?

Self-driving cars are already here. California alone has approved 111 car models for testing.

Google, Uber, Tesla, Ford, and others are on a “fast track” to mass manufacturing self-driving cars. A BI Intelligence report predicts that there could be as many as 10 million self-driving cars on the road by 2020.

We may be a few years away from mass production, but drivers are interacting with self-driving technology right now. Since October 2014, Tesla Model S (and more recently, Model X) owners have served as beta test drivers for Tesla’s autopilot software.

Companies like Uber and Google are actively testing cars that will eliminate the need for driver intervention altogether.

Uber is making headlines for its self-driving cars in Pittsburgh, San Francisco, and Arizona, which have already been filmed speeding through red lights and crashing into other vehicles.

Though Uber and Google still have licensed drivers behind the wheel who are ready to intervene if necessary, neither company intends for them to be a permanent feature. Both plan to manufacture vehicles without steering wheels or pedals, removing the option for passengers to intervene; they argue that doing so would eliminate automobile accidents caused by human error.

We Aren’t Ready to Ditch Human Drivers Yet

The technology required for fully autonomous driving is extremely complex, which is why human intervention is still necessary. It isn’t enough for the software to detect objects in front of the car; it must also be able to mimic human reasoning and even ethics.

At Stanford, engineers and philosophers are partnering to design algorithms that address these concerns. Some of the decisions they encounter are complex ethical dilemmas: Should a self-driving car prioritize the safety of a child who runs into the street over the safety of its passengers?

When humans are behind the wheel, they simply react to these situations, hitting another car, for example, rather than swerving into a crowd of pedestrians. Self-driving cars, however, are being programmed to make these choices in advance, which in some cases could look more like premeditated homicide than a split-second reaction.

Self-Driving Cars Still Lack Visual Intelligence

In addition to a “moral compass,” self-driving software must become “visually intelligent.”

In May 2016, Tesla’s autopilot failed to detect an 18-wheeler due to the truck’s height and a glare. This deadly mistake has since been addressed by a software update that no longer relies solely on the car’s cameras to detect obstacles but also draws on its onboard radar.


Tesla has described the difficulty of obstacle detection, explaining that an object’s material and angle can determine whether something like a soda can is registered as trivial debris in the road or as an obstacle that requires the car to slam on its brakes.

To improve the “vision” of autonomous cars, researchers at Princeton and Stanford launched ImageNet: a repository of 14 million categorized images. But image recognition is as far as the software has come; experts say it is still far from matching the visual intelligence of humans, which pairs recognition with reasoning and contextual knowledge.

This is why self-driving cars generally operate only in specific environments. The Uber cars in Pittsburgh, for example, are limited to a 12-square-mile zone downtown (an area for which Uber has extremely detailed maps).

Tesla’s Model S Involved in Two Deadly Accidents

Though still in beta testing, Tesla’s Model S autopilot is linked to two deaths so far. The first accident occurred in China in January 2016, killing driver Gao Yaning. It isn’t clear whether autopilot was engaged at the time of the crash, but a lawsuit has been filed against Tesla on behalf of Gao Yaning’s father.

Four months after that accident, Joshua Brown’s Model S crashed into an 18-wheeler while the autopilot feature was engaged. Tesla stated that the car didn’t detect the truck because of its height and a glare from the bright sky. This is the first confirmed death caused by the Model S autopilot.

Tesla Model X Collides with Semi-Truck

Some Tesla owners argue that the autopilot feature in its current state offers a false sense of security on the road.

A Tesla Model X driver crashed into the back of a semi-truck in California while autopilot was engaged. The semi had swerved into his lane unexpectedly while the driver’s eyes were off the road. The collision warning did beep, but only as the car hit the truck.

Thankfully, the driver walked away with only a stiff neck. He gave an account of the accident on Facebook, with a warning to other Tesla owners: “While I’m grateful that I’m alive, I just want to put this on notice to not get overly comfortable with the autopilot and that there are still many flaws and unaccountable situations.”


Uber Crash Halts Self-Driving Pilot

In Tempe, Arizona, a self-driving Uber SUV flipped over after colliding with a car making a left turn in an intersection. Alexandra Cole, the driver who hit the Uber, said in her testimony that she saw the SUV “flying through the intersection” at the last second. The Uber engineers remarked that they failed to see her because of a blind spot caused by traffic in the southbound left lane.

While Cole was technically at fault for failing to yield to oncoming traffic, witness Brayan Torres said in his statement that “it was the [Uber’s] fault for trying to beat the light and hitting the gas so hard.”

The accident underscores the complexities of interactions between computer-operated cars and human drivers, whose decision-making rarely fits neatly within an algorithm. Uber took its self-driving cars off the road for a few days after the incident, but has since resumed testing.

In the Future, Hackers May Cause Car Accidents 

Developers of self-driving cars largely market them on their ability to drastically reduce accidents. But fewer human errors won’t make automobile accidents a thing of the past; in the future, hackers may cause crashes instead.

In September 2016, Chinese security researchers uncovered vulnerabilities in Tesla’s security systems which allowed them to unlock car doors, open sunroofs, and reposition seats. Tesla resolved the issue ten days later with a security update.

Though these exploits seem relatively harmless, it’s easy to imagine hackers causing cars to drive off the road or crash into other vehicles.

Were You Involved in a Self-Driving Car Accident?

Classaction.com attorneys have extensive experience with automotive litigation, including lawsuits over Takata airbags, GM ignition switches, and Volkswagen emissions fraud.

If you were in an accident involving a self-driving car, we want to hear from you. Contact us for a free, no-obligation case review.