Self-Driving Car Class Action Lawsuit

Self-driving cars are marketed as a safer alternative to human drivers. But without the same reasoning and intelligence as humans, they can cause fatal accidents.

(Updated Nov. 12, 2018)

Self-driving cars are no longer fantastical inventions of the future. They are on the roads and already causing accidents.


Marketed as a safer alternative to human drivers, self-driving cars still lack the reasoning and contextual knowledge of humans. With self-driving test cars on roads throughout the country, that gap poses a serious safety concern. In fact, two deaths are already connected to the Tesla Model S autopilot.

If you or a loved one were in an accident involving a self-driving car, contact our attorneys for a free, no-obligation legal review. You may be eligible for a lawsuit.

File A Lawsuit

How Far Away Are We from Owning Self-Driving Cars?

Self-driving cars are already here. California alone has approved 111 car models for testing.

Google, Uber, Tesla, Ford, and others are on a “fast track” to mass manufacturing self-driving cars. A BI Intelligence report predicts that there could be as many as 10 million self-driving cars on the road by 2020.

We may be a few years away from mass production, but drivers are interacting with self-driving technology right now. Since October 2014, Tesla Model S (and more recently, Model X) owners have served as beta testers for Tesla's autopilot software.

Companies like Uber and Google are testing fully autonomous cars that will eliminate the need for driver intervention altogether.

Though Uber and Google have licensed drivers behind the wheel of their test cars who can intervene if necessary, neither company intends for them to be a permanent feature. The companies plan to manufacture vehicles without steering wheels or pedals, removing the option for passengers to intervene, arguing that this would eliminate automobile accidents caused by human error.

Are We Ready to Ditch Human Drivers?

The technology required for fully autonomous driving is extremely complex, which is why human intervention is still necessary. It isn't enough for the software to detect objects in front of the car; it must also be able to mimic human reasoning.

At Stanford, engineers and philosophers are partnering to design algorithms that address these concerns. Some of the scenarios they must account for are complex ethical dilemmas: Should a self-driving car prioritize the safety of a child who runs into the street over the safety of its passengers?

When humans are behind the wheel, they simply react to these situations, hitting another car, for example, rather than swerving into a crowd of pedestrians. Self-driving cars, however, are being programmed to make these decisions ahead of time, which in some cases could look more like premeditated homicide than a split-second reaction.

How Well Can Self-Driving Cars Detect Obstacles?

In addition to developing "moral compasses" for self-driving cars, engineers must make the software "visually intelligent."

In May 2016, Tesla’s autopilot failed to detect an 18-wheeler due to the truck’s height and a glare. This deadly mistake has since been addressed by a software update that no longer relies solely on the car’s cameras to detect obstacles, but also uses its onboard radar.


Tesla described the difficulties of obstacle detection, explaining that an object's material and angle can affect whether something like a soda can is registered as a trivial object in the road or as an obstacle that requires the car to slam on its brakes.

To improve the “vision” of autonomous cars, researchers at Princeton and Stanford launched ImageNet, a repository of 14 million categorized images. However, image recognition is as far as the software has come: experts say it is still far from becoming visually intelligent like humans, complete with reasoning and contextual knowledge.

This is why self-driving cars generally operate only in specific environments. The Uber cars in Pittsburgh, for example, are limited to a 12-square-mile zone downtown (an area for which Uber has extremely detailed maps).

Has Tesla's Autopilot Caused Any Car Accidents?

Some Tesla owners argue that the autopilot feature in its current state offers a false sense of security on the road.

Tesla has already been hit with multiple lawsuits alleging its autopilot failed, causing collisions that have severely injured and in some cases killed drivers and passengers.

Tesla’s Model S autopilot is linked to two fatalities so far. The first accident occurred in China in January 2016, killing driver Gao Yaning. It isn’t clear whether autopilot was enabled at the time of the crash; nevertheless, a lawsuit was filed against Tesla on behalf of Gao Yaning’s father.

Four months after that accident, Joshua Brown’s Model S crashed into an 18-wheeler while the autopilot feature was engaged. Tesla stated that the car didn’t detect the truck because of its height and a glare from the bright sky. This is the first confirmed death caused by the Model S autopilot.


A Tesla Model X driver crashed into the back of a semi-truck in California while autopilot was engaged. The semi had swerved into his lane unexpectedly while the driver's eyes were off the road. The driver heard the collision warning beep, but it sounded only as the car hit the truck.


Thankfully, the driver walked away with only a stiff neck. He gave an account of the accident on Facebook, with a warning to other Tesla owners: "While I’m grateful that I’m alive, I just want to put this on notice to not get overly comfortable with the autopilot and that there are still many flaws and unaccountable situations."

In October 2018, Shawn Hudson's Tesla Model S collided with a stationary, disabled Ford Fiesta at 80 mph on the Florida Turnpike. Autopilot was engaged, but failed to detect the other vehicle. Mr. Hudson allegedly suffered permanent injuries as a result of the crash.

A lawsuit filed by our attorney Mike Morgan alleges that Tesla duped consumers like Mr. Hudson into believing that the autopilot program was safer and required less oversight than it actually does.

Fight Back

Have Other Self-Driving Car Models Crashed?


The first U.S. self-driving car lawsuit involves one of the test Chevy Bolts operated by GM's Cruise Automation unit. The car was in self-driving mode when it collided with motorcyclist Oscar Nilsson in San Francisco on December 7, 2017.

The Chevy Bolt began to merge into the left lane but stopped and moved back into its original lane, colliding with Mr. Nilsson in the process. He fell off his motorcycle and sustained injuries to his neck and shoulder.

In Tempe, Arizona, a self-driving Uber SUV flipped over after colliding with a car making a left turn at an intersection. Alexandra Cole, the driver who hit the Uber, said in her testimony that she saw the SUV "flying through the intersection" at the last second. The Uber engineers said they failed to see her because of a blind spot caused by traffic in the southbound left lane.

While Cole was technically at fault for not yielding to oncoming traffic, witness Brayan Torres said in his testimony: "It was the [Uber's] fault for trying to beat the light and hitting the gas so hard."

The accident underscores the complexities of interactions between computer-operated cars and human drivers, whose decision-making rarely fits neatly within an algorithm. Uber took its self-driving cars off the road for a few days after the incident, but has since resumed testing.

Have There Been Any Self-Driving Car Fatalities?

"How much leeway are we willing to give the Ubers, the Waymos, and all the autonomous vehicles in saying we’re OK with being your guinea pigs, we’re OK with you experimenting on us?”

On March 18, 2018, a self-driving Uber SUV struck and killed a pedestrian in Tempe, Arizona. Elaine Herzberg was walking her bike across the street at night when the car hit her. It was the first pedestrian fatality associated with a self-driving car.

“We are in these cars’ training ground, really,” Mike Morgan, our product liability attorney, told Law360. “Every time it encounters something like that, it will never happen again, likely. But how much leeway are we willing to give the Ubers, the Waymos, and all the autonomous vehicles in saying we’re OK with being your guinea pigs, we’re OK with you experimenting on us?”

Can Self-Driving Cars Be Hacked?


Developers of self-driving cars largely market the products based on their ability to drastically reduce accidents. However, this doesn’t mean that automobile accidents will be a thing of the past. Instead, hackers may cause crashes in the future.

In September 2016, Chinese security researchers uncovered vulnerabilities in Tesla's security systems which allowed them to unlock car doors, open sunroofs, and reposition seats. Tesla resolved the issue ten days later with a security update.

Though these exploits are seemingly harmless, it’s easy to imagine hackers causing cars to drive off the road or crash into other vehicles.

Were You Involved in a Self-Driving Car Accident?

Our attorneys have extensive experience with automotive litigation, including lawsuits over Takata airbags, GM ignition switches, and Volkswagen emissions fraud.

If you were in an accident involving a self-driving car, we want to hear from you. Contact us for a free, no-obligation case review.