First self-driving car fatality

Or someone having a stroke while driving an automated vehicle that lacks the ability to recover gracefully when such things happen, because the developers are blinded by their arrogance.

Which is an indication that it is time for the controller to slow the vehicle to a less-than-lethal speed.

1 Like

Not picking on you.

Devil's advocate: how is this different from having a stroke or heart attack while cruise control is engaged? Both would require active intervention to turn the system off and navigate to a safe stop, absent an EKG and EEG hooked up to detect the driver's condition. I don't think there is a practical difference. I don't have a solution, and I think this is a rare occurrence anyway.

I personally believe self-driving cars are a ways off. Being unable to see the broadside of a semi-truck is a good example (Tesla). I read comments like "European trucks have a side panel between the wheels" and "It was a bright day and the truck was white." Both are true, but hello, this is the US, where the vast majority of trucks don't have those panels, and the design should have acknowledged and addressed that fact. Lots of places in the US have bright days - like Southern California, where these cars are designed and built. White trucks are common, too. The cars are supposed to adapt to the roads and their hazards - not the roads and other vehicles to the car's design inadequacies.

The technology will get there at some point, I have no doubt, but selling something advertised as "Autopilot" or "Self-driving" while claiming "Gosh, we're kidding, you still have to sit and watch as if you were driving," and expecting drivers not to rely on it, is delusional. People are going to go from "Distracted Driving" to "Not Driving".

Cruise control isn't marketed as self-driving, simply as a convenience for long stretches at a given speed. Nobody expects it to do anything more than change throttle position to hold that speed. That's the difference. Like you said, marketing something as self-driving creates the expectation that the system can react to more than just changes in vehicle speed, with more sophistication than opening and closing the throttle.

My question isn't how it is marketed, but what the difference is if the driver is incapacitated. As a practical matter, if cruise control is on and you pass out, the car is going to continue on at speed. Even if you were steering at the time of the stroke and are incapacitated before you can actively disengage cruise control by touching the brake pedal or the cruise controls, the vehicle continues on. This is similar to not taking over a self-piloting vehicle if you have a stroke or heart attack.

Actually, it could be argued that cruise control is the more dangerous of the two. If the self-driving car continues on with you not controlling it, absent an error, your carcass would arrive at its destination in good shape for an open-casket funeral.

It does not have to be that way (at least in some cases). For example, my truck is fly-by-wire. That means there is a controller that knows whether I am actively steering or not. If cruise control is engaged and I stop manipulating the steering wheel, a reasonable choice would be to deactivate cruise control so the vehicle is not powered into a crash.

In this specific situation, the controller recognized there was trouble (several visual cues and one audible cue) but failed to respond in a reasonable way (at least slow the vehicle).
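
To make that concrete, here's a rough sketch of the kind of dead-man watchdog I have in mind. This is Python pseudocode with invented signal names and thresholds (steering_torque_nm, INACTIVITY_LIMIT_S, and so on); it's not any OEM's actual interface:

```python
from dataclasses import dataclass

# All signal names and thresholds below are invented for illustration;
# they are not any manufacturer's real API.

INACTIVITY_LIMIT_S = 10.0    # no steering input for this long -> assume trouble
TORQUE_THRESHOLD_NM = 0.5    # below this, treat the wheel as untouched
SAFE_SPEED_KPH = 30.0        # bleed down to a less-than-lethal speed

@dataclass
class CruiseState:
    cruise_engaged: bool = False
    set_speed_kph: float = 110.0
    steering_torque_nm: float = 0.0   # reported by the steer-by-wire controller
    idle_time_s: float = 0.0

def watchdog_step(state: CruiseState, dt_s: float) -> dict:
    """Run once per control-loop tick; decide whether to drop out of cruise."""
    if not state.cruise_engaged:
        state.idle_time_s = 0.0
        return {"disengage": False, "target_speed_kph": state.set_speed_kph}

    if abs(state.steering_torque_nm) > TORQUE_THRESHOLD_NM:
        state.idle_time_s = 0.0          # driver is clearly still steering
    else:
        state.idle_time_s += dt_s

    if state.idle_time_s > INACTIVITY_LIMIT_S:
        # Driver appears incapacitated: disengage, sound an alarm, slow the car
        return {"disengage": True, "target_speed_kph": SAFE_SPEED_KPH, "alarm": True}

    return {"disengage": False, "target_speed_kph": state.set_speed_kph}
```

The specific numbers don't matter; the point is that a by-wire controller already has the signal it needs to notice nobody is home and to stop powering the car into whatever is ahead.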

I was using your example of a heart attack or stroke assuming no further operator input.

If newer cruise control systems do monitor driver input, then when that input cuts out, the vehicle would slow to a halt or a creep, hopefully before a higher-speed impact due to the lack of steering. But most cruise controls, at least the ones I've used, don't have active input detection. If they do now, that is a good thing: even if the person falls asleep, the system would detect it and let the car slow down.

You have both sides of the argument. If it slows down, it's likely to stop on the highway and get hit. If it keeps going, it will eventually hit something.
There was an 18-wheeler last month that stopped in the middle of the highway because the "mother ship" lost power. That was self-driving.

1 Like

@Photomancer has already dived into this going the same direction I was going to go: there are a lot of corner cases to address, and requiring a level of attentiveness that is all but the same as driving the car, without actually driving it, while it's otherwise seemingly driving itself, is not a good strategy.

I already spoke to the problem of how Tesla’s “autopilot” works vs how it’s perceived:

Marketing it as "autopilot" was probably an enormous mistake on Tesla's part. While an aircraft's autopilot typically isn't as sophisticated as the general public imagines (there are varying degrees of sophistication, from throttle/heading hold, to true navigation compensating for wind and other sources of error, to pushbutton takeoff/landing - but never obstacle avoidance), it's still a dangerous term, since the motoring public envisions a hands-off experience from anything that sounds like "self-driving" in a car.

The SAE has defined six levels of autonomy (0-5), more granular than the NHTSA's original five (0-4); there's also a quick summary in code form after the list:

Level 0: Automated system issues warnings and may momentarily intervene but has no sustained vehicle control.

Level 1 ("hands on"): The driver and the automated system share control over the vehicle. An example would be Adaptive Cruise Control (ACC), where the driver controls steering and the automated system controls speed. With Parking Assistance, steering is automated while speed is manual. The driver must be ready to retake full control at any time. Lane Keeping Assistance (LKA) Type II is a further example of level 1 self-driving.

Level 2 ("hands off"): The automated system takes full control of the vehicle (accelerating, braking, and steering). The driver must monitor the driving and be prepared to intervene immediately at any time if the automated system fails to respond properly. The shorthand "hands off" is not meant to be taken literally; in fact, contact between hands and wheel is often mandatory during SAE level 2 driving, to confirm that the driver is ready to intervene.

Level 3 ("eyes off"): The driver can safely turn their attention away from the driving tasks, e.g. the driver can text or watch a movie. The vehicle will handle situations that call for an immediate response, like emergency braking. The driver must still be prepared to intervene within some limited time, specified by the manufacturer, when called upon by the vehicle to do so. The 2018 Audi A8 luxury sedan was the first commercial car to claim to be capable of level 3 self-driving. The car has a so-called Traffic Jam Pilot. When activated by the human driver, the car takes full control of all aspects of driving in slow-moving traffic at up to 60 kilometers per hour. The function works only on highways with a physical barrier separating oncoming traffic.

Level 4 ("mind off"): As level 3, but no driver attention is ever required for safety, i.e. the driver may safely go to sleep or leave the driver's seat. Self-driving is supported only in limited areas (geofenced) or under special circumstances, like traffic jams. Outside of these areas or circumstances, the vehicle must be able to safely abort the trip, i.e. park the car, if the driver does not retake control.

Level 5 ("steering wheel optional"): No human intervention is required. An example would be a robotic taxi.
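
And the promised summary in code form - purely a compact reference, my own paraphrase of the nicknames above rather than SAE's official wording:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Rough paraphrase of the SAE J3016 levels described above (not official text)."""
    WARNINGS_ONLY = 0            # warnings / momentary intervention, no sustained control
    HANDS_ON = 1                 # system handles steering OR speed (e.g. ACC); driver does the rest
    HANDS_OFF = 2                # system steers, brakes, and accelerates; driver must supervise constantly
    EYES_OFF = 3                 # driver may look away, but must take over when prompted
    MIND_OFF = 4                 # no supervision needed, but only inside a geofenced domain
    STEERING_WHEEL_OPTIONAL = 5  # no human intervention required anywhere

# Where the systems discussed in this thread land (my read, not a formal rating):
TESLA_AUTOPILOT = SAELevel.HANDS_OFF           # level 2
AUDI_TRAFFIC_JAM_PILOT = SAELevel.EYES_OFF     # level 3, as claimed
WAYMO_CRUISE_PROTOTYPES = SAELevel.MIND_OFF    # level 4 targets
```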

Tesla's "autopilot" is a level 2 system that unfortunately seems to be perceived as a level 3 or level 4 system.

Up until recently - I want to say last year - Tesla was using the same adaptive cruise control hardware as a number of other OEMs, from a company called Mobileye … then pushed it to its limits and sold it as "autopilot". I believe it used forward-facing cameras, radar, and ultrasonic sensors. The cameras and radar worked in concert in a form of sensor fusion, so the system had sufficient granularity to stay between the lines, detect obstacles, and know the rate of closure. The ultrasonic sensors let the system know if it was safe to change lanes. Throw in some additional image processing on the cameras and maybe it's safe enough to do a little more than lane assist/adaptive cruise control … maybe.
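
To make the sensor-fusion bit concrete, here's a toy sketch of the general idea - my own simplification, not Mobileye's or Tesla's actual pipeline. The radar contributes range and closing speed, the camera contributes classification and lane position, and braking is only commanded when the two agree there's a real obstacle in the path:

```python
from dataclasses import dataclass
from typing import Optional

# Toy camera + radar fusion for forward-collision logic. All field names and
# thresholds are invented for illustration; this is not any vendor's code.

@dataclass
class RadarTrack:
    range_m: float            # distance to the strongest return ahead
    closing_speed_mps: float  # positive = we are closing on it

@dataclass
class CameraDetection:
    label: str                # e.g. "vehicle", "truck", "overhead_sign"
    in_ego_lane: bool         # does it overlap our lane?

def should_brake(radar: Optional[RadarTrack],
                 camera: Optional[CameraDetection]) -> bool:
    """Command braking only when radar and camera agree on an obstacle in our path."""
    if radar is None or camera is None:
        return False                      # no corroboration -> no hard braking
    if not camera.in_ego_lane or camera.label not in {"vehicle", "truck"}:
        return False                      # camera says it's not a relevant obstacle
    if radar.closing_speed_mps <= 0:
        return False                      # not actually closing on it
    time_to_collision_s = radar.range_m / radar.closing_speed_mps
    return time_to_collision_s < 2.5      # arbitrary threshold for the sketch

# Requiring agreement is exactly where the broadside-of-a-truck failure mode
# lives: if the camera misreads a white trailer against a bright sky and the
# radar treats the return as overhead structure, nothing ever brakes.
```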

Except the radar is a decent rangefinder/speed detector but otherwise doesn't really image things. Ultrasonic sensors provide rather coarse 2D imaging. Realtime camera image processing is only so good even with cutting-edge hardware … which Mobileye didn't exactly have. And Tesla's in-house self-driving system, built after the Mobileye fallout, doesn't seem to have exceeded it.

All the prototype level 4 vehicles that Waymo and Cruise are testing (and presumably Uber's as well) have cameras, radar, presumably ultrasonics, but most importantly LIDAR for continuous 3D imaging, plus trunks full of GPUs backed by beefy CPUs.

For all the hand-wringing over the subject, if the cause of the wreck had been the driver putzing with their phone it wouldn't be news, and if they'd been messing with the radio we wouldn't have heard about it. Heck, Tesla claims a nearly 4-fold reduction in deaths for "autopilot"-equipped vehicles:

"In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers," Tesla wrote. For Tesla vehicles equipped with Autopilot, there's been one fatality every 320 million miles, Tesla said.

We can't conclude that Tesla's over-hyped level 2 autonomy is the root cause of that reduction, but it looks to have potential merit _in spite of_ driver shenanigans. Perfection is not a reasonable goal for self-driving cars so much as a marked reduction in the death rates on our roads and highways.
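
For what the back-of-the-envelope math is worth, taking Tesla's own numbers at face value:

```python
# Tesla's figures as quoted above (not independently verified):
miles_per_fatality_all_vehicles = 86e6    # US fleet, all manufacturers
miles_per_fatality_autopilot = 320e6      # Autopilot-equipped Teslas

ratio = miles_per_fatality_autopilot / miles_per_fatality_all_vehicles
print(f"{ratio:.1f}x the miles per fatality")   # ~3.7x, i.e. the "nearly 4-fold" claim
```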

Machine vision systems can do amazing things with cameras; however, exercising safe control of vehicles moving at highway speeds with just visible-light optical sensors seems to be something limited to the analog computing and sensing systems possessed by Homo sapiens.

LIDAR is the most promising technology for offering machines fine-grained realtime imaging of their environment. But it's presently expensive - the spinning-top systems seen on most level 4 prototype vehicles are made by Velodyne and cost something like $75k. There are cheaper approaches coming down the pike - DLP-like microelectromechanical systems (MEMS), "flash" systems that do away with moving parts and beam steering, and phased arrays; successful commercialization of any one of these is apt to reduce LIDAR system costs to the hundreds of dollars, in a form factor that doesn't affect the vehicle's profile or carry the maintenance liability of high-speed moving parts.

There are of course still issues to be sorted out beyond sensors.

On the technical side, the algorithms for self-driving are still very much in development, and the processing power they presently require is extraordinary to the point of being prohibitive. I expect these things to be in beta for five-plus years until the cost-is-no-object technical issues are worked out, then another period of time until the value engineering gets hashed out.

And even then, I see these things being the near-exclusive domain of fleets since they’re apt to be pricey, pricey beasts requiring aviation-like maintenance and certification regimes.

Some perspective on aeronautic autopilot:

The most advanced autopilots, barring ATC guidance, are able to take a plane from the start of the takeoff runway to touchdown based on a pre-programmed route in the Flight Management Computer, along with data about the aircraft's payload and fuel, weather conditions, and some other factors (but not to a safe stop at the end of the runway). They use a combination of extremely expensive high-precision GPS, inertial measurement equipment, distance-measuring radios, barometric and radar altimeters, and a very wide array of sensors monitoring the aircraft's condition to make throttle and course changes. These systems carry seven-figure price tags and are desperately reliant on outside infrastructure, traffic control, and pilot intervention to navigate skies that are far less crowded than any road. There is no automatic collision avoidance, and although an instrument landing system can potentially touch a plane down, it can't extend or retract the landing gear or flaps, or arm the autobrakes on its own, let alone the thrust reversers. Under crosswind conditions, autoland systems are not only incapable of landing the aircraft but incapable of even initiating a go-around without pilot intervention.

Long and short, we're expecting $80,000 cars to do more with their self-driving than we can squeeze out of an $80,000,000 jetliner's autopilot. While self-driving might be on the horizon, it's definitely not here today. That hasn't stopped people from treating their level 2 driving assistance as level 4 self-driving, though, because the public perception of autopilot is that it does everything.

My view on self-driving accidents at this stage is that it's fair to assign blame to the carmaker in proportion to the expectation they set with their marketing. Basic cruise control is marketed solely as a means to maintain a set speed, with no expectation that it will do anything but keep your car going 70, so in that case blaming the automaker would be unreasonable. Emergency braking assistance is sold with the expectation that it can respond to some emergency braking situations and prevent rear-end collisions; if such a car rear-ends another, the effectiveness of the system could definitely be called into question, and some blame assigned to the manufacturer for setting improper expectations. Tesla Autopilot carries a few in-car warnings that are relatively easy to bypass, but it's sold as a proper self-driving solution, and the name creates the expectation that the car will do everything. That being the case, while I think it's stupid to trust the type of hardware in a self-driving Tesla with my life, I can understand how less informed people would have a reasonable expectation that the car really is self-driving based on the marketing. So I think it's fair to assign significant blame to Tesla for these accidents. They've jumped the gun on their marketing and sold a product that needs constant human attention as an "autopilot" system, which may be accurate in the most technical sense, but it doesn't match public perception. And now people are dead.

As an aside, I don’t know if I’ll ever be comfortable with self-driving cars. I’ve worked with too many software developers to trust their ability to create algorithms that can handle edge cases when my safety is at stake. Tunnel vision is strong in that field.

1 Like

I'm far more concerned about my fellow drivers than I am about vehicle failure.

Here’s some NHTSA data from ~10 years ago - before smartphones consumed the overwhelming majority of the market - with regard to driver faults:

… and some similar-period data regarding vehicle failures:

Mechanical failures account for roughly 2% as many crashes as driver faults! These aren't all crashes during the period, but a "nationally representative sample of crashes was investigated from 2005 to 2007". Naturally, in an autonomous vehicle the automation replaces the driver as the weakest link and can be expected to experience failures, but presumably markedly fewer, since automation focuses on its task continuously.

You're right not to trust software developers in the Zuck mold of "move fast and break things" (ship a patch tomorrow if you really break things), but boy howdy does the threat of cataclysmic litigation tend to get management and legal to clamp down on those tendencies. One does have to wonder whether Uber's much-contested acquisition of Otto, formed by a disgruntled ex-Google engineer, was the ultimate root cause of the fatality in Arizona:

“We don’t need redundant brakes & steering or a fancy new car; we need better software,” wrote engineer Anthony Levandowski to Alphabet CEO Larry Page in January 2016. “To get to that better software faster we should deploy the first 1000 cars asap. I don’t understand why we are not doing that. Part of our team seems to be afraid to ship.” In another email, he wrote that “the team is not moving fast enough due to a combination of risk aversion and lack of urgency.”

Not long afterwards, Levandowski left Google to start a new company, Otto, which was quickly acquired by Uber. Levandowski was put in charge of Uber’s driverless car project, leading to a series of confrontations with regulators in California.

For all their faults, at least Waymo (Google) is seemingly breaking with Silicon Valley brogrammer culture and taking a cautious and methodical approach to developing autonomy. I feel that approaching the challenge as an engineering process rather than software development will result in a far superior product.

1 Like

In theory, self-driving cars don't have to be perfect; they just have to be better than human drivers, and human drivers are terrible. In reality, the media, which is in the business of selling drama, will create sensational stories out of every single injury or death that is in any way related to self-driving cars.

3 Likes

I can live with human fallibility, as people have lived with human fallibility for millennia. What makes me uncomfortable is that for a computer to make decisions, it has to refer to its programming, and programming is just humans thinking in advance. And the humans doing the thinking aren't interested in stable releases tested against every conceivable edge case; they're developers.

For my part, I'm a site reliability engineer, with my career history coming from the operations half of that mix. My whole career has been built on cleaning up after devs' messes because they launched the code release and went home. It's enough stress when this results in our internal transaction system breaking for a couple of hours, but it's a whole new level of anxiety when people with this same attitude are telling computers how to drive a car.

Indeed we have. And we’ve paid prices for it through a long history of neglect, incompetence, and doing things we’re not particularly good at.

We've chopped collision/injury/fatality rates with seat belts, crumple zones, radial tires, antilock brakes, stability control, airbags, energy-absorbing interior components, seat-belt tensioners, adaptive cruise control, lane-keeping assistance, automatic parallel parking, front/rear collision detection/prevention systems, adaptive headlights (in some countries), etc. But the markets (in this case, a combination of consumer demand, government regulations, and insurers) insist on safer cars. And the human element is the persistent weak link. And a number of these things are forms of subordinate automation that dovetail into making the full enchilada.

Safety is but one of the pushes for autonomous vehicles. A short-term benefit will be reclaiming time spent driving - read a book, get some work done, hold a conversation without the risks of distraction, etc. Longer term, the staggering amount of space and infrastructure dedicated to cars - parking lots ringing every destination, streets and highways designed around peak traffic flows - could shrink as self-driving cars can be summoned and dismissed on demand.

For all the credit that Captain “Sully” gets for US Airways Flight 1549, the fly-by-wire system also played a critical role in safely ditching that aircraft.

And let's be honest - engineers, doctors, bureaucrats, and even laborers have killed people through laziness, negligence, sloppy work, lack of oversight, etc. - so software developers aren't a spectacular risk in and of themselves.

Stereotypical programmers want to ship code sooner rather than later, conspire to discredit and fire testers - lesser humans, in their view, with the temerity to say their code sucks - rather than address its faults, and otherwise tend to resist actual engineering processes involving review and truly rigorous testing. This is why I described the kinds of development processes likely to generate markedly safer autonomous systems as engineering rather than software development. You put hard-nosed adults in charge of such things, with realistic expectations communicated both downwards and upwards. People like Anthony Levandowski straighten up or get shown the door.

For what it’s worth, I’m not sure I’d trust Uber to broker a ride between myself and one of the ‘independent drivers’ they exploit, to say nothing of beta-testing their autonomous vehicles.

1 Like

The Uber situation suggests that full autonomy isn't there yet.

Well, that… and Uber f'd up by disabling the manufacturer's safety systems without fully acknowledging the dangers, thus creating a situation of irresponsibility, voided warranties, and general stupidity.

Basically, they should have kept the training wheels on while testing their system, or at least worn their safety gear.

1 Like

Arrogant brogrammers f__k something up, film at 11.

2 Likes