Seppuku At GM’s Cruise Division Shows The Necessity Of Transparency, But That May Not Be Enough

As we’ve written about recently, GM’s Cruise autonomous vehicle and robotaxi division is in crisis. After a particularly nasty crash involving one of its vehicles and a pedestrian, the company lost its license to run robotaxis in California and has since put production of its next-generation vehicles on hold.

Since then, the company’s CEO resigned in disgrace, and the key lesson from this example of corporate seppuku is that transparency in the autonomous vehicle industry is essential.

What The Press Release Didn’t Tell Us About The Accident Investigation

In prior coverage, it seemed that maybe Cruise wasn’t at fault for the accident. A human-driven vehicle hit a pedestrian, tossing them into the path of the Cruise robotaxi. The robotaxi did as designed and moved itself to a safe stopping point after the collision, but the computer wasn’t aware that it was dragging the pedestrian along for this short ride.

So, it looked like a case of bad optics (dragging a pedestrian is a frightening piece of mental imagery) rather than bad safety practices at Cruise. The company made it seem like this was a highly unusual situation caused by a human driver and that Cruise was a victim of sorts, but one that showed the company needed to improve its failsafe programming so that the post-collision maneuver to move the car out of the way doesn’t hurt anyone.

What wasn’t apparent at the time was why both California authorities and Cruise itself were suddenly so pessimistic about future prospects over a collision caused by a human driver. But it wasn’t the accident itself as much as Cruise’s interaction with investigators that got the company into so much trouble.

Instead of being fully transparent about the collision, Cruise showed investigators only partial video of the accident. The state then figured out that information had been withheld and that the accident was far worse than initially described. Even in press releases, it wasn’t made clear that the victim was pinned under the vehicle and then dragged (in the limited space under a Chevy Bolt) at 7 MPH for 20 feet.

The victim wasn’t killed, but very easily could have been, and this fuller picture is something that probably terrifies the average person. The idea of a soulless machine (however cute its name may be) dragging a person underneath it, completely unaware of what it’s doing, sounds frightening. Add to this the fairly quick low-speed acceleration a Bolt is capable of (even if only to 7 MPH), and the whole scene is disturbing to imagine.

Video of the incident has been shown to at least one media outlet, but hasn’t been released publicly yet.

The Transparency Problem

It seems pretty clear that there is both a safety problem and a problem with not telling the public (and the regulators the public indirectly puts in place) what’s really going on. This mixture of horror and secrecy is something that the human mind simply doesn’t tolerate well.

In an episode of The Autonocast from late October, the problem is discussed in greater depth. Not only were details of the incident withheld, but the thinking behind public autonomous vehicle testing and development is just as opaque to the public.

I know not everyone working in the AV industry thinks this way, but the idea behind this seems to be that the rewards are worth the risks. When it comes to the risks, there’s no way to truly eliminate them. Robotaxis have gotten pretty good, but they still make some big mistakes. As their fielding scales up, the idea is that real-world testing will lead to further improvements. If the vehicles are allowed to improve enough, they’ll be safer than human drivers, which would lead to lives saved.

This utilitarian perspective, where the needs of the few (people hurt and killed during development) are less important than the needs of the many who could be saved later, sounds good on the surface. If we reduce people to numbers, trading small numbers for larger numbers seems like the logical choice, right? But when we hear (and maybe eventually see) the chilling details of what happened to one of the few, it makes us think twice about whether that person was just a number we could so flippantly trade away for future saved lives.

Edward Niedermeyer (one of the people on the above-linked podcast and author of this book) says that this utilitarian perspective is something that robotaxi companies really need to be more transparent about. When the public doesn’t understand the cold calculus of it, it becomes a lot more shocking when they are suddenly exposed to it.

At the same time, though, the promises of a safer future are not as clear as they once seemed. Companies like Tesla, GM, and Google have been promising autonomous vehicles that are safer than a human driver for years now. In the case of Tesla, predictions of the future (always next year, or soon) have come and gone repeatedly. So the number of people put at risk for the promised autonomous utopia continues to go up while doubts about whether we’ll get there at all are also growing.

Between this lack of public confidence and other factors we probably still aren’t aware of, the situation for Cruise is bad. Various outlets are reporting that morale at Cruise is way down. Its leader is down enough on the future of the company that he took the honorable way out and resigned instead of trying to salvage it. GM is also being highly cautious about the company’s future.

Really, this is a story about a lot more than Cruise. The Silicon Valley idea that you need to move fast and break things works OK when the biggest risk is whether some app on your smartphone will work as intended. When safety is at stake, the public seems to have a much lower tolerance for it. Transparency is definitely important, but the underlying development philosophy might be so incompatible with human nature that it can only survive by being hidden.

Featured image by GM Cruise.