There has never been a fad shrouded in as much secrecy as the self-driving car craze. It's been 15 years of startups, yet no one working on a full-sized self-driving car has ever released any source code or algorithms, while past generations were quick to release source code for quadcopters, web browsers, operating systems, & video codecs. Every Stanford student is working on some self-driving car project, but with no place to start. It's like thousands of Linuxes being written from scratch, with no one making any progress beyond what can be done in the 5 years between graduation & a better living through flipping houses.
Tried enhancing the only released footage of the Uber crash, the 1st fatality caused by a self-driving car, but it was a downscaled, recompressed copy of the original video. The dynamic range was too poor to resolve any details in the shadows, where a human would have seen them. The current generation has an attitude that the car did everything possible & blindly accepts the company claim that the secret autopilot was extremely sophisticated. To someone who has programmed autopilots for years, it didn't seem like a very capable autopilot at all.
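For what it's worth, the attempt looked roughly like the following: a minimal sketch using OpenCV's gamma correction & CLAHE to lift the shadows. The filename is hypothetical, & no amount of processing recovers detail that the 8-bit recompression already destroyed.

```python
import cv2
import numpy as np

# Build a gamma < 1 lookup table once: it brightens shadows, but on
# footage this heavily compressed it mostly amplifies block artifacts.
lut = np.array([((i / 255.0) ** 0.45) * 255 for i in range(256)],
               dtype=np.uint8)
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))

cap = cv2.VideoCapture("uber_dashcam.mp4")  # hypothetical filename
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.LUT(frame, lut)                    # lift the shadows
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)   # work on luma only
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])       # local contrast
    cv2.imshow("enhanced", cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))
    if cv2.waitKey(30) & 0xFF == 27:               # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```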
The car was going too fast to stop in the visible distance, which for the car could have included the LIDAR range. Human drivers are told to go no faster than they can stop in the visible distance. Someone with no Ivy League degree would have at least used a better camera in a self-driving car or used its brights to overcome such a lousy camera. It surely detected the biker with LIDAR but lacked any ability to predict where moving objects would be in the future. Perhaps an earlier algorithm used LIDAR, but the current fad is detecting objects from lousy video using neural networks, so that's what they used. Maybe a more sophisticated algorithm produced so many false positives that they ended up doing nothing at all.
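To put numbers on it, here's a back-of-the-envelope sketch. Every constant is an assumption: a speed of roughly 40 mph, textbook friction & reaction time, a pedestrian track invented for illustration. It's not Uber's algorithm, just the 2 checks any autopilot should be making.

```python
MPH_TO_MS = 0.44704
G = 9.81  # m/s^2

# 1. Can the car stop within the distance it can actually see?
speed = 40 * MPH_TO_MS   # ~17.9 m/s, assumed speed
reaction = 1.0           # s, generous for a vision pipeline
friction = 0.7           # dry asphalt rule of thumb
stop_dist = speed * reaction + speed**2 / (2 * friction * G)
print(f"stopping distance: {stop_dist:.0f} m")   # ~41 m

# In the released clip the pedestrian appears very roughly 1.4 s
# before impact (a guess from frame counting), i.e. ~25 m ahead:
# well inside the stopping distance, so braking on camera alone was
# hopeless. LIDAR range doesn't care about lighting.
camera_sight = 1.4 * speed
print(f"camera sight line: {camera_sight:.0f} m -> too late")

# 2. The prediction the car apparently never made: extrapolate a
# LIDAR track at constant velocity & check for a lane conflict.
ped_x, ped_vx = -10.0, 1.4   # m left of lane center, walking m/s
for t in range(1, 8):
    x = ped_x + ped_vx * t
    in_path = abs(x) < 1.5   # half a lane width
    print(f"t+{t}s: pedestrian at {x:+.1f} m"
          + (" IN PATH" if in_path else ""))
```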
The industry couldn't do any worse by being less secret, but Airbus & Boeing don't share any information about their autopilots & manage not to kill all the humans, most of the time. A fatality caused by a machine has a much bigger impact than a fatality caused by a human driver. The odds of getting hit by a self-driving car might be lower, but it's like getting sucked into a tree shredder instead of getting shot by a human. There's always a feeling that a human can avoid what a machine can't.
Comments
Hi Jack,
do you have a link to the "downscaled, compressed copy of a video"?
There's a lot of discussion here in GE about whether the video was manipulated.
Looking at the crash video, something was obviously very wrong with the system. LIDAR and radar should both have detected the person long before she got in front of the car, regardless of lighting conditions, and the visual system should have reacted as a last resort towards the end. But none of them did. So something was off (maybe even literally?) with the system on that car. After all, we are talking about Uber here...
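As a toy sketch of that layered redundancy (all ranges here are invented, not anything from the actual system): any one sensor reporting an obstacle inside the stopping distance should be enough to trigger braking, so a blind camera alone can never veto the LIDAR.

```python
def should_brake(lidar_m, radar_m, camera_m, stop_dist_m=41.0):
    """Brake if ANY sensor sees an obstacle within stopping distance.

    None means that sensor sees nothing; distances are in meters.
    """
    detections = [d for d in (lidar_m, radar_m, camera_m) if d is not None]
    return any(d < stop_dist_m for d in detections)

# LIDAR & radar track the pedestrian while the camera sees only darkness.
print(should_brake(lidar_m=80.0, radar_m=82.0, camera_m=None))  # False, still far
print(should_brake(lidar_m=35.0, radar_m=36.0, camera_m=None))  # True: brake now
```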
But there is a flip side. The argument that autopilots have to be perfect before they can be used is flawed. To be useful and save human lives overall, an autopilot only has to be better than the average human driver. And looking at car crash montage videos on YouTube, the bar for that doesn't seem to be set particularly high...
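To put rough numbers on that threshold argument (all ballpark assumptions, not measured autopilot data): US human drivers cause on the order of 1.2 fatalities per 100 million miles, so any autopilot that beats that rate saves lives in aggregate, however gruesome the individual crashes look.

```python
human_rate = 1.2e-8   # fatalities per mile, rough US ballpark
robot_rate = 0.8e-8   # hypothetical autopilot rate, assumed
miles = 3e12          # ~annual US vehicle miles, assumed

saved = miles * (human_rate - robot_rate)
print(f"net lives saved per year: {saved:,.0f}")  # ~12,000 under these assumptions
```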
Wired has a great article related to this topic (in short: humans are pretty bad drivers, and they are even worse at supervising self-driving cars): https://www.wired.com/story/uber-crash-arizona-human-train-self-dri...
Thing is, you can take your own measures to avoid tree shredders. If you choose to jump into the feed hopper, you're not really acting rationally and neither was this cyclist-cum-pedestrian.
If the car had been driven by a distracted, texting human, the result would have been the same (or worse), and the accident wouldn't have made international news.