How safe are autonomous cars?
Driving a car is an extremely complex activity – somehow we manage to control all that technology while keeping our eyes on the many things going on around us, and we manage it well enough that accidents are relatively rare. But how much can we rely on software to control a car? The software ought to be smarter than us, but does that make driving safer?
A fatal accident in May 2016 involving a Tesla car on autopilot made it clear that we can’t fully trust it yet, even if, as Tesla CEO Elon Musk says, one fatality in 130 million miles of autopilot driving compares favourably with one fatality in 94 million miles of conventional driving. One of the arguments for autonomous driving is increased safety – computers don’t get tired or angry, and they don’t check their smartphones.
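As a rough sanity check on the figures quoted above, the two mileage numbers can be converted into fatality rates per 100 million miles. This is only an illustrative back-of-the-envelope calculation using the article's figures, not an official statistic:

```python
# Illustrative comparison of the fatality rates quoted in the article:
# 1 fatality per 130 million autopilot miles vs
# 1 fatality per 94 million conventionally driven miles.

AUTOPILOT_MILES_PER_FATALITY = 130e6
CONVENTIONAL_MILES_PER_FATALITY = 94e6

def fatalities_per_100m_miles(miles_per_fatality: float) -> float:
    """Convert 'miles per fatality' into 'fatalities per 100 million miles'."""
    return 100e6 / miles_per_fatality

autopilot_rate = fatalities_per_100m_miles(AUTOPILOT_MILES_PER_FATALITY)
conventional_rate = fatalities_per_100m_miles(CONVENTIONAL_MILES_PER_FATALITY)

print(f"Autopilot:    {autopilot_rate:.2f} fatalities per 100M miles")
print(f"Conventional: {conventional_rate:.2f} fatalities per 100M miles")
```

On these numbers, autopilot comes out at roughly 0.77 fatalities per 100 million miles against about 1.06 for conventional driving – a lower rate, though drawing strong conclusions from a single fatality would be statistically shaky.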
Training the software
Tesla says its latest vehicles already have the hardware to enable fully autonomous driving: eight cameras, twelve ultrasonic sensors, radar and forty times more computing power than in previous models. The software will follow, having learnt from the data that the cars gather, both during assisted driving and manual control, so that eventually, drivers will only have to switch the power on, set a destination and relax until it’s time to get out.
According to James Hodgson, industry analyst for ABI Research, “Tesla plans to sell 500,000 of its new Model 3 mass-market models a year. If each of them does 12,000 miles a year, that’s 6 billion miles of data with which to train the software. Mass market means mass data.”
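Hodgson's figure is straightforward to verify – it is simply the fleet size multiplied by the annual mileage per car (using the numbers he quotes):

```python
# Back-of-the-envelope check of Hodgson's "6 billion miles" figure:
# 500,000 Model 3s, each driving 12,000 miles a year.
cars_per_year = 500_000
miles_per_car_per_year = 12_000

total_miles = cars_per_year * miles_per_car_per_year
print(f"{total_miles:,} miles of driving data per year")  # 6,000,000,000
```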
Human help still needed
The cars will learn from each other – the camera on your car may interpret a bridge crossing the road as an obstacle if the road dips sharply underneath it, but if human drivers don’t apply the brakes there, then the autonomous car won’t either. “Drivers use their senses and their experience and then process them both,” says Nicolaj Stache, Professor for Automotive Systems Engineering at the University of Heilbronn in Germany. “A vehicle may have many cameras and sensors but it needs to gain experience, and for that it needs to use human experience too.”
But if data from cars is networked, then hacking becomes possible: “That’s a challenge,” Stache admits. “There’ll have to be very high security with a need for standards and encryption.”
If a car is programmed, it will have to be told how to behave in circumstances where currently the driver reacts in an unconsidered split second – as when avoiding a crash. Will it be told to ensure the smallest number of victims or will it save its owner at any cost? Will it be told to drive into elderly people to save children? Currently, responsibility for the vehicle reverts to the driver in a crisis, but truly autonomous vehicles will have to work it out for themselves – which means someone will have to give them a value system.
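The dilemma can be made concrete with a purely hypothetical sketch: the same set of emergency manoeuvres ranked under two different value systems. All the names and numbers here are invented for illustration and do not reflect any real manufacturer's logic:

```python
# Hypothetical illustration of the two value systems described above.
# Manoeuvre names, victim counts and risk figures are invented.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    expected_victims: int   # estimated casualties outside the car
    occupant_risk: float    # 0.0 (safe) to 1.0 (certain harm) for the owner

options = [
    Manoeuvre("brake hard", expected_victims=1, occupant_risk=0.1),
    Manoeuvre("swerve left", expected_victims=0, occupant_risk=0.8),
]

# Value system A: ensure the smallest number of victims.
fewest_victims = min(options, key=lambda m: m.expected_victims)

# Value system B: save the owner at any cost.
protect_owner = min(options, key=lambda m: m.occupant_risk)

print(fewest_victims.name)  # "swerve left"
print(protect_owner.name)   # "brake hard"
```

The point of the sketch is that the code itself is trivial – the hard part is deciding which objective function to hand the machine, which is exactly the "value system" problem the article describes.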
“Those are ethical decisions which have to be taken at a higher level,” says Stache. “They can’t be decided by the programmers.”
Most manufacturers are steering clear of fully autonomous driving. Hodgson says that may be because existing manufacturers are used to selling individual cars to individual people, and that may be a business model which won’t work with fully driver-less cars.
“If I can call an autonomous vehicle to take me where I want to go, why should I own a car which sits 23 hours a day in a parking space?” he argues. “A few manufacturers, like BMW, Ford, Renault-Nissan and Tesla, have said they want to introduce higher levels of autonomy by 2020. They’re actively planning to transition from vehicle sellers to mobility providers.”
Cars have always been sold on their emotional appeal, but autonomous cars will have to be programmed to drive cautiously. That means the adrenalin-fuelled excitement experienced by drivers may be a thing of the past as safe and comfortable passenger travel becomes the norm. Even if the software sometimes fails, it should still be safer than the cocktail of emotion, physical limitations and distraction that makes up the human driver.
By Michael Lawton
How intelligent is artificial intelligence? Milind Tambe from the University of Southern California discussed this topic in our previous article.
3 Responses to “How safe are autonomous cars?”
People are always asking whether autonomous driving is safe. I would rather ask a “relative” question instead of an “absolute” one: “Are autonomous cars safer than cars driven by humans?” My assumption would be that autonomous driving adds a new quality of predictable behaviour, relying on “something” which can’t become emotional, tired or distracted. Even though autonomous driving still has a steep learning curve ahead of it, in the end it will make our lives safer, like many other assistance gadgets we are already used to having.
Good point, Helmut! I think version 1.0 of anything will always have bugs and problems to fix. We are still in the very early stages of driver-less cars. But yeah, safety should improve 10-20 years from now with the evolution of the tech. Also, in the event of an impending collision, crash-safety systems along with AI assistance should be better at protecting occupants.