Can Driver’s Ed Improve the Safety of Autonomous Cars?
Self-driving cars run on technology so advanced that you can truly call them intelligent. But when the car's intelligence makes it impossible to know exactly what it's thinking, how can we be sure it will make the right choices on the road? How can we guarantee it will operate safely?
That's what a team of researchers at USC has been working to figure out.
The problem with intelligence
Researchers at USC, working in partnership with Arizona State University, have just published a study on the perception algorithms that self-driving cars use to sense what’s around them so they won’t crash into things.
According to ECN Magazine, the team has “a new mathematical method” that can “identify anomalies or bugs in the system before the car hits the road.”
But before we can cover why this breakthrough is so important, we need to point out what the engineers of these systems have been up against.
The complexity of an autonomous car's perception algorithms – which are built on convolutional neural networks, a deep-learning technique within machine learning – makes it impossible for any human being to fully understand how, precisely, they make predictions. Unlike a simple machine, whose function is comprehensible to the human mind, these systems work less like a machine and more like a brain.
A phenomenal invention, when you think about it – but also a potential hazard, especially in safety-critical situations, which driving certainly is.
In fact, according to Anand Balakrishnan, the study’s lead author, improving these algorithms is “one of the foremost challenges for autonomous systems,” ECN said.
Driving tests for autonomous cars
This is where we come back to the importance of this breakthrough. With preemptive testing, we can give a car’s perception system a dry run before it hits the road – and if it makes mistakes, we can send it back to school “to further train the system,” Balakrishnan said.
In other words, instead of handing over the keys and wishing the car good luck, we can make it pass a driving test first … just as human drivers have to do.
How do self-driving cars learn?
If you’re wondering what driver’s ed for a car might look like, we don’t blame you. Insurance Journal said that these vehicles need to be “fed huge datasets of road images before they can identify objects on their own.”
That’s step one. Step two is working on the car’s ability to accurately recognize false positives. If it gets this part wrong, it could see a pedestrian, decide the pedestrian is a false positive, and hit them anyway – as one self-driving car did in a fatal accident last March.
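To make the idea of a "driving test" concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the `detect()` function stands in for a real perception model, and the labeled frames stand in for a validation dataset. The point is only to show how comparing a model's calls against ground truth lets you separate harmless false alarms from the dangerous misses that need to go back into training.

```python
# Illustrative "driving test" for a perception model: compare its
# detections against labeled ground truth and flag dangerous misses
# (frames where a real pedestrian was dismissed as a false positive).
# The detect() stub and frame data are hypothetical stand-ins.

def detect(frame):
    # Stand-in for a trained perception model's output:
    # True = "pedestrian present", False = "no pedestrian".
    return frame["model_says_pedestrian"]

def grade(frames):
    """Count correct calls, false alarms, and dangerous misses."""
    correct = false_alarms = misses = 0
    retrain_queue = []  # frames to feed back into further training
    for frame in frames:
        predicted = detect(frame)
        actual = frame["pedestrian_present"]
        if predicted == actual:
            correct += 1
        elif predicted and not actual:
            false_alarms += 1  # annoying, but not dangerous
        else:
            misses += 1        # safety-critical: a pedestrian was dismissed
            retrain_queue.append(frame)
    return correct, false_alarms, misses, retrain_queue

# Labeled validation frames: ground truth vs. the model's call.
frames = [
    {"pedestrian_present": True,  "model_says_pedestrian": True},
    {"pedestrian_present": True,  "model_says_pedestrian": False},  # miss
    {"pedestrian_present": False, "model_says_pedestrian": True},   # false alarm
    {"pedestrian_present": False, "model_says_pedestrian": False},
]

correct, false_alarms, misses, retrain_queue = grade(frames)
print(correct, false_alarms, misses)  # 2 1 1
```

In this toy run, the single miss is the case the article warns about: the model saw a pedestrian frame and called it clear. A testing regime like the USC team's aims to surface exactly those frames before the car ever reaches a real road.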
This is why the USC researchers saw the need for additional training: it's not enough to stop at step one. Autonomous cars need the opportunity to test what they think they know – and correct what they get wrong – before getting the chance to make judgment calls that could cost lives.
This careful approach to a new system, in which the design and function are iteratively fine-tuned to maximize performance swiftly and effectively, is something we at Silvervine can really get behind. In fact, that’s exactly how we approach our policy administration systems. For more information, request a demo.