The German Research Center for Artificial Intelligence (DFKI) and TÜV SÜD are partnering to create a roadworthiness test for AI algorithms used by autonomous vehicles. They will research how AI systems learn, and develop an open platform, Genesis, for the validation of AI modules.
AI systems help autonomous vehicles to master the enormous number of potential traffic situations that may occur – a number that TÜV SÜD’s experts have estimated at 100 million for each fully automated driving function. Such systems do not react deterministically and are therefore not exactly predictable; instead, they learn from traffic situations in a process known as deep learning and draw their own conclusions. To ensure their decisions are always safe, TÜV SÜD plans to validate and certify the underlying algorithms.
The aim is for users of the new Genesis platform to upload their data and modules for testing; systems that pass will be awarded the appropriate TÜV SÜD certificate for functional safety.
“The industry has shown enormous interest, with many companies already gearing up to take part in Genesis,” said Dr Christian Müller, head of the Competence Center for Autonomous Driving at DFKI.
The difficulties involved in developing methods for the safety of AI systems are obvious. AI systems use the available data to draw their own conclusions, and successively learn from each encounter. The probability that a vehicle will react correctly thus increases over time.
The experts’ objective is to be able to evaluate the system’s learning progress in a process similar to the theory examination of a driving test. “The deep learning method has shown surprisingly good results in practice, yet still nobody knows how the process actually works,” said Dr Houssem Abdellatif, global head of autonomous driving at TÜV SÜD. “This is what our joint project will now investigate.”
The partners say a practical test is also essential before autonomous vehicles are given the green light for road use. “Once we understand exactly what conclusions the systems draw, we can intervene to control their learning in a targeted manner,” said Dr Abdellatif. “We need to know not just whether a vehicle will brake, but why.”
Data from virtual traffic situations will be used to monitor and correct the algorithms’ learning process. The final result will be a certificate, or driving licence, for an algorithm, which confirms it is safe for road traffic.
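Neither the Genesis platform nor TÜV SÜD’s test suite has been published, but the idea of scoring an AI module against a battery of virtual traffic situations can be illustrated with a minimal sketch. All names below (`Scenario`, `validate`, `toy_module`) are invented for illustration and do not reflect any actual Genesis API:

```python
# Hypothetical sketch: scoring a driving module against virtual traffic
# scenarios, in the spirit of the Genesis validation idea. All names and
# structures here are assumptions, not the real platform.
from dataclasses import dataclass

@dataclass
class Scenario:
    """One virtual traffic situation with the expected safe reaction."""
    description: str
    sensor_input: dict
    expected_action: str  # e.g. "brake" or "continue"

def validate(module, scenarios, pass_threshold=0.99):
    """Run a driving module over scenarios; return (passed, pass rate, failures)."""
    correct = 0
    failures = []
    for s in scenarios:
        action = module(s.sensor_input)
        if action == s.expected_action:
            correct += 1
        else:
            failures.append((s.description, action, s.expected_action))
    rate = correct / len(scenarios)
    return rate >= pass_threshold, rate, failures

# A trivial rule-based stand-in for a learned driving module.
def toy_module(sensor_input):
    return "brake" if sensor_input.get("obstacle_distance_m", 1e9) < 30 else "continue"

scenarios = [
    Scenario("pedestrian ahead", {"obstacle_distance_m": 12}, "brake"),
    Scenario("clear road", {"obstacle_distance_m": 500}, "continue"),
]
passed, rate, failures = validate(toy_module, scenarios, pass_threshold=1.0)
```

In a real certification setting, the failure cases collected by such a harness would be the interesting output: they identify the situations in which the algorithm’s learning process needs correction.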
March 6, 2018