Products on Show

Futureproofing effective sensor validation
Ibeo Automotive Systems

Now that ADAS and AD systems have arrived in the compact class and manufacturers are vying for Level 3 certifications, no OEM can ignore modern driver assistance and automated driving systems.

However, it is becoming increasingly clear that validation of these elaborately developed solutions is expensive and involves risks.

On the one hand, there is no doubt that the sensors must be checked precisely and their output compared with reality. Drivers need to be able to rely on their blind spot assistant to reliably detect whether a hazard occupies the space they intend to move into. Seventy percent accuracy is not enough in such cases.

But how does the project team go about comparing the functionality of hardware and software – of radars, lidars, cameras and algorithms – with reality?

So far, the process has been laborious: after extensive test drives have been conducted, system performance must be compared with reality and then optimized. To do so, individual objects (people, vehicles, road markings) are marked, or 'labeled', by hand. This has been an expensive and unreliable process, requiring hundreds of people to manually edit thousands of hours of test-drive recordings; up to 5,000 hours of post-processing can be necessary to label all objects frame by frame. And because this often monotonous work is prone to human error, the results must be checked again afterward.

The next logical step was to outsource this work and improve it with algorithms. But this raises a data question: can manufacturers in good conscience hand over their test-drive data to an external service provider that may also work for competitors, and that would then be the only party holding both the know-how and the data? Sensor and algorithm data is competitively sensitive. In the long term, handing the data over to a service provider looks like a dead end.

Using algorithms to automatically recognize static and dynamic objects makes sense to reduce costs. But it is also necessary to keep the capabilities for evaluation and the data itself within the company. There are a variety of means to do just that.

First, the interaction of the individual components within the process chain must be ensured. The output of the preceding component must serve as input for the subsequent component. This requires scalable, cloud-compatible software for the automated creation of labeled reference data as well as editors for further processing of the reference data into ground truth data.
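The chaining described above can be sketched in a few lines: each component consumes the output of the one before it, turning raw frames into labeled reference data and then into ground truth. The stage names and data shapes below are purely illustrative, not Ibeo's actual toolchain.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Frame:
    """One recorded sensor frame (hypothetical minimal schema)."""
    timestamp: float
    raw: bytes

@dataclass
class LabeledFrame:
    timestamp: float
    labels: List[str]

def auto_label(frame: Frame) -> LabeledFrame:
    # Stand-in for the automated labeling component: a real system
    # would run detection algorithms on frame.raw.
    return LabeledFrame(frame.timestamp, labels=["vehicle"])

def review(labeled: LabeledFrame) -> LabeledFrame:
    # Stand-in for the editor stage that refines automatically
    # labeled reference data into verified ground truth.
    return labeled

def pipeline(frames: Iterable[Frame], stages: List[Callable]):
    """Feed each stage's output into the next stage, frame by frame."""
    for frame in frames:
        out = frame
        for stage in stages:
            out = stage(out)
        yield out

frames = [Frame(0.0, b""), Frame(0.1, b"")]
ground_truth = list(pipeline(frames, [auto_label, review]))
```

The point of the generic `pipeline` function is the interface contract: any stage can be swapped out as long as its input type matches the previous stage's output type.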

Second, the many terabytes of data generated while test scenarios are driven can only be evaluated effectively if the system can process that volume in a piecemeal, scalable manner.
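Processing data piecemeal typically means streaming a recording in fixed-size chunks so memory use stays bounded regardless of file size. A minimal generator-based sketch (the chunk size and per-chunk work are illustrative):

```python
def read_chunks(path, chunk_size=64 * 1024 * 1024):
    """Yield a recording in fixed-size chunks; memory use is bounded
    by chunk_size no matter how large the file is."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

def process(path, chunk_size=64 * 1024 * 1024):
    """Stand-in evaluation: here we only count bytes, but each chunk
    could equally be dispatched to a cloud worker for labeling."""
    total = 0
    for chunk in read_chunks(path, chunk_size):
        total += len(chunk)
    return total
```

Because each chunk is independent, this structure also scales out naturally: chunks can be fanned out to parallel workers instead of being processed sequentially.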

Third, the sensors of the reference system must be more precise than the sensors to be referenced. At the same time, the reference system must be flexibly adaptable to the sensors under test, regardless of the technology.
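One common way to make this comparison concrete is to measure each detection from the sensor under test against the nearest detection from the (more precise) reference system. The 2D positions below are made up for illustration; real systems would compare full object states.

```python
import math

def match_errors(test_detections, reference_detections):
    """For each (x, y) detection from the sensor under test, return
    the distance to the nearest reference-system detection."""
    errors = []
    for tx, ty in test_detections:
        nearest = min(
            math.hypot(tx - rx, ty - ry)
            for rx, ry in reference_detections
        )
        errors.append(nearest)
    return errors

reference = [(0.0, 0.0), (10.0, 5.0)]   # treated as ground truth
under_test = [(0.2, 0.1), (9.5, 5.5)]   # detections being validated
errors = match_errors(under_test, reference)
```

Note that this only works if the reference detections really are more accurate than the errors being measured; otherwise the metric reflects reference noise as much as sensor-under-test performance, which is exactly why the requirement above exists.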

Finally, the reference system should be regularly expandable or adaptable to further technical developments.

Booth: 6334
