Siddartha Khastgir is the Head of Verification & Validation for Connected & Autonomous Vehicles (CAV) at WMG, University of Warwick. He leads several collaborative R&D projects with industrial and academic partners nationally and internationally. His research focuses on generating safety evidence and arguments, test scenario generation, simulation-based testing, and the safety of AI systems. Leveraging the cross-domain nature of safety, he is also involved in safety research in aviation, marine, and healthcare. Siddartha is an active member of various national and international standardisation and regulatory groups, including ISO, SAE, and ASAM. He currently represents the UK on several ISO technical committees and is the lead author of two new ISO standards on aspects of automated driving systems. He also sits on the United Nations Economic Commission for Europe (UNECE) committees on the safety of automated driving. Prior to joining WMG, Siddartha was with FEV GmbH in Germany, leading automotive software development and testing for series production projects. He has received numerous national and international awards for his research contributions, including the prestigious UKRI Future Leaders Fellowship in 2019, focused on the safety evaluation of CAVs.
Demystifying safety assurance of automated driving systems: The myths and the reality
This talk will reveal the myths and the reality associated with realising a scalable safety assurance process for automated driving systems. While there is widespread acknowledgement of the need for an Operational Design Domain (ODD) based safety assurance framework, various aspects of the framework are either misrepresented or not considered at all. This talk will examine some of these aspects, uncovering the myths (and the reality) associated with simulation-based testing, ODD definition, and wider scenario metrics. While simulation-based testing remains a key component of the framework, more focus needs to shift towards understanding ODD definitions and the validity of simulation in order to give confidence in the output of the testing process.