Conference Program

Room 1: Advances in sensing, vision, mapping, positioning and in-the-loop testing technologies
9.00am - 4.55pm


Dr Blake Lane
Senior Scientist, Advanced Power and Energy Program
University of California


Accelerate and scale autonomous driving validation

Mitra Sinha
Autonomous Development Lead
Autonomous driving requires HIL/SIL testing and verification for safety validation with massive data sets. The resulting computational intensity spans tens to tens of thousands of cores across CPU, GPU and other processor technologies. The ability to cost-effectively scale and easily access the latest technology innovations, while meeting the high-fidelity requirements of HIL/SIL validation, is critical for accelerating both the validation cycle and in-the-loop testing. In this session we will explore how the cloud can provide these capabilities to scale autonomous driving verification and validation globally, for lower cost, higher efficiency and accelerated time-to-market.


Driving sensing and perception beyond today’s capabilities

Dr Maha Achour
CEO, CTO, Founder
Metawave Corporation
Although cameras drive today’s ADAS software stacks, leveraging computational imaging and integration, gaps remain in meeting top safety levels at full driving speeds and in unpredictable operating and weather conditions. Today’s radars cannot meet long-range detection and tracking requirements while also covering wide fields of view at faster refresh rates, providing reliably high imaging resolution and ensuring sufficient power on the receiver side. Metawave is redefining next-generation radar systems with a modular, software-defined hybrid radar architecture that combines the best of phased-array analog beam-steering and digital MIMO technologies, delivering more consistent performance than existing technologies.


Precise localization and mapping with automotive-grade lidars

Jagdish Govind Bhanushali
Senior Software Deep Learning Engineer
The presentation will introduce the technology used by Valeo to perform precise localization using automotive-grade lidars, vehicle odometry and a standard GPS sensor (10m precision) – first by performing real-time motion classification of lidar point detections, and then by localizing within and updating a point cloud map.

10.15am - 10.45am



Unlocking today’s autonomous vehicles depends on reliable ADAS technologies that customers can trust

Tarik Bolat
CEO and Co-founder
The presentation will explain what autonomous vehicles will need to achieve in order to reach mass adoption quickly and safely, such as additions to the vehicle sensor stack that increase precision and reliability. It will describe how additional ADAS capabilities such as Ground Positioning Radar (GPR) are safeguarding automated driving by overcoming common hurdles faced by AVs, such as snow- and rain-covered roads or areas with poor GPS coverage or lane markings. Tarik will also offer his predictions for where he expects fully autonomous vehicles to scale first, and why he believes today’s consumer vehicles are poised to leverage traditional AV technologies for a radically different and safer driving experience. Finally, he will offer valuable insights from working with automakers and Tier 1 partners to help vehicles safely navigate where current ADAS technologies fall short of wide-scale adoption.


Robust mapping solutions for autonomous systems

Divya Agarwal
Robotics Engineer
Lidar-based maps suffer from dynamic objects present in the scene during data collection. Dynamic objects decrease the quality of maps and affect localization accuracy. In this session, we will look at mapping solutions used by autonomous robots and cars, and discuss a simple technique that can provide better long-term mapping solutions for autonomous systems.


Integrity – taking localization beyond accuracy

Alistair Adams
Senior Automotive Product Manager
Swift Navigation
Localization is a vehicle’s ability to identify where it is in the world. For autonomous vehicles, accurately and quickly locating themselves in their environment is critical. The autonomous sensor suite consists of many sensors – including but not limited to optical, ranging and inertial – providing relative position. High-accuracy GNSS is the only absolute position sensor to provide the accuracy and confidence that autonomy requires. Sensors are only as good as the confidence in the output. This session will look at how integrating integrity into the autonomous sensor suite improves overall system safety.


LiDAR design for manufacturing

Dr Zoran Jandric
Engineering Director
Seagate Technology
It has become widely accepted that LiDAR sensors will be an indispensable part of the sensor suite that will enable vehicular autonomy in the future. However, sensor costs remain very high and prevent the ubiquitous adoption of LiDAR sensors. Bringing knowledge and expertise in cost engineering and design for manufacturing from the HDD space into the LiDAR space can accelerate the large-scale deployment of LiDAR sensors. In this talk, some of the key manufacturing technologies will be highlighted.

12.25pm - 1.55pm



Sankalp Dayal
Sr. Applied Scientist
Amazon Lab126


Lessons learned from 10+ million miles of autonomous driving

Marcus McCarthy
Director - On-Road Autonomy
Precise and safe absolute positioning is critical to many autonomous, ADAS and V2X solutions. With more than 20 years of experience developing precise positioning solutions, Trimble experts will discuss how the company’s technology is enabling innovators in autonomous transportation, including General Motors. They will share their experience from more than 10 million miles logged on the road using Trimble RTX technology to maintain in-lane position, and will offer valuable insights into how positioning and orientation technology supports customers. They will also discuss how Trimble is reducing buildout time and costs by delivering reliable POSE (positioning and orientation estimation) with the company’s POS-LV system for AV/ADAS applications.


Innovative 4D AI, addressing and solving perception corner cases

Jacopo Alaimo
NA Sales and Business Development Manager
Artificial intelligence has taken image processing for object detection and classification to a higher level, and good-quality results from AI-based object detection on camera data are easy to find these days. Still, AI detection results are only as good as the quality of the images and the situations used to train the network. These are some of the main reasons why the journey to Level 5 self-driving cars is still long: there remain too many corner cases and difficult circumstances in which current camera and radar technology does not deliver the required detection reliability. This presentation shows a new way of training neural networks on pure point cloud data, resolving some of these corner cases while demonstrating the value of doing so.


Reduce ADAS/ADS development time with in-lab sensor fusion XIL tests

Dr Jeff Buterbaugh
Business Development Manager
Konrad Technologies
With rising development costs and extended testing requirements, ADAS and ADS deployment and adoption efforts continue to face time pressures. Sensor fusion tests enable AI software and hardware components to be tested together to verify and validate ADAS/ADS functional performance. This session will highlight how Konrad Technologies is extending sensor test and sensor fusion test capability across the ADAS/ADS development process, from model-in-the-loop (MIL) to vehicle-in-the-loop (VIL) tests, to realize sensor fusion XIL testing (where XIL = MIL, SIL, HIL, DIL, VIL). We will also share key lessons from our ongoing sensor fusion test projects.

3.10pm - 3.40pm



The gap between lab and road testing for radar sensing systems

Aaron Newman
Business Development Manager
Keysight Technologies
There are numerous examples of ADAS failing in production vehicles in real road situations, whether due to challenging weather conditions, incomplete scenario testing across the n-dimensional set of variables, or even unimagined scenarios. Model-in-the-loop (MIL) and software-in-the-loop (SIL) testing fulfill important parts of the design cycle, but when systems reach the hardware-in-the-loop (HIL) stage, realistic stimulation of radar sensors in dynamic 3D environments has been non-existent – until now. We will present a new approach that provides a realistic reflection environment, including ground clutter, guardrail and overpass returns, and targets behind targets, each with independent motion and reflection characteristics. Such a system, made traceable to NIST, will provide an environment in which every variable can be iterated, even down to the weather. Dangerous scenarios can be tested with high confidence, and every new release of code can be regression tested against critical corner scenarios. This new paradigm will bring safer vehicles to market more quickly.


HIL simulations reach a higher dimension of realism

Gordan Galic
Technical Marketing Director
Matt Daley
Operations Director
Xylon and rFpro teamed up to explore methods of connecting hardware ECUs to simulation software. In this presentation we will describe how we immerse a real-world Surround View parking assistance ECU into a simulated world seen by four fully modeled virtual HD video cameras placed on the vehicle model driving along virtual roads. This ADAS ECU was selected based on its highly visual nature and required transfers of large amounts of video data. We will outline the challenges and solutions for quick and precise translations between physical and virtual domains that enable this full-speed and high-bandwidth HIL simulation setup.


Addressing pedestrian safety with monocular 3D thermal ranging

Chuck Gershman
CEO and Co-founder
Owl Autonomous Imaging (Owl AI)
The current de facto ADAS sensor suite typically comprises mutually dependent visible-light cameras and radar, but when one of these sensors becomes ineffective, so too does the sensor suite. This happens often with pedestrians, cyclists and animals at night or in inclement weather. Owl will discuss a new modality known as monocular 3D thermal ranging that dramatically improves pedestrian safety. The system is based on specialty HD thermal imaging and innovative computer vision algorithms. Operating in the thermal spectrum, these algorithms exploit angular, temporal and intensity data to produce ultra-dense point clouds and highly refined classification.