Discover more about the topics and technologies to be discussed at this year's (fee-to-attend) conferences, via a series of exclusive interviews with a selection of our expert speakers
Dr Jason Eichenholz, co-founder and CTO of Luminar, discusses why lidar is still the go-to vision system for AVs despite advances in camera technology, and how to reduce the cost of lidar sensors without compromising on system performance.
Catch Jason’s presentation, Building the vision for autonomous mobility, at the Autonomous Vehicle Test & Development Symposium. Purchase your delegate pass here.
Tell us more about your presentation.
My talk will focus on describing and quantifying the full set of requirements that must be met simultaneously to achieve full autonomy. Luminar has defined 14 requirements necessary for full autonomy, and I will go into more depth about what those consist of and why they matter. The talk will also address the growing chasm between the current state of the art and what’s required for production-ready Level 3 autonomy, and what that gap means for true driver-out-of-the-loop autonomy.
What have been the latest breakthroughs in lidar technology?
For the first time ever, we have a system that can simultaneously deliver camera-like resolution and radar-like range. Luminar’s lidar sensors have extremely high pixel density, capturing millions of points per second, with each point able to detect low-reflectivity objects at more than 250 meters’ distance.
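To give a rough sense of what a point rate in the millions per second buys, the point rate, frame rate and field of view together determine how far apart scan points land at a given range. The figures below are illustrative assumptions for the sketch, not Luminar specifications:

```python
import math

def point_spacing_m(points_per_sec, frame_rate_hz, h_fov_deg, v_fov_deg, range_m):
    """Approximate spacing between adjacent scan points at a given range,
    assuming points are spread uniformly over a rectangular field of view."""
    points_per_frame = points_per_sec / frame_rate_hz
    fov_area_deg2 = h_fov_deg * v_fov_deg
    spacing_deg = math.sqrt(fov_area_deg2 / points_per_frame)  # angular pitch
    return range_m * math.radians(spacing_deg)                 # arc length at range

# Hypothetical figures: 2M pts/s, 10 Hz frames, 120 x 30 degree FOV, 250 m range
print(round(point_spacing_m(2_000_000, 10, 120, 30, 250), 2))  # ~0.59 m between points
```

Under these assumed numbers, points land roughly half a meter apart at 250 m, which is the scale needed to resolve a pedestrian or tire debris at that distance.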
Why lidar? Radar and camera systems are sometimes touted as sufficient for AV safety, while also being cheaper, so why use lidar?
Lidar is the primary sensor for every serious AV program; even the most advanced programs rely heavily on lidar to navigate the world and perform object recognition, detection, segmentation, and so on. It’s absolutely critical to have lidar to help solve the last 1% of edge cases that can arise in autonomous driving.
Despite many efforts to reduce the cost of lidar, performance has not advanced. Today, the most widely deployed systems can only see objects that are 10% reflective at 30 meters, which at freeway speeds equates to a fraction of a second of reaction time. A system also needs enough resolution to accurately make out whether objects are present and what they are.
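The reaction-time point can be made concrete with a back-of-the-envelope calculation (the speed and braking figures below are illustrative assumptions, not numbers from the interview): once you subtract the distance the vehicle needs just to brake, a 30 m detection range leaves no decision time at freeway speed, while a 250 m range leaves several seconds.

```python
def decision_time_s(detect_range_m, speed_mps, decel_mps2=8.0):
    """Seconds available to decide before braking must begin, assuming a
    stationary obstacle ahead and constant braking deceleration."""
    braking_dist_m = speed_mps ** 2 / (2 * decel_mps2)  # stopping distance: v^2 / 2a
    return (detect_range_m - braking_dist_m) / speed_mps

freeway = 30.0  # m/s, roughly 108 km/h (an assumed freeway speed)
print(decision_time_s(30, freeway))   # negative: cannot even stop in time
print(decision_time_s(250, freeway))  # several seconds of decision margin
```

At 30 m/s the braking distance alone is about 56 m, so a 30 m detection range is already inside the stopping distance; a 250 m range leaves over six seconds to classify the object and plan a maneuver.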
To resolve these issues, Luminar has designed for an unprecedented level of performance in both range and resolution. At the end of the day, the cost of a sensor is moot if it can’t get a passenger from point A to point B safely every single time. That said, Luminar’s sensor architecture is incredibly efficient to build, in terms of both cost and time. To achieve this, we’ve built all of our own components: lasers, receivers, scanning mechanisms and processing electronics. Rather than buy silicon chips off the shelf, Luminar has developed its own highly sensitive InGaAs chip using only a fraction of an InGaAs wafer, keeping costs down and performance up. Pursuing this architecture from the beginning means the system was designed from the chip level up to scale into consumer vehicles, from both a manufacturability and a cost standpoint.
What challenges are there still for lidar?
Luminar is driving size, cost and power demands out of the system. There is an ongoing conversation in the AV space about cost, and a resulting push to bring lidar systems down from the tens of thousands of dollars they stand at today. The biggest problem with this trend is that companies are achieving the price reduction by compromising on the performance of the device. This runs counter to Luminar’s goal of making self-driving cars truly safe and ubiquitous.