Editor’s Note: This interview has been edited and condensed for clarity. Keilaf’s answers as presented here are a combination of direct quotes and paraphrasing. This article is part of Joanna Makris’ Fireside Chat series, where she provides retail investors with the scoop on the hottest technologies and trends from today’s business leaders, industry experts and money managers.
I had an opportunity to sit down and chat (via Zoom) with Omer David Keilaf, CEO and co-founder of Innoviz Technologies (NASDAQ:INVZ). Innoviz is on my buy list as an early leader in lidar (Light Detection and Ranging) technology. Prior to founding Innoviz, Keilaf was a senior officer in Unit 81, the elite technological unit of the Israel Defense Forces (IDF). In this role, he helped pioneer technologies to make the impossible possible.
After “MacGyver-ing” through several technological solutions for the IDF, he’s now solving yet another impossible technological challenge. Keilaf is working to make autonomous driving (AD) safe and affordable. By using shorter-wavelength 905 nanometer technology, his company Innoviz is bending the laws of physics, while disrupting the cost-performance curve to help make AD mainstream.
Here’s what Keilaf had to say about the state of the industry, the hotly contested 1550 versus 905 nanometer debate, why Elon Musk is wrong about lidar and more.
How far away are we from seeing autonomous driving as a mainstream technology?
The lidar industry is divided into two different wavelength camps: 1550 and 905 nanometers. What’s the difference?
What do you say to critics who claim it’s impossible to deliver a high performance lidar at 905 nanometers?
Current semiconductors become transparent at wavelengths above roughly 1,000 nanometers, which means that at 905 nanometers we can use silicon and low-cost standard processes for our lidar. We also use a single-photon detector, which doesn’t need a lot of light in order to register a reflection. So, even with lower laser power, we can still get a return from 200 meters away. In contrast, 1550 nanometer systems need a more powerful laser, and they can’t use standard silicon in the detector, which also makes them more expensive. The performance difference comes across very clearly: our solution is ten times smaller and much cheaper than 1550 solutions.
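The intuition behind Keilaf’s photon-budget argument can be sketched with a toy calculation. All of the numbers below (laser power, aperture size, optics efficiency) are illustrative assumptions, not Innoviz specifications; the point is simply that a detector sensitive to individual photons can register a return at 200 meters even with a modest pulse:

```python
import math

# Toy lidar photon-budget sketch (illustrative numbers, NOT Innoviz specs).
# For a diffusely reflecting (Lambertian) target, received power falls off as:
#   P_rx ~= P_tx * reflectivity * aperture_area / (pi * range^2) * efficiency

def received_photons(p_tx_w, pulse_s, wavelength_m, reflectivity,
                     aperture_m2, range_m, efficiency):
    h, c = 6.626e-34, 3.0e8          # Planck's constant, speed of light
    p_rx = p_tx_w * reflectivity * aperture_m2 / (math.pi * range_m**2) * efficiency
    energy = p_rx * pulse_s          # energy collected during one pulse
    photon_energy = h * c / wavelength_m
    return energy / photon_energy    # number of photons reaching the detector

# Hypothetical 905 nm pulse against a 10%-reflective target at 200 m
n = received_photons(p_tx_w=75.0, pulse_s=5e-9, wavelength_m=905e-9,
                     reflectivity=0.1, aperture_m2=5e-4, range_m=200.0,
                     efficiency=0.5)
print(f"~{n:.0f} photons per pulse at the detector")
```

With these made-up but plausible inputs, a few hundred photons arrive per pulse, which is plenty for a single-photon detector but would be invisible to a detector that needs a strong return, hence the higher laser power (and cost) of the competing approach.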
Innoviz won an OEM production contract with BMW for Level 3 lidar. What does that mean exactly?
[See the following table for more context:]
| Level | Driver engagement | Degree of automation |
|---|---|---|
| Level 2 | Hands off, eyes temporarily off | Partly automated |
| Level 3 | Hands off, eyes off | Highly automated |
| Level 4 | Hands off, mind off | Fully automated |
| Level 5 | Passenger | Autonomous |
Level 3 is like the “MVP” of autonomous driving. People are already spending a lot of time on the highway, and feeling very passive about driving, which makes this a good application for the technology. A lot of car manufacturers are talking about Level 3 capability — but there’s a big difference between doing Level 3 at 20 km per hour versus 130 km per hour. The requirements for the sensor and for reaction time are very different. Innoviz is the only certified automotive-grade high-performance lidar.
The BMW program is expected to launch by the end of 2021. This was the result of hundreds of millions of dollars spent, hundreds of top engineers and tens of millions of kilometers driven for validation. Only selected OEMs are able to go through such a long and rigorous process.
Where are we right now on the cost curve?
Elon Musk famously hates lidar, calling it a ‘crutch’ for autonomous vehicle makers. Thoughts?
For an autonomous car that mostly relies on cameras, like Tesla, low-light conditions are a safety issue. A camera needs to detect that there’s an object in front of it, but it also needs context — it has to be able to classify that object to understand what it’s actually looking at. As a result, the AI required for a camera is very complex. It needs a lot more processing power. In contrast, it’s much easier for a lidar to give you a good understanding of the scene. The lidar is already collecting a lot of information, including the physical measurements of the object in front of it. The latency, or time to reaction, is faster, and the processing requirements are leaner. It’s strange that someone with such vision would handicap a machine this way: Tesla is using a camera with a 2D sensor and trying to translate its output into 3D.
What do you think about cryptocurrency?
Your comments and feedback are always welcome. Let’s continue the discussion. Email me at firstname.lastname@example.org.
On the date of publication, Joanna Makris did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.
Joanna Makris is a Market Analyst at InvestorPlace.com. A strategic thinker and fundamental public equity investor, Joanna leverages over 20 years of experience on Wall Street covering various segments of the Technology, Media, and Telecom sectors at several global investment banks, including Mizuho Securities and Canaccord Genuity.
Click here to track her top trades of the week, where she sheds light on market psychology and momentum, while leveraging her deep knowledge of fundamental analysis to deliver event-driven trading strategies.