Self-Driving Startup Aurora Acquires LiDAR Company Blackmore as the Argument about LiDAR Relevancy Heats Up

2Q 2019 | IN-5519

Aurora Acquires LiDAR Company Blackmore

NEWS


Self-driving startup Aurora announced last month that it would be acquiring Light Detection and Ranging (LiDAR) developer Blackmore for an undisclosed fee. Aurora is best known for its leadership team, which consists of Chris Urmson, who previously led the driverless-car project at Google; Sterling Anderson, who ran Tesla’s Autopilot team; and Drew Bagnell, who helped form Uber’s Advanced Technologies Group. Although Aurora’s work in autonomy has been shrouded in secrecy, its last funding round of US$530 million attracted investment from Amazon.

Blackmore, meanwhile, has attracted investments from BMW and Toyota, highlighting its leading role in LiDAR development. The company differentiates itself from other vendors in the space through its deep ties to the optical telecom industry: it uses Frequency-Modulated Continuous Wave (FMCW) LiDAR, as opposed to the traditional Time-of-Flight (ToF) technology used by most providers. The advantages of this approach are more robust sensing and the ability to measure the velocities of objects directly, with limited computation.
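To make that distinction concrete, the sketch below contrasts the two measurement principles: ToF recovers range from the round-trip time of a pulse, while a triangular-chirp FMCW measurement recovers both range and radial velocity from a pair of beat frequencies. The chirp parameters and target values are illustrative assumptions only, not details of Blackmore’s design.

```python
# Minimal, illustrative comparison of ToF and FMCW ranging.
# All parameters are assumed example values, not Blackmore's actual design.

C = 3.0e8  # speed of light, m/s

# --- ToF: range comes from round-trip time alone; velocity needs successive frames ---
def tof_range(round_trip_time_s: float) -> float:
    return C * round_trip_time_s / 2.0

# --- FMCW (triangular chirp): one measurement yields range AND radial velocity ---
WAVELENGTH = 1550e-9             # m, telecom-band laser (assumed)
BANDWIDTH = 1.0e9                # Hz, chirp excursion (assumed)
CHIRP_TIME = 10e-6               # s, duration of one chirp ramp (assumed)
SLOPE = BANDWIDTH / CHIRP_TIME   # chirp slope, Hz per second

def fmcw_range_velocity(f_beat_up: float, f_beat_down: float):
    """Recover range and radial velocity from up- and down-chirp beat frequencies."""
    f_range = (f_beat_up + f_beat_down) / 2.0    # component due to distance
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # component due to motion (Doppler)
    rng = C * f_range / (2.0 * SLOPE)
    vel = WAVELENGTH * f_doppler / 2.0           # positive = target approaching
    return rng, vel

if __name__ == "__main__":
    # Synthetic target 60 m away, closing at 15 m/s:
    true_range, true_vel = 60.0, 15.0
    f_r = 2.0 * true_range * SLOPE / C           # range-induced beat frequency
    f_d = 2.0 * true_vel / WAVELENGTH            # Doppler shift
    print("ToF range:", tof_range(2.0 * true_range / C), "m")
    print("FMCW (range, velocity):", fmcw_range_velocity(f_r - f_d, f_r + f_d))
```

The point of the example is that an FMCW receiver gets per-point velocity essentially for free from the Doppler term, whereas a ToF system must infer velocity by differencing ranges across successive frames.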

The acquisition of another LiDAR startup, following acquisitions and significant investments from tier ones and Original Equipment Manufacturers (OEMs), shows further backing for LiDAR technology. However, LiDAR’s most famous critic, Tesla CEO Elon Musk, remains vocal in his opposition to the technology and, more recently, Nissan announced its backing of a camera-only approach for semi-autonomous applications. So, despite the increasing investment in LiDAR, support for camera-only approaches is a message growing in volume throughout the industry.

LiDAR: Important Piece of the Puzzle or Unnecessary Cost?

IMPACT


When the race toward driverless vehicles exploded back in the early 2000s, every major vendor in the space was touting the need for LiDAR, and nearly every robotaxi prototype was fitted with several LiDAR units. The image of a large mechanical spinning LiDAR unit fitted to the roof of a vehicle soon became synonymous with driverless vehicles.

However, as time went on and the progress of driverless vehicles slowed, there were quiet rumblings from those in the industry that LiDAR is not needed, with some even arguing that it was hindering driverless development. The most outspoken critic has been Elon Musk, who has been criticizing LiDAR technology since 2017, describing it as a “crutch” in early 2018. More recently, another major OEM, Nissan, joined the LiDAR-bashing bandwagon, announcing its camera-only approach to vehicle autonomy in May 2019.

This, however, is not the general consensus of the wider industry. GM Cruise and Waymo, which many commentators consider the global leaders in driverless vehicle development, have both been strong proponents of the technology. Waymo has even moved to sell its proprietary LiDAR technology to third parties to help increase scale and reduce costs. Outside of the robotaxi market, Audi has used LiDAR in the first commercial Society of Automotive Engineers (SAE) Level 3 semi-autonomous system, available on its A8 models in Germany, and in 2018 BMW announced a key partnership with Innoviz to supply solid-state LiDAR for its SAE Level 3 systems, which will likely arrive in 2021.

LiDAR critics claim that advancements in sensing capabilities, computing technology, and software can be used to build the all-important 3D perception maps and to enable cameras to perform better in poor lighting conditions, eliminating the need for LiDAR, which they deem a costly and imperfect technology. However, these claims have yet to be proven in a solid commercial application. Tesla, the strongest critic of LiDAR technology, has yet to develop anything beyond SAE Level 2, with Autopilot being an SAE Level 2+ application.
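To illustrate the camera-only argument, the sketch below uses classic stereo geometry, where depth is recovered as Z = f·B/d from the disparity between two camera views; the focal length and baseline are assumed example values, not any vendor’s actual pipeline. It also hints at the weakness behind the unproven claims: at long range, small disparity errors translate into large depth errors, which is one reason LiDAR is still used as a complementary sensor.

```python
# Minimal sketch of camera-only depth recovery via stereo triangulation.
# FOCAL_PX and BASELINE_M are assumed example values, not a real vendor's setup.

FOCAL_PX = 1400.0    # focal length in pixels (assumed)
BASELINE_M = 0.30    # distance between the two cameras in metres (assumed)

def stereo_depth(disparity_px: float) -> float:
    """Depth Z = f * B / d for a matched pixel pair with disparity d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px

if __name__ == "__main__":
    print(stereo_depth(7.0))  # 7 px of disparity -> 60.0 m
    print(stereo_depth(6.0))  # a 1 px matching error at that range -> 70.0 m
```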

It is clear that camera-only approaches are sparking growing interest in the industry. However, aside from Tesla, no serious player has fully committed to that approach for higher-level autonomy. It would seem, therefore, that for the time being LiDAR is still ahead. But how can it stay ahead, and what’s next?

Vision-First Approaches Emerge as Frontrunners

RECOMMENDATIONS


While the jury has yet to reach a verdict on camera-only approaches, it is clear that there has been a fundamental shift in the way LiDAR is used in autonomous vehicle applications. Where LiDAR used to be the primary sensor in early robotaxi applications, it has gradually been relegated to a secondary role in sensing stacks, playing second fiddle to cameras.

The go-to approach for a growing number of vendors in the autonomous vehicle space has been to produce “vision-first” solutions, with LiDAR playing a secondary role to cameras: enabling the detection of objects in poor lighting conditions and providing the all-important 3D perception map. In this sense, LiDAR has been confined to a radar-like role, much as in Advanced Driver-Assistance Systems (ADAS), complementing and helping fill gaps in the data provided by the camera.
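A minimal, hypothetical sketch of such a vision-first loop is shown below: camera detections lead, and LiDAR is consulted only to attach range and to corroborate low-confidence detections (for example, at night). The data structures, thresholds, and matching rule are illustrative assumptions, not any vendor’s actual architecture.

```python
# Hypothetical "vision-first" fusion loop: cameras lead, LiDAR plays a secondary role.
# All names, fields, and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CameraDetection:
    label: str
    bearing_deg: float   # direction of the object relative to the vehicle
    confidence: float    # 0..1 score from the vision model

@dataclass
class LidarReturn:
    bearing_deg: float
    range_m: float

@dataclass
class FusedObject:
    label: str
    bearing_deg: float
    range_m: Optional[float]

def fuse_vision_first(cams: List[CameraDetection],
                      lidar: List[LidarReturn],
                      match_deg: float = 2.0,
                      min_conf: float = 0.5) -> List[FusedObject]:
    fused = []
    for det in cams:
        # Secondary role 1: attach a range by finding nearby LiDAR returns in bearing.
        nearby = [r.range_m for r in lidar
                  if abs(r.bearing_deg - det.bearing_deg) <= match_deg]
        rng = min(nearby, default=None)
        # Secondary role 2: keep low-confidence camera detections (e.g., poor lighting)
        # only if LiDAR corroborates them.
        if det.confidence >= min_conf or rng is not None:
            fused.append(FusedObject(det.label, det.bearing_deg, rng))
    return fused

if __name__ == "__main__":
    cams = [CameraDetection("pedestrian", bearing_deg=10.0, confidence=0.35)]
    lidar = [LidarReturn(bearing_deg=10.5, range_m=22.3)]
    print(fuse_vision_first(cams, lidar))
```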

With the growing interest in camera-only approaches, LiDAR vendors will need to ensure that they are correctly positioned to meet the likely requirements from OEMs. For secondary sensors in consumer vehicle applications, this is likely to come down to cost and to providing functionality complementary to cameras. In terms of cost, LiDAR vendors will have to reach US$150 per unit in the next few years to achieve commercialization in the traditional automotive market. The use of key technologies such as VCSEL arrays for low-cost illumination, as provided by AMS and TriLumina, along with the development of MEMS technology, as pioneered by Innoviz and LeddarTech, coupled with increasing scale, should help vendors realistically achieve this.

Meanwhile, in the robotaxi market, the days of using US$50,000+ mechanical LiDAR sensors also seem to be numbered, given that the use case for LiDAR has shifted away from high-resolution, potential camera replacements toward lower-resolution solutions that complement cameras. Furthermore, the constantly lengthening timeline for robotaxi operations means those operating in this space have become more concerned with costs, where previously it was merely a race to market, irrespective of cost. Therefore, companies such as Velodyne that have long pitched high resolution as a key selling point may need to reconsider their strategies, given that vendors are now using LiDAR as a secondary sensor and looking more closely at costs. Velodyne may find it hard to justify a cost of US$10,000+ per sensor when the sensor is merely used for redundancy purposes.

Overall, the general industry consensus has not changed. For most, LiDAR forms an essential part of the sensing stack, complementing both cameras and radar in terms of range and resolution. However, LiDAR has a growing number of critics, and vendors need to position it correctly in terms of functionality and cost to ensure that it maintains its role in that sensing stack.
