Off-loading Onboard Computing in Driverless Vehicles: Role of HD Maps, 5G, and the (Edge) Cloud

By Dominique Bonte | 4Q 2018 | IN-5274


Deep Learning and Sensor Fusion Driving Onboard Compute Requirements 

NEWS


The consensus on how to perform advanced, deep learning-based sensor fusion in autonomous and driverless vehicles is converging on heavy-lifting Graphics Processing Unit (GPU)-based central embedded processing units supporting tens of teraflops (TFLOPS; one TFLOPS equals one trillion floating-point operations per second). This computing power allows multiple deep-learning inferencing applications to run in parallel, including machine vision, High Definition (HD) map-based positioning (e.g., Simultaneous Localization and Mapping, or SLAM), and driver or passenger monitoring. But this inflation of onboard computing comes at a price, not only in terms of the cost of the compute module but also in terms of the power budget, which impacts cooling needs and Electric Vehicle (EV) range.
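To make the power-budget point concrete, here is a back-of-envelope sketch of how a compute module's constant draw erodes EV range. All figures (compute draw, cooling overhead, battery size, drivetrain consumption, average speed) are illustrative assumptions, not measured vendor data:

```python
# Back-of-envelope estimate of how an onboard AV compute module's power
# draw erodes EV range. All figures are illustrative assumptions.

COMPUTE_POWER_W = 500        # assumed draw of a GPU-based AV compute stack
COOLING_OVERHEAD = 0.30      # assumed extra power for cooling (30%)
BATTERY_KWH = 75.0           # assumed EV battery capacity
CONSUMPTION_WH_PER_KM = 180  # assumed drivetrain consumption

def range_km(extra_load_w: float, avg_speed_kmh: float = 50.0) -> float:
    """Range given an additional constant electrical load."""
    extra_wh_per_km = extra_load_w / avg_speed_kmh  # W / (km/h) = Wh/km
    return BATTERY_KWH * 1000 / (CONSUMPTION_WH_PER_KM + extra_wh_per_km)

baseline = range_km(0)
with_compute = range_km(COMPUTE_POWER_W * (1 + COOLING_OVERHEAD))
print(f"baseline: {baseline:.0f} km, with AV compute: {with_compute:.0f} km "
      f"({100 * (1 - with_compute / baseline):.1f}% range loss)")
```

Under these assumed numbers, a 500 W compute stack plus cooling costs several percent of range on its own, which is why lower-power architectures matter.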

This is prompting multiple hardware, software, and mapping startups to develop lighter-weight solutions. French semiconductor vendor Kalray recently announced that its Massively Parallel Processor Array (MPPA) architecture can run the Baidu Apollo open autonomous vehicle software platform stack, a capability demonstrated at the AutoSens conference. Kalray's 288-core MPPA processor offers computing power comparable to existing solutions at much lower power consumption. On-chip memory allows time-critical, compute-heavy functions to run on the same chipset.

On-the-Fly Computing versus Reliance on High-Accuracy Map Content

IMPACT


Mapping is an unexpected ally in this quest for embedded low-power solutions. While HD maps are necessary for accurately positioning the vehicle within its environment and within its lane, live map attributes like speed information and lane markings can also be leveraged to off-load some of the central computing requirements. This essentially turns maps into sensors (Map as a Sensor), a paradigm pioneered by map vendor HERE. Maps with built-in maximum speed information avoid the need to read and interpret speed signs on the fly, which not only removes some of the burden from the central machine vision/sensor fusion compute platform but also improves reliability (many speed signs are hard to read, covered with snow, or faded), assuming the maps are updated frequently (via live maps). Vehicle sensors would still need to check existing speed signs to keep the maps up to date, but only as a secondary, non-mission-critical task (e.g., the self-healing map principle based on closed-loop crowdsourcing of vehicle sensor data).
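As a minimal sketch of the Map as a Sensor pattern described above: prefer the map attribute so the vision stack is off-loaded, fall back to a sign reading when the map lacks coverage, and queue a self-healing report on disagreement. The data structures and field names here are hypothetical, not HERE's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeedObservation:
    position: tuple                  # (lat, lon) of the vehicle
    map_limit_kmh: Optional[int]     # limit from the HD map attribute, if any
    vision_limit_kmh: Optional[int]  # limit read from a sign, if detected

def effective_speed_limit(obs: SpeedObservation) -> Optional[int]:
    """Prefer the map attribute so the vision stack is off-loaded;
    fall back to the sign reading only when the map has no coverage."""
    if obs.map_limit_kmh is not None:
        return obs.map_limit_kmh
    return obs.vision_limit_kmh

def self_healing_report(obs: SpeedObservation) -> Optional[dict]:
    """Secondary, non-mission-critical task: when the sign reading
    disagrees with the map, queue a crowdsourced map-update report."""
    if (obs.vision_limit_kmh is not None
            and obs.map_limit_kmh is not None
            and obs.vision_limit_kmh != obs.map_limit_kmh):
        return {"pos": obs.position,
                "observed": obs.vision_limit_kmh,
                "map": obs.map_limit_kmh}
    return None
```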

Obviously, maps will never completely replace real-time sensor fusion computing; they can only document (semi-)static objects. But they can be used to a greater or lesser extent to assist the machine vision platform by removing some of the burden of real-time perception. This does, however, require map generation processes to comply with functional safety standards (specifically, International Organization for Standardization [ISO] 26262, “Road Vehicles—Functional Safety”), as map content can directly impact autonomous driving decisions.

Still, in order to maximize reliability, it would be prudent, especially in the first years of driverless vehicle deployment, to build as much redundancy as possible into Level 4 and Level 5 vehicles by matching and comparing map content with on-the-fly sensor data analysis and machine vision, a lesson the aviation industry learned long ago.
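A hedged sketch of what such a redundancy cross-check could look like, loosely modeled on dual-channel voting in avionics; the states and tolerance parameter are illustrative assumptions:

```python
from enum import Enum

class Agreement(Enum):
    MATCH = "match"           # channels agree; act with full confidence
    MISMATCH = "mismatch"     # channels disagree; degrade or alert
    SINGLE = "single-source"  # only one channel available

def cross_check(map_value: float, perception_value: float,
                tolerance: float) -> Agreement:
    """Dual-channel comparison: an autonomous driving decision is taken
    at full confidence only when the HD-map channel and the real-time
    perception channel agree within the given tolerance."""
    if map_value is None or perception_value is None:
        return Agreement.SINGLE
    if abs(map_value - perception_value) <= tolerance:
        return Agreement.MATCH
    return Agreement.MISMATCH
```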

On a different level, vendors like Civil Maps are developing lightweight HD maps based on voxel-based fingerprints of the environment, reducing map size from gigabytes (GB) to hundreds of kilobytes (KB) per kilometer. This approach offers multiple benefits in terms of crowdsourcing sensor data, updating maps over the air, and lowering computing and power requirements. It also allows the deep-learning software stack to run on more affordable ARM architectures.
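Civil Maps has not published its exact encoding, but the general idea can be sketched as follows: quantize raw point cloud returns into coarse voxels and keep only compact per-voxel hashes. The voxel size and synthetic road scene below are illustrative assumptions, not the company's actual format:

```python
import numpy as np

VOXEL_M = 0.5  # assumed voxel edge length in meters

def voxel_fingerprint(points: np.ndarray) -> set:
    """Reduce a dense LiDAR point cloud (float32 xyz, MBs per km) to a
    set of occupied-voxel hashes (a few bytes per voxel). An illustrative
    stand-in for Civil Maps' proprietary fingerprinting."""
    voxels = np.unique(np.floor(points / VOXEL_M).astype(np.int32), axis=0)
    return {v.tobytes() for v in voxels}

# Synthetic scene: points clustered on a road surface, as real LiDAR
# returns are, rather than filling the whole volume.
xy = np.random.rand(1_000_000, 2) * [200.0, 8.0]  # 200 m of 8 m-wide road
z = np.random.normal(0.0, 0.05, (1_000_000, 1))   # near-flat surface
cloud = np.hstack([xy, z]).astype(np.float32)

fp = voxel_fingerprint(cloud)
print(f"{cloud.nbytes / 1e6:.1f} MB of raw points -> "
      f"~{len(fp) * 12 / 1e3:.0f} KB of voxel hashes")
```

Because LiDAR returns lie on surfaces, a megabyte-scale cloud collapses to a few thousand occupied voxels, which is what makes over-the-air updates and onboard matching cheap.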

The Elephant in the Room: Will Low-Latency 5G Connectivity Shift Vehicle Computing to the Edge?

RECOMMENDATIONS


Conventional wisdom has it that car Original Equipment Manufacturers (OEMs) will never rely on cloud computing for safety-critical operations. However, 5G might challenge this premise through its promise of low-latency connectivity for both short-range (e.g., Device-to-Device [D2D] sidelink as part of Cellular Vehicle-to-Everything [C-V2X]) and long-range applications (e.g., end-to-end network slicing). This opens up the potential of low-latency cloud services for mission-critical use cases based on edge cloud computing capabilities integrated within the telco network close to the vehicle (for example, at the base station), allowing tight control of Quality of Service (QoS) and overall reliability. This is referred to as Mobile Edge Computing (MEC). Example use cases include communicating road hazards, such as extreme weather conditions and accidents, beyond the range of V2X technology. If car OEMs accept cellular technology in the form of C-V2X sidelink capabilities, the logical next step would be to leverage cellular technology further upstream in the telco network, as long as all parameters remain under the direct control of the managed 5G network.

For the telco industry (both carriers and infrastructure vendors), this represents another opportunity to rise above their native role as connectivity providers by challenging established cloud vendors like Amazon and Microsoft, which are also aggressively targeting the automotive opportunity. The edge cloud, safely embedded and integrated within the telco network, seems out of reach for over-the-top cloud providers, which are unlikely to venture into local last-mile infrastructure.

While the edge cloud might not yet be ready to take over embedded machine vision, it could become the ideal platform for vehicle lifecycle management and closed-loop sensor data crowdsourcing, for which its local scope is a good match. Map and other data updates only need to be shared with vehicles in a relatively small geofenced area, at least as far as their mission-critical relevance is concerned: warning nearby vehicles of changes in the environment or the traffic. This would yet again extend vehicles' visibility range and improve overall reliability.
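As a hedged illustration of this geofenced distribution pattern, the sketch below models a hypothetical MEC function co-located with a base station that relays a hazard only to subscribed vehicles within its radius; all class and field names are invented for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Hazard:
    lat: float
    lon: float
    kind: str  # e.g., "black-ice", "accident"

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two WGS-84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

class EdgeNode:
    """Hypothetical MEC function at a base station: it relays a hazard
    only to vehicles inside its geofence, so mission-critical updates
    never need a round trip to a distant central cloud."""
    def __init__(self, lat, lon, radius_km=2.0):
        self.lat, self.lon, self.radius_km = lat, lon, radius_km
        self.subscribed = {}  # vehicle_id -> (lat, lon), kept fresh via 5G

    def publish(self, hazard: Hazard):
        return [vid for vid, (vlat, vlon) in self.subscribed.items()
                if haversine_km(hazard.lat, hazard.lon, vlat, vlon)
                   <= self.radius_km]

node = EdgeNode(lat=48.8566, lon=2.3522)  # hypothetical base station
node.subscribed = {"veh-1": (48.8570, 2.3530), "veh-2": (48.9000, 2.4000)}
print(node.publish(Hazard(48.8568, 2.3525, "black-ice")))  # -> ['veh-1']
```

The filtering by geofence is the design point: only the nearby vehicle receives the warning, keeping both latency and backhaul traffic low.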

The edge cloud certainly gives carriers a much better weapon against the public cloud giants than direct confrontation through telco cloud initiatives. It allows cloud services to extend to new types of more critical applications that so far have run only on embedded systems. The real challenge will be identifying which automotive applications car OEMs are willing to off-load to the edge cloud. Undoubtedly, this topic will be debated at conferences for many years to come.