Chipset Vendors in the Race to Develop AI-Enabled Chipsets for Robotics

2Q 2019 | IN-5443

As mobile and commercial robots begin to see significant growth, chipset vendors in this space are adding features and accelerating activity. While innovation in the market is increasing, hardware and compute architecture need to complement one another from the start.


Qualcomm, NVIDIA, and Intel Are Building up Their Robotic Strategies


Chipset vendors are racing to develop robot-specific hardware, driven by the unique requirements of autonomous machines, such as multi-robot control, Simultaneous Localization and Mapping (SLAM), and improved power efficiency. This, in turn, is being driven by an acceleration of mobile robotics deployments both in and out of their main markets of manufacturing and logistics. Qualcomm’s recent announcement of its Robotics RB3 platform comes at a critical time of accelerating activity among chipset and compute vendors that want to take advantage of the strong growth in mobile and commercially viable robots.

Qualcomm already has a well-developed strategy when it comes to robotics and has partnered with some of the best-known robotics companies, including Intuition Robotics and Anki, maker of the Cozmo and Vector robotic toys. However, the Robotics RB3 is far more targeted. The platform is based on Qualcomm’s SDA/SDM845 System-on-a-Chip (SoC), integrating high-performance heterogeneous computing, 4G/Long-Term Evolution (LTE) connectivity, and a Qualcomm Artificial Intelligence (AI) Engine for on-device machine learning and computer vision. Additional features include high-fidelity sensor processing for perception, odometry for localization, mapping, and navigation, security features, and Wi-Fi connectivity. Qualcomm said it will also introduce 5G connectivity support for the platform later this year, enabling low latency and high throughput for industrial robotics applications.

Meanwhile, NVIDIA, Qualcomm’s biggest rival in this space, is building its strategy on its AGX AI products. The future of autonomous driving and mobile robotics will mean deploying advanced neural networks on a range of sensors to localize, map out, perceive, and navigate in dynamic environments. NVIDIA wants its AGX computer architectures to provide this capability to robots, autonomous vehicles, and other platforms. As AI edge capability increasingly finds its way to mass market, NVIDIA will have to tailor its Jetson to smaller platforms and provide mass-market appeal to consumers. Ironically, as Qualcomm moves to target the commercial and industrial robotics enterprise, NVIDIA is doing the opposite, extending its product line to encompass small developers and consumer robots, as well as the automotive giants and major robotics companies on which it has recently focused.

The newly unveiled Jetson Nano is the latest in NVIDIA’s product line tailored for robotics, following the TX1, TX2, and Xavier. While the Xavier costs up to US$1,000 per unit and the TX2 costs US$299, the Jetson Nano developer kit is priced at US$99, with the production variant costing US$129. The Nano runs the same JetPack software as its siblings, and NVIDIA has demonstrated it processing eight 1080p video streams simultaneously while running eight neural networks.

The Nano is low power compared to its predecessors, but still comes equipped with 4 Gigabytes (GB) of memory and is intended for processing data from high-precision cameras. It is about 30% smaller in form factor than the TX2 and is configured with four ARM Cortex-A57 cores running at 1.43 GHz. The Nano developer kit is compatible with the Robot Operating System (ROS) and can be augmented with the JetBot, a US$100 robot kit that can be programmed by hobbyists, developers, and programmers for early market activity. In part, this move is meant to compete with Google’s Coral and Intel’s UP Squared as an affordable AI development kit, but it also reflects a desire to expand the breadth of NVIDIA’s robotics partnership base.

Meanwhile, Intel has built an extensive portfolio of products that are applicable to robotics, including Mobileye (autonomous vehicles), Nervana, Altera, and Movidius. Movidius, in particular, is a force multiplier for Intel in the robotics space, as it is partnered with DJI, the dominant entity in the consumer and commercial drone industry.

Intel recently unveiled a new experimental SoC targeted at multiple robot types. The device runs on an efficient 37 milliwatts and has two accelerators: one for path planning and the other for motion control, which currently represent two of the key challenges for mobile manipulation. Path planning is hard primarily because it is compute-intensive, so it generally has to be offloaded to the cloud. The latencies this incurs make mass deployments of robots in mission-critical environments difficult, but Intel’s new offering will make them more feasible. Still, the SoC is not a panacea for the challenges faced by robotic swarms: its motion control and path planning competencies do not apply to drones, because navigating three dimensions is much harder than navigating two.
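To see why path planning eats compute, consider the classic A* grid search, whose work grows quickly with map size and resolution. The sketch below is purely illustrative (a textbook algorithm on a toy occupancy grid, not Intel's accelerator design; all names are made up for the example):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid.

    grid: list of rows, 0 = free cell, 1 = obstacle.
    Returns the path as a list of (row, col) cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Priority queue of (f = g + h, g, cell, parent)
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:        # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:             # reconstruct path by walking parents back
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# → [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Even this toy search touches most of the map; on real-world maps with millions of cells, replanned many times per second, the appeal of a dedicated on-chip accelerator over a round trip to the cloud is clear.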

Experimental Compute Architectures Are Proliferating


While Intel, NVIDIA, and Qualcomm are the major entities battling it out in this space, they are in no way monopolizing innovation. Universities and startups are developing some of the most promising architectures for accelerating robotics workloads, and more specialized Application-Specific Integrated Circuits (ASICs) continue to emerge to improve navigation and machine vision capability. The University of Minnesota recently unveiled a wave computer focused on path planning, built on unconventional logic and time-based computing. While it does not solve the challenges of Three-Dimensional (3D) path planning for drones, it is claimed to be 1 million times as energy efficient as the Central Processing Units (CPUs) and Graphics Processing Units (GPUs) provided by the main vendors.

The SLAMmer, created by the University of Michigan, is a SLAM chipset that uses cameras for navigation, an approach known as visual SLAM (vSLAM). vSLAM is, in many ways, a superior sensing solution compared to Light Detection and Ranging (Lidar) or other arrays: it captures more detail, making it better suited to indoor applications, and up-and-coming time-of-flight cameras are mitigating some of the lighting and environmental problems that affect stereo camera solutions. However, vSLAM is also incredibly compute-intensive, requiring about 60 Frames per Second (fps) for a quality image, which works out to roughly 1 gigabit of data per second. The university team’s SLAMmer was able to ingest video data at 80 fps for a mere 240 milliwatts. The purpose-built solution has three main parts: a convolutional neural network optimized for picking out environmental features, an accelerator, and a component that refines the robot’s trajectory across frames.
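The gigabit-per-second figure can be sanity-checked with back-of-the-envelope arithmetic. The frame format below is an assumption for illustration (1080p, 8-bit greyscale pixels), not a detail reported for the SLAMmer itself:

```python
# Rough vSLAM input-bandwidth estimate.
# Assumed (illustrative) frame format: 1920x1080, 8-bit greyscale.
width, height = 1920, 1080
bits_per_pixel = 8
fps = 60

bits_per_second = width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s")  # → 1.00 Gbit/s
```

At 60 fps this lands at roughly 0.995 Gbit/s, consistent with the "about 1 gigabit per second" cited above; color frames or higher bit depths would push it several times higher.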

On an even more experimental level, Georgia Tech has built an ultra-low power ASIC designed for palm-sized robots to develop machine learning capability. Combined with new generations of low-power motors and sensors, the new ASIC, which operates on milliwatts of power, could help intelligent swarm robots operate for hours instead of minutes. To conserve power, the chips use a hybrid digital-analog time-domain processor in which the pulse-width of signals encodes information. The neural network IC accommodates both model-based programming and collaborative reinforcement learning, potentially providing the small robots with greater capabilities for reconnaissance, search-and-rescue, and other missions.

Hardware Innovations Must Complement Compute Architecture Innovations


ABI Research does not want to minimize the innovation going on with GPUs. Robotics hit a milestone in 2005, when the Defense Advanced Research Projects Agency (DARPA) Grand Challenge for autonomous driving was won by Stanford University. The key benefit of GPUs is that they crunch through huge troves of data very quickly, and following this demonstration, the key GPU vendors NVIDIA and AMD accelerated their efforts to deploy their hardware in autonomous vehicles and robots. GPUs also excel at creating high-resolution depth maps and are thus well placed for mission-critical robot applications, such as in outdoor environments.

Both ASICs and GPUs bring different strengths to the robotics environment. While ASICs target specific use cases, similar to how Intel Movidius improves machine vision, GPUs are superior for training and general inference tasks, an area in which NVIDIA is king.

Moving forward, advances in chipsets are key to the future of robotics as they provide a platform for advanced AI applications and massive data analytics. The proliferation of robotics across multiple use cases is creating a market for robotics that demands compute architectures that enable new use cases and intersections with other enabling technologies, including:

  • Machine vision for vSLAM
  • Human-Machine Interface (HMI)
  • Advanced manipulation
  • Natural Language Processing (NLP)
  • Augmented Reality (AR) and Virtual Reality (VR)

In a nutshell, hardware innovations that allow for superior robotics design need to complement any compute innovations. It is also vital that chipset providers work from the ground up with hardware engineers to develop chipsets that will enable robots to far exceed the base limitations of currently available products.