The Edge AI Inference Market Hots Up as NVIDIA, Qualcomm, and Google Launch New Products


2Q 2019 | IN-5482

Several companies have launched chips to address Artificial Intelligence (AI) inference at the edge, on the device. These product launches point to two distinct trends the industry is currently going through.


The Year 2019 Sees the Launch of Several Chip Platforms for Inference

NEWS


At its annual GTC conference, Nvidia announced a new version of its flagship Jetson robotics board, the Jetson Nano. It debuted at US$99 for the developer kit and US$129 for the production-ready module. Jetson Nano is smaller and has lower power consumption than the TX2, though at some cost in performance: the Nano delivers 472 GFLOPS, whereas the TX2 offers a potential 1 TFLOPS. Crucially, Jetson Nano is compatible with Nvidia’s robotics development software JetPack and supports the NVIDIA CUDA Toolkit 10.0 and libraries such as cuDNN 7.3 and TensorRT 5. The Software Development Kit (SDK) also allows native installation of popular open-source Machine Learning (ML) frameworks such as TensorFlow, PyTorch, Caffe, Keras, and MXNet, along with frameworks for computer vision and robotics development like OpenCV and ROS.
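
In practice, this means a model built in one of those frameworks can run on the Nano’s GPU through the standard CUDA-backed APIs. A minimal sketch using PyTorch (assuming the JetPack-compatible PyTorch build is installed; resnet18 is a stand-in model whose weights download on first run):

```python
import torch
import torchvision.models as models

# JetPack ships CUDA, cuDNN, and TensorRT, so a stock model can run on
# the Nano's GPU; fall back to CPU if CUDA is not available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet18(pretrained=True).eval().to(device)  # stand-in model

x = torch.randn(1, 3, 224, 224, device=device)  # dummy camera frame
with torch.no_grad():
    scores = model(x)
print(scores.argmax(dim=1))  # predicted class index
```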

The Jetson Nano announcement came shortly after Qualcomm launched RB3, a robotics system based on its Snapdragon 845 chip. Both Jetson Nano and RB3 have been designed with robotics applications in mind. Qualcomm is selling RB3 developer kits, which include a custom camera system, for US$449. RB3 gives developers access to Qualcomm’s Artificial Intelligence (AI) Engine but lacks software equivalent to Nano’s JetPack. The AI Engine allows developers to deploy AI models across the heterogeneous chip architecture of the 845, which includes a DSP, GPU, and CPU, but concentrates most of its deep learning processing on the DSP. Qualcomm will need to build or support more software tools and capabilities to encourage developers to adopt RB3.
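
To illustrate the heterogeneous approach conceptually, the sketch below shows the fallback pattern such an engine applies: prefer the DSP, then the GPU, then the CPU. This is a deliberately simplified, hypothetical illustration; the backend functions are stubs, not Qualcomm APIs, and Qualcomm’s actual SDK exposes this through its own interface:

```python
# Hypothetical sketch of heterogeneous dispatch. None of these functions
# are real Qualcomm APIs; they only illustrate the fallback ordering.

def run_on_dsp(model, inputs):
    raise RuntimeError("DSP runtime unavailable")  # stub: pretend the DSP is busy

def run_on_gpu(model, inputs):
    return [x * 2 for x in inputs]  # stub: stand-in for a real GPU kernel

def run_on_cpu(model, inputs):
    return [x * 2 for x in inputs]  # stub: reference CPU path

def run_inference(model, inputs):
    """Try each compute unit in order of deep learning efficiency."""
    for backend in (run_on_dsp, run_on_gpu, run_on_cpu):
        try:
            return backend(model, inputs)
        except RuntimeError:
            continue
    raise RuntimeError("no usable compute unit")

print(run_inference("dummy-model", [1, 2, 3]))  # falls back to the GPU stub
```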

The other significant edge AI platform launched so far this year came from Google. At CES 2019, Google showed off its Edge TPU chipset platform for the first time, before launching its first edge AI board, Coral, with a developer kit costing US$149. Coral includes an Edge TPU and relies on Google’s own software stack, meaning that models must be converted to TensorFlow Lite rather than deployed as standard TensorFlow models. The table below details the power consumption and form factors of the chips discussed above.

[Table: chip vendors, form factor, and normal power consumption]
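
For example, running a model on Coral’s Edge TPU means loading a TensorFlow Lite model through the Edge TPU delegate. A minimal sketch, assuming the Edge TPU runtime (libedgetpu) is installed and model_edgetpu.tflite is a hypothetical model already compiled for the Edge TPU:

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Attach the Edge TPU delegate so the ASIC, not the CPU, runs the network.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # hypothetical compiled model
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Edge TPU models are quantized, so inputs are typically uint8.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.uint8))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```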

The Big Opportunity Is Getting Smaller

IMPACT


Two elements are striking about these announcements collectively. First, they indicate that the industry sees a significant opportunity in bringing more processing power to the edge. Several promising business applications, such as vehicle and robotics automation, surveillance monitoring, and predictive maintenance, are driving a shift of compute power to the edge so that these applications do not have to run in the cloud. A combination of connectivity costs, latency issues, data privacy, and security concerns is making some cloud-based deployments unworkable, causing companies building AI-related applications to look for hardware that allows AI processing to take place at the edge, on the device. ABI Research has forecast that shipments of devices with dedicated edge AI processing hardware will grow to 234 million devices by 2023.

Second, the announcements are a clear indication of the direction in which edge AI platforms are scaling. Major chipset and AI companies like Nvidia, Qualcomm, and Google are focusing on delivering AI inference capabilities in increasingly small and efficient packages in terms of form factor, cost, and energy consumption. Instead of expanding the capabilities of Jetson by adding more processing power and memory, as it had done in previous iterations, Nvidia chose to focus on making a Jetson device that costs less and draws less power. This indicates that Nvidia sees demand from its existing customer base to scale down these aspects of Jetson without losing any of the functionality. In the past, Jetson customers privately reported to ABI Research that they would like to see the Jetson platform scaled down. One reported that the Jetson TX series was very power-hungry relative to the compute it was actually using on the platform, and that it would be very interested if a more power-optimized system came to market. Jetson Nano’s normal power consumption is around 5 watts, 2.5 watts less than the Jetson TX2’s.

Google’s Coral platform is a good indicator of this trend toward scaled-down platforms. Instead of trying to emulate the Jetson TX2, Coral is far closer to Jetson Nano in design, suggesting that Google and NXP see the same opportunity to provide the industry with a lower-power, lower-cost edge AI platform. Coral resembles Jetson Nano in that it boils down to an AI accelerator (a GPU in the case of Jetson, an ASIC in the case of Coral) sitting on top of an ARM-based quad-core processor. Although Coral has 8 GB of storage compared to Jetson Nano’s 4 GB, the devices are very similar in price. Two trends are driving companies to deliver AI inference accelerator platforms at a smaller scale:

  • Software Tools for Neural Network Pruning: At a technical level, software tools that allow developers to shrink their AI models have improved dramatically (a minimal pruning sketch follows this list). As a result, the deep learning models used for inference are getting smaller and require less computational power at the edge. Lower compute demands mean that silicon companies can bring platforms to market that have less computational power and still meet industry requirements for performance.
  • Market Demand: Previous edge AI inference systems like the TX2 were physically too big for many applications, as well as too expensive, putting the cost of implementation out of reach for many product designers and nearly all hobbyists.
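
As a concrete illustration of pruning, here is a minimal sketch of magnitude pruning using PyTorch’s pruning utilities (torch.nn.utils.prune), with a single linear layer standing in for a full trained model:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)  # stand-in for one layer of a trained model

# Zero out the 80% of weights with the smallest absolute value, the
# simplest form of magnitude pruning.
prune.l1_unstructured(layer, name="weight", amount=0.8)
sparsity = float((layer.weight == 0).sum()) / layer.weight.numel()
print(f"weight sparsity: {sparsity:.0%}")

# Fold the pruning mask into the weight tensor permanently; the smaller
# effective model is what gets exported for edge inference.
prune.remove(layer, "weight")
```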

Who Is Going to Win?

RECOMMENDATIONS


Nvidia has had a first-mover advantage and leads the AI inference market with its Jetson TX1 and TX2 platforms; adding Nano now gives it the most comprehensive portfolio of edge AI systems of any silicon vendor. Nvidia also leads the competition in software tools and compatibility with multiple open-source AI frameworks. Although Nvidia is ahead in both respects, there is no guarantee that this lead can be sustained, particularly given that there is no inherent advantage to the GPU architecture over other accelerator approaches. Other companies have now recognized the potential opportunity of edge AI and are offering solutions across a range of architectures, spanning GPU, ASIC, FPGA, and heterogeneous designs. The table below lists the significant players bringing device edge AI inference platforms to market, by architecture. Note that this is not an exhaustive list; many more companies plan to bring AI inference products to market but have not yet explicitly revealed whether they will address the device edge AI inference market.

[Table: chip vendors, architecture, details, and availability]

It is worth noting that other established companies and startups, such as CEVA, ARM, Kneron, and Cambricon, are developing technologies that address edge AI inference, but many of their offerings remain in IP or SoC form and require partnerships with existing chipset manufacturers, as with Cambricon and Huawei. In addition, it is unlikely that any one company, even Nvidia, will take the lion’s share of the edge AI inference market as it matures. Different edge AI applications will have varying compute requirements, which will be difficult for one company to serve exclusively, particularly given that most companies, like Nvidia, tend to be wedded to a single processor architecture. What could accelerate the uptake of particular platforms are the supporting software tools each platform is compatible with, especially if they are exclusive to that vendor. For instance, Intel’s OpenVINO, a popular software toolkit for developing machine vision models, is driving many developers toward Intel’s Movidius platform, one of the only hardware platforms that supports OpenVINO at the edge.
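
To make that concrete, targeting a Movidius VPU through OpenVINO comes down to a device choice at network-load time. A minimal sketch using OpenVINO’s Python Inference Engine API (names reflect the 2019-era releases and may differ in later versions; model.xml and model.bin stand for a hypothetical model already converted to OpenVINO’s IR format):

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Load a model in OpenVINO's Intermediate Representation (IR) format.
net = ie.read_network(model="model.xml", weights="model.bin")  # hypothetical files

# "MYRIAD" targets a Movidius VPU; swapping in "CPU" or "GPU" retargets
# the same model without other code changes.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_blob = next(iter(net.inputs))
dummy = np.zeros(net.inputs[input_blob].shape, dtype=np.float32)  # dummy frame
result = exec_net.infer(inputs={input_blob: dummy})
print(list(result.keys()))  # names of the model's output blobs
```

That single device_name parameter is a large part of OpenVINO’s pull: developers can prototype on a CPU and retarget the Movidius hardware at deployment with no model changes.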