The Launch of NVIDIA Jetson AGX Xavier Industrial Module Signifies the Expansion of Edge AI Use Cases

By Lian Jye Su | 2Q 2021 | IN-6197

This insight breaks down the market drivers behind the launch of NVIDIA’s Jetson AGX Xavier Industrial Module.


Ruggedized GPU-Based Compute System for Extreme Environments


In June 2021, NVIDIA announced the launch of the Jetson AGX Xavier Industrial Module. The ruggedized Artificial Intelligence (AI) compute system is designed for stationary and mobile edge devices that must function in extreme operational environments, including deep learning-based cameras, last-mile delivery robots, inspection drones, and automated storage and retrieval systems. NVIDIA has identified several target sectors, including manufacturing, agriculture, construction, energy, and government. For this launch, NVIDIA is working with its usual partners, including data server Original Equipment Manufacturers (OEMs), industrial and robotics software developers, and device management and system integration partners.

Prior to the launch of the Jetson AGX Xavier Industrial Module, developers that needed a ruggedized GPU solution could only use the Jetson TX2i Industrial Module. The TX2 remains a more than capable platform for mainstream Machine Learning (ML) inference workloads, such as image recognition and anomaly detection, but it is insufficient for the ever-increasing compute requirements of more demanding applications. The TX2 tops out at 1.3 Tera Floating Point Operations per Second (TFLOPS), while the AGX Xavier tops out at 11 TFLOPS. For inferencing, the AGX Xavier can achieve 32 Tera Operations Per Second (TOPS) at 30 watts, demonstrating impressive efficiency. The extra processing power allows end users to run sensor fusion and path planning in mobile systems such as robots and drones, as well as real-time video analytics. The launch of the Xavier Industrial Module is a clear sign of the growing need for higher ML compute at the edge.
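The cited figures can be put side by side with a quick back-of-the-envelope calculation. The sketch below uses only the peak numbers quoted in this insight; real-world throughput varies with workload, numeric precision, and power mode.

```python
# Back-of-the-envelope comparison using the peak figures cited above.
# These are vendor peak numbers, not measured application throughput.
specs = {
    "Jetson TX2i":       {"fp_tflops": 1.3},
    "Jetson AGX Xavier": {"fp_tflops": 11.0, "int_tops": 32, "watts": 30},
}

# Raw floating-point throughput uplift from TX2 to AGX Xavier
uplift = specs["Jetson AGX Xavier"]["fp_tflops"] / specs["Jetson TX2i"]["fp_tflops"]

# Inference efficiency of AGX Xavier at its quoted power budget
xavier = specs["Jetson AGX Xavier"]
efficiency = xavier["int_tops"] / xavier["watts"]  # TOPS per watt

print(f"FP throughput uplift: {uplift:.1f}x")          # roughly 8.5x
print(f"AGX Xavier efficiency: {efficiency:.2f} TOPS/W")
```

That order-of-magnitude uplift is what makes workloads like sensor fusion and multi-stream video analytics feasible on a single edge module.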

Why Edge AI?


The industrial demand for higher compute at the edge is essential to building smarter and safer edge ML systems. More and more enterprises are looking to automate their operations and augment their employees with ML-enabled devices, leading to the introduction of deep learning networks for real-time video analytics; audio, speech, and language processing; route planning; and federated learning for fleet operations. Unlike cloud-based AI systems, these business- and mission-critical systems require high-speed, low-latency communication and processing when working alongside human employees. They must also comply with specific data security and privacy requirements to prevent unauthorized access and control, as well as the misuse of enterprise and personal data.

Through its higher compute capabilities, the AGX Xavier Industrial Module is able to process the inference workloads of larger ML models without constant reliance on cloud computing resources. On-device and on-premises ML processing allows devices and servers to transfer only metadata, instead of raw data, to the cloud for further processing, thus minimizing cybersecurity risks. Furthermore, edge ML allows enterprises to make sense of the mountains of data they collect from their assets, enabling much better business decisions based on daily operations, usage trends, and customer behaviors.
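The metadata-only pattern described above can be sketched in a few lines: the edge device runs inference locally and forwards only compact detection metadata, never the raw frame. All function and field names below are hypothetical stand-ins, not any vendor's API.

```python
# Minimal sketch of on-device inference with metadata-only upload.
# All names here are illustrative, not a real edge-AI SDK.
import json

def run_local_inference(frame_bytes):
    # Placeholder for on-device ML inference (e.g., object detection).
    # A real deployment would invoke the module's inference runtime here.
    return [{"label": "person", "confidence": 0.94, "bbox": [40, 60, 120, 200]}]

def to_metadata(frame_id, detections):
    """Strip the raw frame; keep only what the cloud needs."""
    return json.dumps({"frame_id": frame_id, "detections": detections})

raw_frame = b"\x00" * (1920 * 1080 * 3)   # ~6 MB uncompressed 1080p frame
payload = to_metadata(42, run_local_inference(raw_frame))

print(len(raw_frame), "bytes of raw data stay on-device")
print(len(payload), "bytes of metadata go to the cloud")
```

The raw frame never leaves the device, which is what shrinks both the bandwidth bill and the attack surface.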

NVIDIA's Unique Proposition in a Competitive Landscape


ABI Research estimates that the edge AI chipset market will close in on US$30 billion by 2026. This will undoubtedly be a market with strong competition. Aside from the usual suspects like Intel, HiSilicon, and Xilinx, public cloud vendors and several AI chipset startups, including Blaize, Hailo, Mythic, and Perceive, are also aiming at the high ML compute market segment. Most of these startups position themselves as better alternatives in terms of raw compute power per watt, and they will no doubt play a key role in the democratization of ML in power-conscious, battery-operated devices.

On the other hand, NVIDIA’s greatest strengths are its software support and vendor ecosystem. NVIDIA offers Software Development Kits (SDKs), tools, and libraries for ML engineers across multiple domains, including image classification, object detection and segmentation, speech and language processing, and path planning. Unlike other vendors, the uniformity of NVIDIA’s GPU architecture allows ML developers to train models in the cloud and then deploy them on the same hardware architecture in edge devices using TensorRT. For developers who prefer to build on existing models to simplify development, the recently launched NVIDIA TAO Transfer Learning Toolkit makes it easier to adapt NVIDIA’s pre-trained ML models to specific use cases. Most importantly, as new deep learning networks emerge every month with better accuracy and performance, NVIDIA’s general-purpose GPU architecture can support these emerging models, encouraging enterprises to experiment and further optimize their operations.
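The transfer-learning workflow that tools like the TAO Toolkit automate can be illustrated conceptually: start from a pre-trained model, freeze its feature-extraction layers, and retrain only a small task-specific head on the new data. The classes below are illustrative stand-ins, not NVIDIA APIs.

```python
# Conceptual sketch of transfer learning: freeze the pre-trained
# backbone, train only the new task-specific head.
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

class Model:
    def __init__(self, backbone_layers, head):
        self.backbone = [Layer(n) for n in backbone_layers]
        self.head = Layer(head)

    def freeze_backbone(self):
        # Pre-trained weights stay fixed; only the head adapts
        # to the new use case.
        for layer in self.backbone:
            layer.trainable = False

    def trainable_layers(self):
        return [l.name for l in self.backbone + [self.head] if l.trainable]

model = Model(["conv1", "conv2", "conv3"], head="classifier")
model.freeze_backbone()
print(model.trainable_layers())  # only the new head is trained
```

Because only the small head is trained, far less labeled data and compute are needed than training a network from scratch, which is the appeal for enterprise adopters.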