GTC '22: Full Stack, Layered Technology Will Democratize AI

This is part 1 of a series of blogs covering the NVIDIA GTC '22 event. You can read part 2 here and part 3 here.

Late March 2022 saw the NVIDIA GTC event delivered again to a virtual audience, but the lack of physical interaction did not mean that NVIDIA held back, with big announcements made in all product categories. Two converging themes dominated the event: first, that NVIDIA was now a full-stack technology company offering total solutions at every layer; and second, that data centers were evolving into AI factories that generate actionable intelligence, adding enormous value to the enterprise through data analysis. This evolution is being facilitated by NVIDIA products across the board.

The democratization of high-performance computing and high-end deep learning workloads has been steadily gathering momentum as heterogeneous computing moves these workloads onto systems that are more capable, efficient, and cost-effective. This trend brings the benefits and value of AI to a much wider enterprise audience. Many industry sectors can be credited with having contributed to this democratization process, from the OEMs and hyperscalers to the scientific teams working on algorithmic modeling, and everyone in between. In adopting a full-stack approach to the next generation of computing, NVIDIA is cementing its place as a major contributor to this democratization process. In short, the full-stack approach enables very closely coupled component-level integration and optimization of hardware and software, generating efficiency gains that result in increased productivity and reduced operational costs. These benefits are transferred directly and indirectly to enterprise consumers of the stack in multiple ways, including:

  • Robust Solutions – Controlling the full stack ensures that all components integrate fully, with this consideration having been designed into every component in the stack, from the hardware layer all the way up to the applications that run on them.
  • Increased Productivity – Full-stack optimization minimizes processor cycles wasted on unnecessary code transforms, decreasing the load on all components and increasing overall performance and productivity. This methodology is applied through the whole stack, including the way applications are coded: it takes the heterogeneous compute model, which generates efficiency and productivity by using the optimal resource for the task at hand, and applies it to all layers in the stack.
  • Constant R&D and Innovation – Maintaining a common strategy across all product lines will streamline the innovation process, reduce unproductive cycles, and ensure optimizations are consistent with innovation strategy. These innovations will result in faster and less expensive advances in technology while furthering component integration.
  • Improved Compatibility Between Components – Forward and backward compatibility between components will improve as a result of integrated system design. Intellectual property that gives a competitive advantage can be more tightly integrated through ongoing collaborative engineering efforts on current and future products.
  • Certified and Assured Systems Offered – The NVIDIA-Certified systems program defines rigorous architectural and testing criteria for OEM platforms that integrate NVIDIA GPU and networking components. The resulting systems enable enterprises to confidently deploy platforms optimized for performance, scalability, and security.
  • Rapid AI Skill Development – LaunchPad, an AI lab environment accessible from anywhere, enables enterprises to develop the foundational skills to build and deploy critical AI use cases, and then purchase the same systems directly through the reseller network. This platform accelerates the detailed architectural understanding of a solution that is often required before committing to a technology investment.
  • Operational Productivity Increases – Support teams will work in a harmonized environment using management tools that are common across the whole environment, simplifying overall fleet management, and avoiding having to learn operational parameters for multiple disparate systems.
  • Operational Costs Reduced – The tight integration of components and the ability to reduce inefficiencies through the code right down to the hardware components, coupled with the energy-efficient design of those components will reduce operational costs.
  • Developer Productivity Increased – Developing in a harmonized ecosystem with a high level of hardware abstraction will increase developer productivity and reduce the expensive burden on developers of interacting with components at a very low level. The software stack can be highly tuned to work with the hardware due to the tight coupling and integration of the components. Familiarity with the software ecosystem, code transportability between layers, and code reuse will further increase productivity.

NVIDIA’s technology stack is constructed over four layers:

  • At the top is the Applications Layer. These are the applications that technology consumers interact with most heavily, examples being Riva for speech AI, or Clara for healthcare applications.
  • Sitting directly beneath the applications layer, and very tightly coupled to it is the Platform Layer. NVIDIA currently has four platforms:
    • NVIDIA HPC
    • NVIDIA AI
    • NVIDIA RTX
    • NVIDIA Omniverse
  • The third layer down is the System Software Layer, consisting of collections of domain-specific libraries, tools, and technologies, an example being CUDA-X AI, an AI-focused set of tools built on top of CUDA.
  • The bottom layer is the Hardware Layer, which includes NVIDIA-built appliances and components, such as CPU and GPU.
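The four layers above can be sketched as a simple lookup table. This is purely illustrative: the layer names and example technologies come from the list above, and the helper function is a hypothetical convenience, not an NVIDIA API.

```python
from typing import Optional

# Illustrative only: a minimal map of NVIDIA's four-layer technology stack,
# populated with the example technologies named in the list above.
NVIDIA_STACK = {
    "Applications": ["Riva (speech AI)", "Clara (healthcare)"],
    "Platform": ["NVIDIA HPC", "NVIDIA AI", "NVIDIA RTX", "NVIDIA Omniverse"],
    "System Software": ["CUDA-X AI (built on CUDA)"],
    "Hardware": ["CPU", "GPU"],
}

def layer_of(technology: str) -> Optional[str]:
    """Return the stack layer whose examples mention the given technology."""
    for layer, examples in NVIDIA_STACK.items():
        if any(technology in example for example in examples):
            return layer
    return None

print(layer_of("NVIDIA Omniverse"))  # Platform
print(layer_of("CUDA-X"))            # System Software
```

The point of the sketch is simply that each named product sits at exactly one layer, with the Platform Layer tightly coupled to the applications above it.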

Whilst this GTC event featured announcements across each layer of NVIDIA's technology stack, the emphasis certainly fell on its hardware portfolio.

The table below summarizes the major announcements made by layer:

Table 1: NVIDIA GTC March 2022 Major Announcements by Technology Layer

Application Layer
  • Merlin 1.0 – AI framework for hyperscale recommender systems.
  • AI Enterprise 2.0 – Expands support to GPU-accelerated bare-metal systems, public cloud, and CPU-only servers.
  • Riva 2.0 – Customizable, world-class speech AI.
  • Jetson AGX Orin Developer Kit availability – To advance the fields of robotics and edge AI, customers can leverage the full CUDA-X computing stack, the JetPack SDK, and pre-trained models from NVIDIA NGC.

Platform Layer
  • NVIDIA DRIVE – BYD and Lucid Motors adopt NVIDIA DRIVE for next-generation EV fleets.
  • NVIDIA AI Enterprise 2.0 – Optimized and supported by NVIDIA; the AI Enterprise software stack will be supported across every data center and cloud platform in the NVIDIA portfolio. Major updates released.
  • NVIDIA Omniverse Cloud – Suite of cloud services bringing design collaboration to any designer, any device, anywhere.

System Software Layer
  • Over 60 CUDA-X libraries updated – NVIDIA ecosystems are faster and more integrated, increasing productivity as well as capabilities.

Hardware Layer
  • OVX – Infrastructure for digital twins: a data-center-scale Omniverse computing system for industrial digital twins. OVX runs Omniverse digital twins for large-scale simulations with multiple autonomous systems operating in the same space-time.
  • NVIDIA Spectrum-4 Ethernet switch announced – 51.2 Terabits/second; together with the BlueField-3 DPU and ConnectX-7 SmartNIC, it will be the first end-to-end 400 Gigabits/second networking platform.
  • NVIDIA H100 (Hopper) announced – The next-generation engine for AI infrastructure: an 80-billion-transistor chip designed to scale up and scale out, delivering performance, scalability, and security for every data center.
  • NVIDIA H100 CNX announced – Converged accelerator that connects the network directly to the H100 through the NVIDIA ConnectX-7 SmartNIC networking chip.
  • NVLink-C2C announced and opened to partners – Extends NVLink to chip-level integrations, allowing partners to create semi-custom chips and systems that leverage NVIDIA’s platforms and ecosystems.
  • Grace CPU Superchip announced – Two Grace CPUs connected over a 900 GB/second chip-to-chip interconnect to create a 144-core CPU; the third pillar of NVIDIA’s three-chip data center strategy.
  • NVLink Switch – Purpose-built network that scales up to 256 GPUs.
  • NVIDIA DGX H100 – Advanced enterprise AI infrastructure; the new DGX SuperPOD delivers 1 exaflop.

Source: ABI Research

This table also demonstrates how strong the hardware message was at this GTC event, and this comes as no surprise: with these announcements, NVIDIA cements the full-stack technology status it has been driving towards with its recent corporate strategy. NVIDIA believes that by offering the full-stack experience, it can help enterprises navigate a path through an AI and ML landscape that, in its current state, is proving challenging to commit to.

Part 2: NVIDIA’s Holistic Approach Marries Technology Components to Create Powerful Union
Part 3: Full Stack Approach Accelerates Democratization of Technology
