Decoding the AWS AI Playbook from re:Invent 2020


By Lian Jye Su | 4Q 2020 | IN-6023


A Win for Intel and AWS


At re:Invent 2020, AWS introduced compute instances—or virtual servers—powered by two new ML training chipsets: Gaudi from Habana Labs and AWS's own Trainium. These chipsets will be available by mid-to-late 2021. This inclusion is not only a success story for Intel in its ongoing competition with NVIDIA in the cloud ML chipset market, but also another showcase of AWS's chipset development strength and its desire for tighter vertical integration. This is also the first time a developer can distribute multiple training frameworks across multiple hardware platforms using a single AI engine.

At the moment, the majority of ML workloads on AWS are powered by AI chipsets from NVIDIA and, increasingly, by Inferentia, AWS's own inference chipset. COVID-19 has exposed the weaknesses of a single-vendor strategy: the disruption in the global supply chain has left many chipset vendors struggling to fulfill their orders and has caused ML chipset prices to skyrocket. The inclusion of new AI chipsets brings more choices to ML developers, diversifies supply chain risk, and allows the company to …
