Make Distributed Intelligence Deployment Scalable

By Lian Jye Su | 4Q 2020 | IN-5930

 

The Vision of Scalable Distributed Intelligence Deployment

NEWS


Scaling Artificial Intelligence (AI) has always been a challenge due to its computational cost. For example, GPT-3, widely viewed as one of the most advanced Natural Language Processing (NLP) models, is estimated to have required its developer, OpenAI, to dedicate more than 350 GB of memory and over US$12 million in training costs. As such, most commercially viable and successful AI models are generally restricted to those deployed by companies that own hyperscale data centers, such as Amazon, Apple, Google, Facebook, and Microsoft. Hyperscale data centers provide a scalable architecture for the training and rollout of these complex AI models, maintaining cost flexibility and ease of upgrade through centralized pools of resources.

However, edge AI paints a very different picture. Most private enterprises have custom AI use cases and strong preferences in their selection of hardware and software. These solutions...
