Make Distributed Intelligence Deployment Scalable

4Q 2020 | IN-5930

Traditionally, the centralization of Artificial Intelligence (AI) workloads in the cloud has brought the benefits of flexibility and scalability. However, the industry has witnessed a shift in the AI paradigm: edge AI brings task automation and augmentation down to the device and sensor level across various sectors. Making these deployments as scalable and cost-effective as possible is one of the biggest challenges that all stakeholders confront. This ABI Insight discusses the industry's three responses to the challenge: partner with an ecosystem that has a scalable business model, create an open standard for edge AI hardware, and leverage open source projects.

The Vision of Scalable Distributed Intelligence Deployment

NEWS


Scaling Artificial Intelligence (AI) has always been a challenge due to its computational cost. For example, GPT-3, widely viewed as one of the most advanced Natural Language Processing (NLP) models, is estimated to have required its developer, OpenAI, to dedicate more than 350 GB of memory and over US$12 million in training costs. As such, most commercially viable and successful AI models are generally restricted to those deployed by companies that own hyperscale data centers, such as Amazon, Apple, Google, Facebook, and Microsoft. Hyperscale data centers provide a scalable architecture for training and rolling out these complex AI models, maintaining cost flexibility and ease of upgrade through centralized pools of resources.
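
The 350 GB figure is consistent with a simple back-of-the-envelope calculation: GPT-3 has roughly 175 billion parameters, and storing each at half precision takes 2 bytes. A minimal sketch of that arithmetic follows; the 2-bytes-per-parameter assumption is illustrative, and training additionally requires activations and optimizer state on top of the raw weights:

```python
# Back-of-the-envelope memory estimate for GPT-3's weights.
# Assumption: parameters stored in FP16 (2 bytes each); activations,
# gradients, and optimizer state would add substantially more.
PARAMS = 175e9          # GPT-3 parameter count
BYTES_PER_PARAM = 2     # FP16 precision (assumed)

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB")  # ~350 GB
```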

However, edge AI paints a very different picture. Most private enterprises have custom AI use cases and strong preferences in their selection of hardware and software. These solutions are generally heavily customized to host proprietary AI for targeted use cases. In addition, enterprises often opt for edge AI gateways and servers in micro data centers to host their AI solutions. Needless to say, a micro data center cannot compete with a regional or hyperscale data center in terms of economies of scale and resource efficiency. Because private enterprises are so numerous, this has led to massive fragmentation of edge AI hardware, making the vision of scalable distributed intelligence a pipe dream.

The Industry Responded in Three Ways

IMPACT


Based on current market developments, there are three ways to scale distributed intelligence. The first, and most obvious, strategy is to partner with an ecosystem that already has a mature, scalable business model. In 2019, public cloud vendors took note of the rise of 5G, the global standard for cellular networks, and its increasing deployment of edge computing capabilities. Rather than rolling out their own edge infrastructure, public cloud vendors have entered numerous collaborations and partnerships with communication service providers. For example, Amazon Web Services (AWS) launched Wavelength, which gives developers the ability to build applications that serve end users with single-digit millisecond latencies over the 5G network. Currently, AWS Wavelength is being piloted by select customers on Verizon’s 5G Edge, as well as with other leading Communications Service Providers (CSPs), including KDDI, SK Telecom, Telefonica, and Vodafone. Enterprises that take this approach benefit not only from the edge computing infrastructure, but also from the leading AI frameworks, development platforms, and toolkits developed by AWS, such as MXNet and SageMaker.
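
For a sense of what this looks like in practice, Wavelength Zones are exposed through the standard EC2 APIs. The sketch below, using boto3, opts an account into a Wavelength Zone group and lists the zones that become available; the group name shown is an example, and actual names depend on the carrier partner and parent AWS Region:

```python
import boto3

# Sketch: discovering and opting in to AWS Wavelength Zones so that
# EC2 instances (and the AI stack on top of them) can run at the 5G edge.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Wavelength Zones are opt-in; enable the zone group once per account.
# "us-east-1-wl1" is an example group name, not a guaranteed value.
ec2.modify_availability_zone_group(
    GroupName="us-east-1-wl1",
    OptInStatus="opted-in",
)

# List the Wavelength Zones now visible to the account. Workloads are
# then launched into subnets created in one of these zones, placing
# inference a single network hop away from 5G users.
zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["wavelength-zone"]}],
)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])
```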

The second strategy is to create an open standard for edge AI hardware. Open19, an open standards organization, wants to accelerate edge AI adoption and deployment through a unified form factor: the 19-inch server rack. The standard 19-inch rack design helps promote scale among end users and lowers both the cost and difficulty of deployment, while letting vendors retain their points of differentiation. The end goals are lower cost per rack, lower cost per server, optimized power utilization, and, eventually, an open standard that the AI industry can contribute to and participate in. Key players supporting Open19 initiatives include Flex, Celestica, Amphenol, ASRock Rack, Delta Electronics, Inspur, Molex, and Wiwynn. They offer a broad range of solutions, from servers and switches to racks, power supply units, and cables.

The final strategy is to leverage existing open source projects. AI chipset market leaders NVIDIA and Intel have long leveraged the advantages of the open source community, building up communities of enthusiasts and professionals who create AI solutions based on their chipsets. A good open source community offers strong support and resources, and helps develop and maintain AI frameworks and models, making them easily accessible and quick to deploy with a low barrier to entry. Recently, QuickLogic took note of this approach and went all-in on enabling open source solutions. The company's initial open source development tools, developed by Antmicro in collaboration with QuickLogic and Google, feature SymbiFlow, an open source Field-Programmable Gate Array (FPGA) design flow; Renode, an open source simulation framework for multi-node systems; and the open source Zephyr Real-Time Operating System (RTOS). This allows developers to use these open source tools on QuickLogic’s microcontrollers, FPGAs, and embedded FPGA technology for distributed intelligence applications.
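
As a concrete, if simplified, illustration of why open source lowers the barrier to entry, the snippet below uses TensorFlow's open source tooling to convert a trained model into a quantized TensorFlow Lite artifact suitable for edge devices. The "saved_model/" path is a placeholder, and hardware-specific steps (such as QuickLogic's SymbiFlow or Zephyr tooling) would sit further down the stack:

```python
import tensorflow as tf

# Sketch: packaging a trained model for edge deployment with open
# source tooling (TensorFlow Lite). "saved_model/" is a placeholder
# for any trained TensorFlow SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")

# Post-training quantization shrinks the model and speeds up inference
# on resource-constrained edge hardware.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```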

This is likely the most costly strategy for enterprises, as it requires them to build their own AI teams and customize the open source project to fit their goals. It also takes effort, coordination, and time for those teams to become familiar with the project and customize it where necessary. However, this process is much easier if enterprises select an open source project with strong governance. A project backed by strong governance will have the resources and foresight to support new architectures quickly and effectively, while remaining reliable in terms of training speed, simplicity, ease of use, and bug fixes.

In Search of the Winning Strategy

RECOMMENDATIONS


Despite all these challenges, it would be terribly unwise for enterprises to forgo the benefits of distributed intelligence and limit themselves to hyperscale cloud services. As detailed in the ABI Insight The Future of Distributed Intelligence with 5G and AI (IN-5885), distributed intelligence through edge AI meets stringent latency requirements, reduces security concerns, and minimizes reliance on cloud-based AI operations. Deployed at scale, it will create new AI use cases, allowing enterprises to differentiate themselves from the cloud AI giants and bringing AI into domains underserved by cloud AI due to regulatory and infrastructure limitations.

Given the three strategies above, public cloud vendors appear to have found the right combination for offering distributed intelligence solutions to the market. Through partnerships with CSPs, public cloud vendors can leverage both hyperscale data centers and telco edge servers to help their existing customers deploy distributed intelligence. Scalable costs and infrastructure are very appealing to Small and Medium-sized Enterprises (SMEs) and solution providers.

Having said that, ABI Research believes the strategy for achieving scalable edge AI will be industry specific, since different industries have their own standards, requirements, and end-user expectations. This means the push for lower deployment costs through standardized hardware and open source projects still has a big role to play, particularly for larger solution providers. These enterprises are bound to experience longer lead times when deploying distributed intelligence at scale if they opt for the latter two options, but they have the revenue base and human resources to do so, and very much prefer full control over their distributed intelligence infrastructure to further differentiate themselves.