The Role of Gateway and Server OEMs in Edge AI Deployment

By Lian Jye Su | 2Q 2021 | IN-6131

This insight explores the role of gateway and server Original Equipment Manufacturers (OEMs) in edge AI deployment.


HPE Factory and Retail of the Future

NEWS


Hewlett Packard Enterprise’s (HPE) vision for the Factory of the Future and the Retail of the Future has come a long way. Since first announcing these visions in 2017, HPE has refined various aspects of its offerings and aligned them accordingly. This has translated into several commercial successes for the global gateway and server Original Equipment Manufacturer (OEM), such as its deployments with ABB and Seagate.

All these customers point to HPE’s strength as an edge AI system integrator that combines hardware, software, and system design from multiple vendors. Through tight integration of compute, networking, security, and monitoring functions, HPE offers pre-engineered edge AI systems with industrial-grade reliability and security that minimize installation and configuration time. For manufacturing service providers relying on cloud-based enterprise systems, the HPE Edgeline solution enables an edge-to-cloud architecture, connecting AI-enabled smart devices and sensors at the edge with enterprise systems through good data governance practices, an edge analytics platform, a big data framework, and a series of data connectors and Application Programming Interfaces (APIs).
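To make the edge-to-cloud pattern described above concrete — as a hypothetical sketch, not HPE's actual implementation — a gateway might run lightweight analytics on raw sensor readings locally and forward only compact aggregates to an enterprise back end:

```python
import json
import statistics
from dataclasses import dataclass


@dataclass
class SensorReading:
    """One raw measurement captured at the edge."""
    sensor_id: str
    value: float


def aggregate_batch(readings):
    """Lightweight edge analytics: reduce a batch of raw readings
    to compact aggregates, so only summaries travel to the cloud."""
    values = [r.value for r in readings]
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "min": min(values),
        "max": max(values),
    }


def to_cloud_payload(site_id, readings):
    """Package the aggregate as JSON for an enterprise system.
    The payload schema here is invented for illustration only."""
    return json.dumps({"site": site_id, "summary": aggregate_batch(readings)})


# Example: three vibration readings from one machine on a factory floor.
batch = [SensorReading("vib-01", v) for v in (0.2, 0.4, 0.3)]
payload = to_cloud_payload("factory-a", batch)
```

The design choice this illustrates is bandwidth and governance: raw telemetry stays on premises, while the enterprise system receives only governed, summarized data through a defined connector interface.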

Edge ML Operations

IMPACT


As detailed in “Edge AI Gateway and Server as Enabler of Distributed Intelligence” (AN-5333), gateway and edge data server OEMs have a lot to offer in facilitating edge AI. Because manufacturers are heavily focused on on-premises solutions, they need scalable gateways and servers to host their Machine Learning (ML) models and the optimal edge AI chipset, often in extreme production environments. In addition, many manufacturers do not have internal teams of data scientists and ML engineers to handle edge AI development. An OEM like HPE can bring in software partners and AI system integrators to assist with ML Operations, commonly known as MLOps, at the edge.

Aside from hardware, HPE offers the HPE MLOps solution, which covers a series of operationalization and life cycle management processes, such as model building, training, deployment, and monitoring, as well as collaboration, security and control, and hybrid deployment. The HPE MLOps solution works with a wide range of open-source ML and Deep Learning (DL) frameworks, including Keras, MXNet, PyTorch, and TensorFlow, as well as commercial ML applications from ecosystem software partners like Dataiku and H2O.ai. For those needing additional guidance, HPE Pointnext Services, HPE’s consulting and services arm, offers remote support and in-person training to help implement these solutions effectively. This gives manufacturers end-to-end support from HPE, ranging from AI hardware to software and services, lowering the barrier to AI deployment.
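The life cycle stages named above (build, train, deploy, monitor) can be sketched as a toy model registry — a simplified, hypothetical illustration of what MLOps tooling automates, not the actual API of the HPE MLOps solution or any other product:

```python
from dataclasses import dataclass

# Ordered life cycle stages an ML model moves through, mirroring the
# operationalization steps named in the text.
STAGES = ("built", "trained", "deployed", "monitored")


@dataclass
class ModelRecord:
    name: str
    version: int
    stage: str = "built"


class ModelRegistry:
    """Toy registry that tracks model versions and enforces that stages
    are promoted strictly in order (e.g., no deploying an untrained model)."""

    def __init__(self):
        self._models = {}

    def register(self, name):
        # Each registration of the same name creates the next version.
        version = sum(1 for m in self._models.values() if m.name == name) + 1
        record = ModelRecord(name, version)
        self._models[(name, version)] = record
        return record

    def promote(self, name, version):
        record = self._models[(name, version)]
        idx = STAGES.index(record.stage)
        if idx == len(STAGES) - 1:
            raise ValueError(f"{name} v{version} is already at the final stage")
        record.stage = STAGES[idx + 1]
        return record.stage


# Example: walk a hypothetical defect-detection model through its life cycle.
registry = ModelRegistry()
rec = registry.register("defect-detector")
registry.promote("defect-detector", 1)           # built -> trained
stage = registry.promote("defect-detector", 1)   # trained -> deployed
```

Real MLOps platforms add collaboration, access control, and monitoring hooks on top of this basic versioning-and-promotion pattern, which is what makes them valuable to manufacturers without in-house ML engineering teams.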

Growing Needs of Edge-Centric Solutions

RECOMMENDATIONS


OEMs like HPE face challenges from cloud service providers, which have rapidly expanded into the edge AI domain in recent years. In December 2020, Amazon Web Services (AWS) introduced the Panorama Appliance, a hardware device that adds ML-based computer vision to existing Internet Protocol (IP) cameras that were not built to accommodate it. AWS also partners with Basler and several other camera vendors for Lookout for Vision, its vision service for manufacturing. Meanwhile, AWS competitor Microsoft recently launched Azure Percept, an edge AI hardware solution for machine vision that integrates with AI development capabilities in Microsoft Azure Cloud. Google has also recently partnered with Siemens, the world’s leading manufacturing technology provider, to deploy AI on factory floors through Siemens Industrial Edge, Google Cloud Platform, and Anthos, Google’s multicloud management platform.

The buzz generated by the entry of cloud service providers into the edge AI market is legitimate: these providers have strong developer communities and scalable AI development and deployment services that are increasingly targeted at edge AI applications. Nonetheless, ABI Research believes that not every enterprise, especially in the industrial and manufacturing space, will leverage public cloud services. In some cases, manufacturers are extremely protective of their operational data and prefer to keep all of it on premises. In other scenarios, enterprises are bound by stringent data sovereignty laws and privacy requirements, requiring them to adopt private cloud solutions for their edge AI deployments. Hosting AI solutions at the edge also guarantees low latency and gives enterprises full control over the life cycle of their edge ML solutions. Serving these needs is precisely the value proposition of gateway and server OEMs like HPE.

As such, edge AI is set to become a significant revenue driver for OEMs. ABI Research estimates that the edge AI server market will exceed US$500 million in 2021 and grow to over US$2.5 billion by 2025. To achieve this growth, product design must focus on making edge AI deployments as seamless and cost-efficient as possible for technology implementers, and edge AI vendors must align themselves with key open standards that are gaining momentum in the industry, such as the Open Compute Project (OCP) and Open19. Software support is critical, too, as it lowers the barrier to entry. This means OEMs must collaborate with AI chipset vendors, especially as ML workloads become more diverse; continue supporting key AI frameworks; and establish partnerships with software vendors that offer ML workflow acceleration, orchestration and management, security, application development, and analytics.