The Exciting Ways that Edge Device Solutions Use ML (+ 5 Examples)

Author: Lian Jye Su

Machine Learning (ML) is being integrated into edge solutions more and more often, as it improves the user experience through lower latency, near-real-time analysis, and stronger privacy and security. To reduce the time to deployment, vendors are stepping up to the plate with services and platforms that streamline these processes. A few examples include IoT edge device management, cloud intelligence, and edge services.

What’s Causing the Demand for Edge ML Solutions?

The heterogeneous nature of edge ML, data privacy regulations, the scarcity of useful big data, and the high cost of development are the main drivers for adopting innovative edge ML solutions. Most of the technologies used to address these edge ML challenges involve data management, data analysis, feature engineering, inference engine generation, visualization, deployment, monitoring, and evaluation.

Edge ML Market Snapshot

Edge ML enablement vendors offer their services via Software-as-a-Service (SaaS) and turnkey solutions. ABI Research forecasts that the global edge ML market will grow at a Compound Annual Growth Rate (CAGR) of 26% between 2022 and 2027, reaching US$5.7 billion by the end of the forecast period. By 2027, SaaS and turnkey solution providers will represent 13% of the total edge ML market, which is dominated by chipset vendors.

Chart: Global edge ML market revenue, 2021 to 2027

Edge ML Solution Example #1: Amazon SageMaker Neo

Amazon Web Services (AWS) offers a platform called SageMaker Neo, which helps developers optimize ML models for edge devices. Whereas developers typically spend a great deal of time tuning models by hand for cloud instances and edge devices, SageMaker Neo automates the process. The user simply chooses an ML model already trained in SageMaker, then picks a target hardware platform. With one click, SageMaker Neo applies the combination of optimizations that squeezes the most performance out of the model on the chosen cloud instance or edge device. To make this all possible, Amazon SageMaker Neo leverages Apache TVM and partner-provided compilers and acceleration libraries.
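For readers who prefer the programmatic route over the console's one-click flow, the sketch below shows roughly what starting a Neo compilation job looks like through the boto3 SDK. The job name, IAM role, S3 paths, input shape, and target device are illustrative placeholders rather than values from this article.

```python
import boto3

# Minimal sketch of launching a SageMaker Neo compilation job via the API.
# The job name, IAM role, S3 paths, input shape, and target device below are
# hypothetical placeholders, not values taken from this article.
sagemaker = boto3.client("sagemaker", region_name="us-east-1")

sagemaker.create_compilation_job(
    CompilationJobName="example-neo-compilation-job",
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    InputConfig={
        "S3Uri": "s3://example-bucket/models/model.tar.gz",  # trained model artifact
        "DataInputConfig": '{"input0": [1, 3, 224, 224]}',   # example input shape
        "Framework": "PYTORCH",
        "FrameworkVersion": "1.8",
    },
    OutputConfig={
        "S3OutputLocation": "s3://example-bucket/compiled/",
        "TargetDevice": "jetson_nano",                       # example edge target
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```

Once the job completes, the compiled artifact lands in the output S3 location and can be deployed to the matching cloud instance or edge device.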

Edge ML Solution Example #2: Microsoft Azure

Just like AWS, Microsoft uses its cloud-based platform, Azure, to provide edge services for customers. Data are collected from Azure IoT Edge devices and processed on Azure Stack Edge appliances, which are designed to enhance on-device ML inference. Microsoft’s solution relies on other Azure platforms like Azure Machine Learning, Azure Container Instances, Azure Kubernetes Service (AKS), and Azure Functions. Using compute acceleration hardware, local ML models can be run against on-premises data to achieve greater reliability.

Edge ML Solution Example #3: Latent AI

Silicon Valley-based Latent AI, unlike AWS and Microsoft Azure, is focused solely on providing solutions for edge devices, frameworks, and Operating Systems (OSs). Latent AI Efficient Inference Platform (LEIP) Recipes optimize an ML model by combining a set of instructions with a set of pre-configured assets, producing accurate executables for tasks like object detection. According to the company’s website, LEIP Recipes can reduce time to market by 10X: what once took an ML engineer three days (ML model formatting) now takes only a few hours to complete. Each Recipe is optimized for power, latency, and memory on specific models and edge devices.

Edge ML Solution Example #4: Blaize’s AI Studio

Blaize’s cloud-based AI Studio is the first open, no-code Artificial Intelligence (AI) platform designed for edge device management and Machine Learning Operations (MLOps) workflows. Particularly interesting is the marketplace feature, which lets developers access resources in a timely fashion instead of spending days chasing them down. Automatically optimized, edge-aware models come in a variety of packages, enabling developers to choose models that are tailor-made for their unique needs. Blaize’s edge services platform allows enterprises to reach Return on Investment (ROI) more quickly, as it significantly cuts down on the time to deployment.

Edge ML Solution Example #5: Edge Impulse

Edge Impulse introduced a proprietary compiler in 2020 called the Edge Optimized Neural (EON) Compiler, which is designed to optimize large ML models for resource-limited edge devices. Through Software Development Kits (SDKs) and libraries that assist with quantization, ML models can reach optimal performance levels while using less memory and storage. Edge Impulse also offers a tool called the EON Tuner, which identifies the ML models best suited to a target device and performs end-to-end optimization.
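Edge Impulse’s tooling is proprietary, but the underlying idea of quantizing a trained model so it fits a resource-limited target can be illustrated with TensorFlow Lite’s post-training quantization. The sketch below is a generic example of that technique, not the EON Compiler itself, and the file paths are placeholders.

```python
import tensorflow as tf

# Generic post-training quantization sketch (not Edge Impulse's EON Compiler):
# convert a trained Keras model into a smaller TFLite flatbuffer for an
# edge target with limited memory and storage. File paths are placeholders.
model = tf.keras.models.load_model("trained_model.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

# The quantized flatbuffer is typically several times smaller than the original
# model and can be bundled into firmware for a microcontroller-class device.
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```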

Another great feature of Edge Impulse’s solution is the ability to simulate hardware performance and dissect the accompanying metrics before a model is actually deployed to an edge device. This allows users to push ML model changes to all of their edge devices whenever they want. The only requirement is that everything is done over the cloud.

Winning Formula

It’s important that edge ML enablement vendors provide a flexible platform with plenty of learning resources. At the same time, supporting interoperability with a broad range of hardware is a key predictor of customer acquisition. Additionally, no-code and low-code solutions are highly desirable, as they bring the time to market down considerably for organizations with talent-diverse teams.

To learn more, download ABI Research’s Edge ML Enablement: Development Platforms, Tools, and Solutions research analysis report. This research is part of the company’s AI & Machine Learning Research Service.