Making Machine Learning More Viable: Bridging the Gap between Investment and Return on Investment

By David Lobina | 3Q 2022 | IN-6681

Vendors often fail to obtain a return on their investment in applications enabled by Machine Learning (ML). This situation can be mitigated by targeting clear use cases in specific scenarios, whether through hardware or software solutions.


Machine Learning Is Not Low-Hanging Fruit

NEWS


Everyone is interested in Artificial Intelligence (AI), particularly in the private sector, where Machine Learning (ML) reigns supreme, but not every company needs ML in its operations. Put another way, most companies invest in ML in one form or another, but few see clear, tangible results. Indeed, according to figures from Intel, around 80% of its clients invest in ML, but only about 20% see a sizable Return on Investment (ROI). There are various reasons for this state of affairs, from the hype that often surrounds new technologies to the all-too-common unpreparedness of adopters: often, a company is interested in employing ML without a clear idea of how to use it, or even without the right staff to assess and implement ML models. Technology can certainly boggle the mind, but it can also untangle confusion. One way ML technology can help bridge the gulf between investment and ROI is by showcasing specific use cases that vendors can deploy, either through software or through hardware.

Hardware and Software Solutions

IMPACT


Enter GrAI Matter Labs (GML) and Wallaroo, two examples of hardware and software solutions, respectively, that address the Big ML Investment Conundrum. GML has recently entered the so-called “infotainment” space with its neuromorphic inference processor, the GrAI VIP System-on-Chip (SoC), a low-power, low-latency processor. At IFA 2022 in Berlin this month, GML announced a partnership with StreamUnlimited, a developer of connected audio and Internet of Things (IoT) solutions, and the two companies jointly demonstrated real-time “source separation” on the GrAI VIP chip for both music and speech. In the case of music, this involves separating different components of the input, say, subtracting the vocals from the instrumental in a song, or the other way around. In the case of other media, such as speech, say, a conversation in a noisy setting or a talk at a busy conference, it involves separating the content of interest from all extraneous sounds; for instance, isolating a speaker’s voice from the rest of the aural record. These are but a small sample of the many use cases to which source separation can be applied, in real time and on portable, edge ML processors of the kind GML is commercializing. This bodes well for ABI Research’s forecast that ambient sensing and audio signal processing are the two applications of Edge AI and TinyML that will dominate the market in the next five years, as explained in the recent report TinyML: A Market Update (AN-5636).
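To make the idea concrete, the sketch below illustrates source separation via time-frequency masking. It is a toy illustration, not GML’s or StreamUnlimited’s actual method: it uses synthetic sine waves as stand-ins for “music” and “vocals,” and an ideal binary mask, which assumes access to reference sources where a real system would rely on a trained model, to assign each time-frequency bin of the mixture to one source.

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Naive short-time Fourier transform with a Hann window."""
    w = np.hanning(win)
    return np.array([np.fft.rfft(w * x[i:i + win])
                     for i in range(0, len(x) - win + 1, hop)])

def separate(mix, ref_a, ref_b, win=256, hop=128):
    """Assign each time-frequency bin of the mixture to whichever
    reference source dominates it (an 'ideal binary mask'). The
    reference spectra stand in for a trained model's estimates."""
    A = stft(ref_a, win, hop)
    B = stft(ref_b, win, hop)
    M = stft(mix, win, hop)
    mask = np.abs(A) > np.abs(B)   # bins where source A dominates
    return M * mask, M * ~mask     # estimates for sources A and B

# Synthetic stand-ins: a low tone for "music," a high tone for "vocals."
t = np.arange(4096) / 16000.0
music = np.sin(2 * np.pi * 220 * t)
vocals = 0.8 * np.sin(2 * np.pi * 2000 * t)
est_music, est_vocals = separate(music + vocals, music, vocals)
```

Because the two masks partition the mixture’s spectrogram, the estimates sum back to the original mixture exactly; a production system would instead learn soft masks from data and run them on low-power hardware such as an edge inference chip.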

Wallaroo, for its part, is focused on operationalizing ML with a software solution, a practice often termed Machine Learning Operations (MLOps). The overall ML modeling cycle typically involves data preparation, model training, and model deployment (plus model monitoring); the latter stages fall under the purview of MLOps and can themselves be decomposed into steps such as model management, model observation, and model optimization. Wallaroo calls this stage the last mile of the ML cycle, and its software solution, a platform that aims to integrate with any kind of data and ML ecosystem, is ultimately meant to deliver actionable ML. Wallaroo, too, is squarely focused on bridging a gap, this time between model development and model production (a commonly cited figure is that only 15% of ML models ever make it to production), and here, too, there is much emphasis on use cases. A particularly interesting one involves dynamic pricing in e-commerce, a process that returns the most attractive price for customers and requires running many model iterations, analyzing them, and deploying the winners through a streamlined process. Crucially, Wallaroo’s approach to the last mile of ML generalizes to other use cases, from connected devices and real-time computer vision to on-premises Natural Language Processing (NLP), and its laser-like focus on the very last steps of MLOps, combined with its attention to use cases, constitutes a unique selling point.
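The iterate-deploy-monitor loop behind a use case like dynamic pricing can be sketched in a few lines. The code below is a hypothetical, simplified illustration of a last-mile champion/challenger loop, not Wallaroo’s platform or API: two toy pricing models score the same traffic in shadow mode, a monitor accumulates per-model revenue, and the better performer is returned for promotion to production.

```python
import random

def champion(demand, cost):
    """Incumbent pricing model: a fixed 40% margin over cost."""
    return cost * 1.4

def challenger(demand, cost):
    """Candidate model: margin scales with observed demand."""
    return cost * (1.2 + 0.4 * min(demand, 1.0))

def shadow_deploy(models, events, monitor):
    """Minimal 'last mile' loop: every model prices every event in
    shadow mode, the monitor accumulates per-model revenue, and the
    top performer is returned for promotion to production."""
    for demand, cost in events:
        for name, model in models.items():
            price = model(demand, cost)
            # Toy purchase behavior: lower prices sell more often.
            bought = random.random() < demand * cost / price
            monitor[name] += price if bought else 0.0
    return max(monitor, key=monitor.get)

random.seed(0)
events = [(random.random(), 10.0) for _ in range(5000)]
monitor = {"champion": 0.0, "challenger": 0.0}
winner = shadow_deploy({"champion": champion, "challenger": challenger},
                       events, monitor)
```

The point of the sketch is the shape of the loop, not the pricing rules themselves: model management (the registry of named models), observation (the revenue monitor), and optimization (promoting the winner) are exactly the last-mile steps the paragraph above describes.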

Toward More Viable ML

RECOMMENDATIONS


ML vendors need to make clear what their models can achieve, what uses they can be put to, and whether interested parties actually need them. A significant amount of resources is wasted in ML cycles, not least the energy required to train models, and this should give vendors pause before they hurry to train and run models. Companies interested in investing in ML, for their part, need to understand what ML can provide for their operations, and they should establish a setup within their organizations so that they have people who can assess what ML is, and is not, able to do. One way to bridge the gap between investment and ROI, and, complementarily, between ML vendors and the companies that wish to invest in ML, is to leverage hardware and software solutions for specific use cases and scenarios. ABI Research believes that the examples of GML and Wallaroo illustrate what ML can achieve when properly directed and when use cases are properly targeted, and we expect to see more developments along these lines.
