Intel Brings AI to PCs at Innovation, but Making a Commercial Success out of On-Device AI Requires Further Thought

By Reece Hayden | 4Q 2023 | IN-7089

The decision to extend Meteor Lake processors to Personal Computers (PCs) made headlines at Intel Innovation and adds further weight to the trend toward on-device Artificial Intelligence (AI) spurred by Qualcomm’s and AMD’s announcements earlier this year. But hardware is just one of three components that need to come together before these companies can start celebrating. Vendors must start focusing on supporting and enabling software innovation, and on creating a Return on Investment (ROI)-led Go-to-Market (GTM) strategy.

PC AI Takes Center Stage at Intel Innovation, but It Is Not Alone in the On-Device Market

NEWS


Existing Artificial Intelligence (AI), especially generative AI, was born in and still functions mainly in the cloud. There are multiple reasons for this: the massive compute power needed for training and inferencing, especially with the advent of billion- to trillion-parameter language models; the memory capacity required to support huge models and datasets; and power consumption that far exceeds the capabilities of edge environments or devices. But from an end-user perspective, the edge or device makes more sense for AI deployment, as it reduces latency, eliminates networking costs, improves data security, reduces cloud lock-in, and, most importantly, ensures that user data are protected. Silicon handling edge AI inference has already shown significant growth, with computer vision and graph-based AI models increasingly being deployed at the edge and in some devices (e.g., smartphones), but Personal Computers (PCs) have, until now, been largely untouched.

This is changing. At Intel Innovation, Intel Chief Executive Officer (CEO) Pat Gelsinger’s most exciting announcement centered on AI PCs. He announced that by 2024, Intel’s new core processor, Meteor Lake (aka Core Ultra), would be deployed in laptops and PCs to support on-device AI inferencing. Meteor Lake processors are chiplet-based designs with a tiled architecture, including a performance Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Input/Output (I/O) interface, and Intel’s first-ever Neural Processing Unit (NPU) for handling typical AI inference workloads. More advanced workloads, including generative AI, may be handled by a combination of the CPU, GPU, and NPU, depending on the use case. This move underpins Intel’s wider strategy to regain primacy in the PC/laptop hardware market and shift commercial exposure away from the (NVIDIA-dominated) AI data center market. As part of Intel’s showcase, it demonstrated use cases across modalities using Stable Diffusion and Llama 2, as well as simpler data analytics AI algorithms.
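
To make the hardware story concrete, the sketch below shows how a developer might steer an inference workload toward a specific Meteor Lake engine using Intel’s OpenVINO runtime. The model file is hypothetical, and NPU availability depends on the OpenVINO version, platform, and installed drivers, so treat this as an illustrative sketch rather than a reference implementation.

```python
# Illustrative sketch: routing an AI inference workload to a specific engine
# (CPU, GPU, or NPU) with Intel's OpenVINO runtime. The model path is
# hypothetical; the "NPU" device assumes a recent OpenVINO release and a
# Meteor Lake-class platform with the appropriate drivers installed.
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g., ['CPU', 'GPU', 'NPU'] on an AI PC

model = core.read_model("image_classifier.xml")  # hypothetical IR model

# Compile for the NPU if present; otherwise let OpenVINO pick a device.
device = "NPU" if "NPU" in core.available_devices else "AUTO"
compiled = core.compile_model(model, device_name=device)

# Run a single inference on dummy input matching the model's input shape.
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled(input_tensor)[compiled.output(0)]
```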

Intel’s decision to extend Meteor Lake to the PC looks like a good one. It will spur further innovation within its software ecosystem, especially around productivity-related AI tools, which will improve Intel’s position in the wider AI ecosystem. In addition, its “chiplet” approach will be well received by Original Equipment Manufacturers (OEMs), as it offers a more flexible, customizable response to short AI innovation cycles. It should also save significant energy by improving the management of applications and hardware resources, extending PC battery life. The PC market needs a shot in the arm, and Intel’s AI deployment may offer just that.

But Intel is not the only vendor looking to build on-device AI. Qualcomm has unsurprisingly been bullish on this opportunity, backed by a strong partnership with Meta. Announcements have been flowing all year, starting with the deployment of Meta’s Llama 2 on Android devices leveraging the Snapdragon platform. Recently, it extended its partnership with Meta to provide AI hardware for Quest 3 and Meta’s new Ray-Ban smart glasses. AMD has also been active in this market, targeting laptops with Ryzen AI hardware (dedicated AI processing silicon on x86), alongside Radeon software to support accessibility.

Hardware Is Just the Start of the Story; Software Is the Next Chapter

IMPACT


With Intel, Qualcomm, and AMD making waves in the on-device AI market, plenty of work is still needed on the software side to complete the circle. Three problems stick out when deploying AI inferencing (and potentially training) on devices: 1) AI power consumption; 2) growing memory requirements for large models; and 3) inefficient and ineffective distributed training and fine-tuning methods. ABI Research has identified several ways that software innovation can begin to solve these challenges and accelerate on-device AI’s viability:

  • Accelerate Innovation in Tailored, Fine-Tuned AI Models: Market headlines remain focused on bigger and “better” models as they seek improved output performance and accuracy. However, these “giant” models with billions or even trillions of parameters are resource intensive, require massive memory, and so are not fit for the device. Instead, Research and Development (R&D) should look to accelerate development of “tailored” models with fewer than 15 billion parameters. These already deliver “good enough” accuracy for most device-level use cases and fit more comfortably within device hardware constraints. Big tech has already shown advances in this area with Meta’s Llama 2 7B, DALL-E 2 (c. 3.5 billion parameters), and Stable Diffusion (c. 1 billion parameters), but more needs to be done, especially around fine-tuning these models for specific device-level use cases.
  • Support On-Device Learning Advancements: Devices create a huge amount of data that can be used for training models centrally and fine-tuning on-device models. However, on-device learning techniques are still in their early stages, and significant advancements are needed in few-shot and federated learning. From a consumer perspective, lowering risk requires continuous model learning using limited, unstructured/unlabeled data. From the vendor’s perspective, they need to be able to use data created by devices to train centralized models while retaining privacy guarantees, as federated learning does (a minimal illustration of federated averaging follows this list).
  • Device-Specific Fine-Tuning & Optimization Techniques: Moving AI to devices requires more efficient, condensed models that can still perform effectively in a resource-constrained environment. Hyperparameter tuning and other optimization techniques (e.g., quantization and pruning) should be front and center of any on-device software strategy (see the second sketch after this list).
  • Hybrid AI Should Be the Goal: On-device AI is not a universal solution. Instead, hybrid AI should be the goal for vendors. Developing a software platform that sits above the hardware and can seamlessly transfer workloads from the device and edge to the cloud, depending on compute or financial requirements (performance, cost, security, etc.), will help alleviate AI fears and enable seamless experiences.
  • Device-Level Security: Accelerating either enterprise or consumer adoption of on-device AI means convincing buyers that their proprietary data are safe. Independent Software Vendors (ISVs) need to build security-focused solutions that create “walled gardens” around device data.
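
As a minimal illustration of the federated learning point above, the sketch below implements federated averaging (FedAvg) over simulated devices. The model is a plain weight vector and the device datasets are synthetic; it is meant only to show the pattern of local updates being aggregated centrally without raw data leaving the device.

```python
# Minimal federated averaging (FedAvg) sketch: each simulated device trains a
# copy of a shared model on its local data, and only the updated weights
# (weighted by local sample counts) are aggregated by the server.
# Synthetic data; illustration only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training: a few steps of least-squares gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulated devices, each holding a small private dataset.
global_w = np.zeros(3)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

for _ in range(10):  # federated rounds
    updates, sizes = [], []
    for X, y in devices:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # Server-side aggregation: sample-size-weighted average of device weights.
    global_w = np.average(np.stack(updates), axis=0, weights=np.array(sizes))
```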

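Similarly, for the optimization bullet, the sketch below applies post-training dynamic quantization using PyTorch’s built-in utility, one of several compression techniques vendors could use. As a rule of thumb, a 7 billion parameter model needs roughly 14 GB of memory at FP16 and around 3.5 GB at INT4, which is why quantization and tailored models go hand in hand on devices. The toy network here is a placeholder, and real size and latency gains will vary by model and hardware.

```python
# Sketch of post-training dynamic quantization with PyTorch: Linear-layer
# weights are stored as INT8 and dequantized on the fly, cutting their memory
# footprint roughly 4x versus FP32. The toy model is a placeholder for
# whatever network a vendor actually wants to shrink for the device.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 256),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference works the same way on the quantized copy.
x = torch.randn(1, 512)
with torch.no_grad():
    out = quantized(x)
print(out.shape)  # torch.Size([1, 256])
```
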
Qualcomm has already been very active in supporting software advances. Its announcement with Meta was just the start. It has also demonstrated how Stable Diffusion can be deployed on smartphones, and it recently partnered with Microsoft to scale AI capabilities on devices across the consumer, enterprise, and industrial domains. This is particularly interesting given Microsoft’s recent activity embedding Copilot across its suite of productivity tools. Intel has demonstrated generative AI models on its new AI PCs, but it remains unclear whether it will partner with model providers or simply rely on in-house generative models.

Commercial Success Is the Conclusion, but It Will Be Tough to Achieve

RECOMMENDATIONS


Completing the technological circle with deep software innovation will not be sufficient to make on-device AI targeted at laptops/PCs or consumer devices (like Augmented Reality (AR)/Virtual Reality (VR) headsets) a commercial success. Development costs will be extraordinarily high, but with highly elastic demand in the device market, these R&D costs cannot be fully pushed onto the consumer through an increased Average Selling Price (ASP) without a clear ROI. On top of this, both segments show signs of saturation, as technology refresh cycles in the Business-to-Business (B2B) and Business-to-Consumer (B2C) markets continue to lengthen. To overcome these market inhibitors, vendors should focus on proving ROI, which relies on software to build a “productivity-led” value proposition:

  • Productivity and ROI Are the Key Messages: Convincing consumers or enterprises to refresh usable, high-performance technology will be the primary challenge for vendors. Only by quantifiably proving that on-device AI drives individual productivity, saves time and operating costs (e.g., lower power consumption and longer battery life), and improves output quality at an acceptable price will vendors be able to generate sufficient demand to justify heavy investment.
  • Engaging with Generative AI Leaders Will Help Make News Headlines, but Enabling All AI Frameworks Is Equally Important: Obviously, generative AI will be an important component of on-device AI, and showcasing Stable Diffusion or other multi-modal generative models will certainly highlight hardware capabilities, given the complexities involved. But other AI frameworks should not be forgotten. On-device AI must also be able to support predictive AI and mesh it with generative AI to enable different services, such as productivity applications, depending on requirements. Using generative models for every use case is at best unnecessary and at worst unfeasible due to performance and power constraints.
  • Vendors Should Build “Go-to-Market” Strategy around Killer Applications, Not Hardware: Building the ROI and productivity message is not about hardware, but about applications and, more than that, about finding the killer application that resonates with both consumers and enterprises. Partnering with “productivity-focused” ISVs like Microsoft and Adobe can help build this message.
  • Open Source Will Help: Developing killer new AI productivity applications is essential for on-device AI’s value proposition. Building around an open-source strategy that focuses on giving access to developers and spurring the pace of innovation will help. Intel’s commitment to this is clear with Developer Cloud, OpenVINO, and oneAPI, but it now needs to find ways to tie these into its messaging for AI PCs.
  • Understand Potential Customers: The B2B and B2C markets will be very different propositions. The B2B market will be more receptive to this messaging: not only will AI-infused devices support knowledge worker productivity, but they can also be applied within more industrial domains in different end-user devices, and manufacturing across verticals should be explored as a strong market. On the consumer side, gaming could be particularly interesting; it is a massive market with significant on-device AI opportunities and customers willing to pay a premium for enhanced experiences, so ABI Research sees it as a potential adjacent channel to market for on-device AI hardware.

For Intel, Qualcomm, AMD, and other vendors looking to build out this market space, one question will keep being asked: will NVIDIA turn its attention to this market? ABI Research’s answer is yes. NVIDIA not only has the hardware, but it also has AI models capable of running at the device level. In addition, its DGX Cloud walled garden already boasts plenty of developers creating applications, and it has started to build out partnerships with ISVs (most notably, its partnership with Hugging Face, onboarding developers onto its platform). On top of this, NVIDIA has experience at the device level and a dominant position in the all-important gaming market. All in all, NVIDIA could make a very strong play in this market, and competitors should be wary.

 
