AI Infrastructure Is Creating a Structural Memory Crunch Across the Technology Industry
By Malik Saadi | 25 Mar 2026 | IN-8084
NEWS | MWC26 Barcelona and embedded world 2026 Highlight Industry Anxiety Around Memory Supply
Memory supply constraints emerged as one of the most frequently discussed topics across MWC26 Barcelona. Conversations with semiconductor vendors, device manufacturers, hyperscalers, and network equipment suppliers consistently pointed to growing concerns about the availability and cost of semiconductor memory as Artificial Intelligence (AI) infrastructure continues to scale globally.
Demand from data centers, and AI data centers in particular, which are deploying increasingly powerful AI clusters to support large-scale training and inference workloads, is placing major constraints on the memory market as a whole. These systems require vast quantities of memory—particularly High-Bandwidth Memory (HBM)—to feed data into AI accelerators from companies such as NVIDIA and AMD. At the same time, memory manufacturers, including Samsung Electronics, SK hynix, and Micron, are increasingly prioritizing production of HBM and server-class Dynamic Random-Access Memory (DRAM) modules, which command higher margins and support the growing AI infrastructure market.
The resulting reallocation of manufacturing capacity toward AI-oriented memory products is tightening supply for mainstream memory used in smartphones, Personal Computers (PCs), networking equipment, and other consumer electronics devices. This shift is raising concerns across the technology supply chain that increasing memory costs could translate into higher Bill of Materials (BOM) costs and, ultimately, higher retail prices for a wide range of digital products.
Indeed, every single discussion we had at MWC26 indicated that industry players across the supply chain are increasingly anxious about the shortage of memory and the potential impact on end-device pricing. However, the issue is systemic across the technology ecosystem. Because most companies depend on the same small group of memory suppliers, the shortage is unlikely to significantly influence competitive differentiation, even though it may affect margins, product roadmaps, and device pricing.
embedded world 2026 reinforced and extended the memory anxiety expressed at MWC26, this time from the perspective of the industrial and embedded systems community. While MWC26 conversations were largely dominated by hyperscalers, device Original Equipment Manufacturers (OEMs), and network equipment vendors, embedded world surfaced the impact on broader and often overlooked segments of the technology supply chain: industrial automation, medical electronics, infrastructure control, and the Internet of Things (IoT). The squeeze is already hitting hardware players that use higher-performance Systems-on-Chip (SoCs), such as Intel’s Core Ultra 3 platform from its client portfolio (with integrated Graphics Processing Units (GPUs) and Neural Processing Units (NPUs)), which have recently been subject to much longer lead times. Vendors deploying lower-end Microcontroller Units (MCUs) remain more optimistic, as the flash memory in their systems is manufactured on separate process nodes from those used for DRAM and HBM.
The embedded sector is experiencing a particularly acute form of the crisis. Wafer capacity previously allocated to industrial-grade embedded memory is being systematically redirected toward HBM production, stripping the embedded supply base of its foundational components. Mature-node capacity in the 28 Nanometer (nm) to 65 nm range—the backbone of embedded and industrial designs—is no longer expanding. Instead, it faces hidden capacity shrinkage as aging equipment goes unmaintained and Capital Expenditure (CAPEX) flows almost exclusively toward advanced AI-driven nodes. Embedded OEMs such as Advantech and Kontron, whose competitive models rest on long lifecycle guarantees, now report spot-market premiums of 300% to 500% and hardware lead times consistently exceeding 30 weeks. DDR4, still widely used in embedded and industrial systems, is effectively being treated as obsolete by suppliers, even as meaningful demand persists, forcing costly and disruptive BOM transitions.
On the innovation side, embedded world 2026 showcased a wave of next-generation modular memory solutions designed to improve efficiency in constrained environments. Innodisk’s LPCAMM2 Wide Temp module—delivering LPDDR5X speeds at a 60% reduction in mounting footprint—and similar CAMM2-based architectures signal the industry’s shift toward memory designs that prioritize bandwidth and thermal performance in space-constrained edge deployments. These innovations reflect an emerging design philosophy: in an era of structural memory scarcity, efficiency of architecture matters as much as availability of supply.
The concerns raised at MWC26 and embedded world 2026 strongly align with ABI Research’s previous analysis of the memory market, which has highlighted the structural shift in demand driven by AI infrastructure. See the following for more information: “The Memory Super-Cycle: Can Used GPUs Alleviate AI Market Demand?,” “Mobile Memory Technologies: The Next Competitive Frontier for On-Smartphone Gen AI and Agentic Intelligence,” and “Impacts of Memory Shortage on the eSIM Smartphone Market.” Together, these studies highlight how the rapid expansion of AI computing is fundamentally reshaping the economics of the memory market.
IMPACT | AI Demand Is Reallocating Memory Supply Across the Entire Technology Stack
The discussions at MWC26 and embedded world 2026 reinforce ABI Research’s view that the current memory shortage is not simply a cyclical semiconductor supply issue, but rather the result of a structural shift in demand dynamics.
For more than a decade, consumer electronics—particularly smartphones and PCs—were the dominant drivers of memory demand. Today, AI infrastructure is rapidly overtaking those markets. AI servers require significantly greater memory capacity and bandwidth than traditional enterprise computing systems, driving a surge in demand for HBM and high-performance DRAM modules.
Because memory manufacturing capacity is limited and concentrated among a small number of suppliers, increased AI demand effectively reallocates supply across the technology ecosystem. As suppliers prioritize higher-margin AI-oriented memory products, availability of memory for mainstream device markets becomes increasingly constrained.
This shift is already beginning to affect the cost structure of multiple device categories. Memory accounts for a substantial share of the BOM for many products—including smartphones, PCs, and networking equipment—meaning price increases can quickly cascade into higher retail prices. Lower-cost device segments are particularly exposed to this dynamic: memory represents a larger proportion of their overall production cost, so any increase in memory prices erodes their margins faster than it does for larger systems, such as AI servers, where compute and networking silicon account for a relatively larger share of total system cost.
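A quick back-of-envelope calculation illustrates why low-cost devices are more exposed. All figures below are hypothetical assumptions for illustration, not ABI Research data:

```python
# Hypothetical illustration: the same memory price increase hits a
# low-cost device's BOM much harder than a large system's, because
# memory is a bigger share of the low-cost device's total cost.

def bom_impact(bom_cost, memory_share, memory_price_increase):
    """Return the percentage increase in total BOM cost when the
    memory portion of the BOM rises by `memory_price_increase`."""
    memory_cost = bom_cost * memory_share
    added_cost = memory_cost * memory_price_increase
    return added_cost / bom_cost * 100

# Budget smartphone: memory assumed to be 25% of a $150 BOM.
phone = bom_impact(bom_cost=150, memory_share=0.25, memory_price_increase=0.40)

# AI server: memory assumed to be only 8% of the BOM, since compute
# and networking silicon dominate total system cost.
server = bom_impact(bom_cost=20_000, memory_share=0.08, memory_price_increase=0.40)

print(f"Budget phone BOM increase: {phone:.1f}%")   # 10.0%
print(f"AI server BOM increase: {server:.1f}%")     # 3.2%
```

The same 40% memory price rise adds roughly three times more to the budget phone's relative BOM cost than to the server's, which is why margin erosion concentrates at the low end.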
However, the systemic nature of the memory shortage means that the challenge affects virtually all market participants. Because the same limited group of suppliers dominates advanced memory production, the resulting supply constraints are unlikely to create sustained competitive advantages for individual device vendors. Instead, the shortage represents a shared industry constraint that may increase costs across multiple markets simultaneously.
More broadly, ABI Research’s analysis of mobile AI architectures suggests that the memory issue extends beyond supply availability alone. As Generative Artificial Intelligence (Gen AI) and agentic systems become integrated into devices, networks, and data centers, memory bandwidth, memory hierarchy design, and data movement efficiency will become increasingly important determinants of system performance.
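Why bandwidth becomes a performance determinant can be sketched with a simple memory-bound bound: during autoregressive decoding, every generated token must stream the model's weights from memory, so bandwidth caps throughput regardless of compute. The model size, precision, and bandwidth figures below are illustrative assumptions only:

```python
# Back-of-envelope sketch of why memory bandwidth bounds Gen AI
# inference speed. All figures are illustrative assumptions.

def max_tokens_per_second(params_billion, bytes_per_param, bandwidth_gb_s):
    """Upper bound on decode tokens/s for a memory-bound LLM:
    each token requires streaming all weights from memory once."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    bandwidth_bytes = bandwidth_gb_s * 1e9
    return bandwidth_bytes / model_bytes

# A 7-billion-parameter model at 2 bytes/param (FP16) on an assumed
# 100 GB/s mobile memory bus.
fp16 = max_tokens_per_second(7, 2, 100)

# The same model quantized to 1 byte/param (INT8) doubles the bound
# without any change in raw compute.
int8 = max_tokens_per_second(7, 1, 100)

print(f"FP16: {fp16:.1f} tok/s, INT8: {int8:.1f} tok/s")
```

The point of the sketch is that halving bytes moved doubles the achievable token rate, which is why memory hierarchy and data movement efficiency, not just capacity, shape on-device Gen AI performance.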
In other words, the industry is approaching a period in which memory architecture will play a central role in defining computing capabilities across both cloud and device markets.
RECOMMENDATIONS | Prioritize Memory Efficiency and System-Level Optimization
Given the structural nature of the memory supply challenge, companies should focus on strategies that improve system efficiency, rather than relying on short-term supply expansion.
- First, device manufacturers and infrastructure vendors should prioritize memory-efficient system design. Software optimization techniques—including model compression, quantization, and optimized inference pipelines—can significantly reduce memory requirements while maintaining performance.
- Second, semiconductor companies and platform providers should accelerate development of next-generation memory architectures. Innovations in advanced packaging, memory hierarchy design, and next-generation memory technologies such as LPDDR6 will play an important role in supporting future AI workloads across both infrastructure and device environments.
- Third, companies across the technology ecosystem should strengthen supply chain resilience through longer-term procurement agreements and diversified sourcing strategies. While these approaches cannot eliminate supply constraints, they can help reduce volatility and improve planning for product development cycles.
- Finally, organizations should recognize that memory constraints are likely to remain a recurring feature of the AI era. As AI models grow in scale and complexity, memory capacity and bandwidth will increasingly shape the pace at which computing infrastructure can expand.
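The first recommendation's software techniques can be made concrete with a minimal sketch of post-training quantization, assuming a simple symmetric INT8 scheme (not any specific vendor toolkit):

```python
import numpy as np

# Minimal sketch of post-training quantization's memory effect,
# assuming a single symmetric INT8 scale for the whole tensor.

weights = np.random.randn(1024, 1024).astype(np.float32)  # 4 MiB of FP32 weights

scale = np.abs(weights).max() / 127.0                     # one scale per tensor
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

print(weights.nbytes // 1024, "KiB FP32")  # 4096 KiB
print(q.nbytes // 1024, "KiB INT8")        # 1024 KiB

# Dequantize to check fidelity: rounding error is bounded by scale/2.
err = np.abs(q.astype(np.float32) * scale - weights).max()
assert err <= scale / 2 + 1e-6
```

A 4x reduction in weight memory, at some cost in numerical precision, directly relaxes both the capacity a device must provision and the bandwidth each inference pass consumes.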
In this environment, the companies best positioned to succeed will not necessarily be those with the greatest access to memory supply, but those capable of architecting systems that deliver higher performance through more efficient use of memory resources.