AMD Expands Partnership with Meta on 6 GW of AI Compute, but Collaboration Goes Deeper
By Paul Schell |
02 Mar 2026 |
IN-8061
NEWS: AMD Will Provide up to 6 GW of Compute
AMD and Meta announced a multi-year agreement under which Meta has committed to deploying 1 Gigawatt (GW) of compute in 2026, increasing to 6 GW over five years if certain performance and share price conditions are met. The rollout will begin in 2H 2026 with shipments of a custom Instinct Graphics Processing Unit (GPU) based on AMD’s MI450 architecture, paired with 6th-Gen AMD EPYC Venice and Verano Central Processing Units (CPUs). The deployment will use AMD’s Radeon Open Compute (ROCm) software stack, AMD’s counterpart to NVIDIA’s Compute Unified Device Architecture (CUDA), and the AMD Helios rack-scale architecture, a joint Open Compute Project (OCP) spec design with Meta. Significant implications of this announcement include:
- AMD’s sale of a customized GPU built on the chiplet architecture of the Instinct platform and optimized for Meta’s workloads. This is distinct from a custom Application-Specific Integrated Circuit (ASIC) and leverages the relative ease of customization inherent to chiplet-based designs over monolithic chips.
- Deeper collaboration on software optimizations for ROCm on common Artificial Intelligence (AI) frameworks for both inference and training workloads. Progress on ROCm’s libraries and kernels will largely be transferable to other customers’ workloads and benefit the wider developer ecosystem deploying on Instinct.
- Shipments begin in 2H 2026, closely aligned with Meta’s data center buildout, although AMD Chief Executive Officer (CEO) Lisa Su has warned that supply chain tightness could affect this timeline.
This news comes alongside the recent AMD inflection point: data center GPU revenue overtaking CPU sales, reflecting significant demand from AI data center build-out and supply chain diversification by hyperscalers, neoclouds, AI labs like OpenAI and Anthropic, and other data center operators.
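The announced ramp, 1 GW deployed in 2026 rising to 6 GW cumulative over five years, can be sketched numerically. The evenly spaced year-by-year schedule below is purely an illustrative assumption; the actual tranche timing has not been disclosed.

```python
# Minimal sketch of the announced capacity ramp: 1 GW in 2026 growing to
# 6 GW cumulative over five years, IF milestones are met.
# ASSUMPTION: a linear year-by-year schedule, which is not disclosed.

def linear_ramp(start_gw: float, end_gw: float, years: int) -> list[float]:
    """Cumulative GW per year under an evenly spaced (assumed) ramp."""
    step = (end_gw - start_gw) / (years - 1)
    return [round(start_gw + i * step, 2) for i in range(years)]

ramp = linear_ramp(1.0, 6.0, 5)  # → [1.0, 2.25, 3.5, 4.75, 6.0]
```

Under this assumed schedule, roughly 1.25 GW of incremental capacity would come online each year after 2026; the real cadence will depend on the redacted delivery milestones and on supply chain capacity.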
IMPACT: More Than Another "Circular" Investment in AI Infrastructure?
The deal is similar to one struck between AMD and OpenAI in October 2025, under which these "lighthouse" customers can exercise warrants to purchase 160 million shares at US$0.01 each, vesting in tranches based on share price performance and technical delivery milestones, both of which are redacted in the associated 8-K filing. Broadly, this differs from the "circular" deals made by NVIDIA, in which AMD’s competitor has taken a stake in its customers, rather than the other way around. This signals several points, alongside wider industry trends:
- AMD is looking for deeper collaboration between developers at the forefront of innovation, seeking input on software optimizations and product roadmaps. This also gives Meta and OpenAI greater visibility into—and influence over—the future of AMD’s silicon and systems.
- NVIDIA is still the dominant player, but AMD is increasing efforts to catch up, particularly on its ROCm software stack, GPU accelerator libraries, and kernels.
- While confidence in AMD systems still trails confidence in NVIDIA’s, these deals represent a significant vote of confidence in AMD’s systems and the maturation of its hyperscale, systems-level offering, underpinned by the expertise internalized through the ZT Systems acquisition.
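The warrant economics described above can be sketched in a few lines. The 160 million share count and US$0.01 strike come from the announcement; any share price used to value the position is a hypothetical placeholder, since the vesting thresholds are redacted.

```python
# Sketch of the warrant economics: 160 million shares at a US$0.01 strike,
# vesting in tranches tied to (redacted) share price and delivery milestones.
# ASSUMPTION: the share price used below is a hypothetical placeholder.

SHARES = 160_000_000
STRIKE = 0.01  # US$ per share

def exercise_cost(shares: int = SHARES, strike: float = STRIKE) -> float:
    """Cash needed to exercise the full warrant."""
    return shares * strike

def intrinsic_value(share_price: float, shares: int = SHARES,
                    strike: float = STRIKE) -> float:
    """Value of the vested shares net of the (near-zero) exercise cost."""
    return shares * (share_price - strike)

cost = exercise_cost()                 # roughly US$1.6 Million in total
value = intrinsic_value(200.0)         # at an assumed US$200 share price
```

Because the strike is near zero, the warrant’s value is essentially the full market value of the shares, which is why the vesting conditions, not the exercise cost, are the economically meaningful lever for Meta and OpenAI.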
Among the most significant trends underpinning these deals is the diversification strategy of AI infrastructure owners. Supply chains remain tight, and Taiwan Semiconductor Manufacturing Company (TSMC), memory vendors such as SK Hynix, Samsung, and Micron, and chip packaging players are racing to build out supply capacity. Of the large, incumbent silicon vendors, only NVIDIA and AMD offer a mature AI systems product able to serve a broad range of customers, with Intel still racing to catch up and leverage its GPU and ASIC Intellectual Property (IP) for modern AI data centers. This diversification has led to the unique deal between Anthropic and Google, under which the latter will supply its custom Tensor Processing Units (TPUs), previously strictly reserved for in-house workloads. Ultimately, AMD’s bet on reserving capacity at TSMC, the only manufacturer currently able to serve the most advanced semiconductor nodes at scale, has paid off, as NVIDIA continues to sell every chip it can produce and customers continue to look at other data center accelerator options.
RECOMMENDATIONS: AMD Must Leverage Its Partnerships to the Full Extent
The use of warrants in the deals between AMD, Meta, and OpenAI means that more power rests in the hands of the customers, but there is also a tighter alignment of commercial interests: If AMD’s longer-term efforts succeed, the vested shares will be a huge boost for Meta and OpenAI. But this will only materialize if AMD’s efforts to gain market share from NVIDIA continue to bear fruit, and there are particular areas of the technology stack where collaboration is most needed:
- OCP Designs: The dissemination of the Open Rack Wide (ORW) spec, which is very specific to AMD (and Meta), follows a similar path to NVIDIA’s NVL72. It is an extremely dense and expensive design, appropriate only for a narrow subset of data center environments. AMD should increase efforts to leverage the hyperscale, systems-level expertise from ZT Systems to influence OCP specs appropriate for other environments, in particular on-premises deployments, as the agentic cycle picks up.
- Networking and Interconnect: Helios will use Ultra Accelerator Link (UALink) over Ethernet (UALoE), a watered-down version of the full spec, because there is currently no viable third-party Serializer/Deserializer (SerDes) alternative for scale-up; the design combines Broadcom’s SerDes with UALoE. How UALink develops over time, and its position in the market vis-à-vis NVLink, remains unclear, and AMD should provide more clarity and assurances should it choose to fully commit to UALink in the long term.
- Warrant Exercise Conditions: Publicizing—even partially—the technical conditions that must be met for the shares to vest would give confidence to prospective customers that certain shortcomings are being addressed. These will likely pertain to hardware shipment timelines, system integration, regulatory approvals, and, most importantly, software performance on ROCm.
Even though AMD is sacrificing another significant equity stake, the software collaboration with Meta will be leveraged by other customers, boosting the performance and broadening the applicability of Instinct GPUs for more AI workloads. This will trickle down to other potential customers and increase the Serviceable Addressable Market (SAM) as the Instinct platform aligns with evolving customer needs, which have, to date, largely been steered by the innovations and breakthroughs of AI labs such as Meta (notwithstanding its model development hiccups of 2025) and OpenAI.
Written by Paul Schell