Meta has finalized a multibillion-dollar agreement for Amazon's Graviton5 chips as the demand for AI computing exceeds its $135 billion capital expenditure budget.
Summary: Meta has signed a multibillion-dollar, multi-year agreement with Amazon to use tens of millions of Graviton5 ARM CPU cores in AWS data centers for agentic AI workloads. The chips are general-purpose processors rather than AI accelerators, suited to the CPU-intensive inference and orchestration work behind real-time reasoning and multi-step agents. The contract joins a procurement push exceeding $200 billion that spans Nvidia ($50B), AMD ($60B), CoreWeave ($35B), Nebius ($27B), Broadcom (custom silicon through 2029), and now Amazon, a sign that Meta's AI compute needs outstrip what any single supply chain can deliver.
Meta confirmed on Thursday a multibillion-dollar, multi-year contract with Amazon Web Services to deploy tens of millions of Graviton5 processor cores for AI workloads. The chips pack 192 Neoverse V3 cores each, built on a 3-nanometre process, and run in AWS data centers across the U.S.; Meta is renting the compute capacity rather than buying the silicon. What makes the deal notable is not what the chips do, chiefly CPU-intensive inference and orchestration for agentic AI, but who supplies them: Amazon competes directly with Meta in advertising, commerce, and increasingly AI. Meta is pouring billions into a rival's infrastructure because the compute needed to run AI agents exceeds what any one company can build, even one spending $115 billion to $135 billion on capital expenditure this year.
The workload
The split between training and inference has shaped the AI chip sector since the deep learning revolution began. Training a model is a GPU-hungry batch process; inference, running the trained model to serve users, has a different computational mix. The agentic workloads Meta is building demand far more CPU than conventional inference: real-time reasoning, code generation, search, and coordinating tasks across multiple models all lean on general-purpose processing. Santosh Janardhan, Meta's head of infrastructure, said adopting Graviton lets the company run the CPU-heavy workloads behind agentic AI efficiently at its scale, and AWS vice president Nafea Bshara said Meta chose Graviton5 "for price performance" despite having no shortage of supply options.
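To see why agentic workloads are CPU-heavy, consider a bare-bones agent loop. The sketch below is purely illustrative; the function names and tool format are invented, not Meta's actual stack. The model call is the only accelerator-bound step, while the surrounding orchestration, parsing tool requests, running tools, and assembling the next prompt, is ordinary general-purpose compute of the kind Graviton cores handle.

```python
# Hypothetical agent loop. All names (call_model, run_tool, the TOOL:/ANSWER:
# protocol) are illustrative assumptions, not a real API.

def call_model(prompt: str) -> str:
    # Accelerator-bound step: in a real system this is a forward pass on a
    # GPU or custom accelerator. Stubbed here with canned replies.
    if "search output" not in prompt:
        return "TOOL:search(quarterly revenue)"
    return "ANSWER:done"

def run_tool(request: str) -> str:
    # CPU-bound step: parsing, retrieval, ranking, string handling --
    # the general-purpose work that rented CPU cores absorb.
    query = request.split("(", 1)[1].rstrip(")")
    return f"results for '{query}'"

def agent(task: str, max_steps: int = 5) -> str:
    prompt = task
    for _ in range(max_steps):
        reply = call_model(prompt)            # accelerator
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):]
        tool_output = run_tool(reply)         # CPU: tool execution
        prompt = f"{task}\nsearch output: {tool_output}"  # CPU: prompt assembly
    return "gave up"

print(agent("summarize quarterly revenue"))   # prints "done"
```

Every model call in the loop is bracketed by CPU work, and multi-step agents multiply that overhead per user request, which is the computational mix the article describes.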
The agreement starts at tens of millions of Graviton5 cores with room to expand, runs at least three years, and places most capacity in U.S. data centers. Meta had used Graviton before, but only at small scale; this deal makes the chips core infrastructure. The newly announced Graviton5 delivers 25% more performance than its predecessor and cuts inter-core latency by 33% despite doubling the core count. It is available now in EC2 M9g instances in preview, with C9g and R9g variants due in 2026. In effect, Meta is becoming one of the largest single customers of Amazon's custom silicon program, in a competitor's data centers, because building equivalent capacity in-house would take longer than its agentic AI roadmap allows.
The buying spree
The Graviton contract is part of an unprecedented procurement initiative. In February 2026, Meta committed roughly $50 billion to Nvidia for millions of Blackwell and Rubin GPUs, Grace and Vera CPUs, and Spectrum-X networking gear. The same month it signed a roughly $60 billion deal with AMD for six gigawatts of custom Instinct MI450 GPUs on the 2nm CDNA 5 architecture, a deal that includes convertible performance warrants worth approximately 10% of AMD's equity. A $35 billion AI cloud partnership with CoreWeave secures dedicated capacity through December 2032, including early deployments of Nvidia's Vera Rubin platform for inference, and a $27 billion deal with Nebius adds further capacity. Meta's extended Broadcom partnership, running through 2029, covers several generations of its custom MTIA processors at 2nm, with more than a gigawatt of initial compute. Add the new multibillion-dollar Graviton agreement and Meta's commitments across these contracts top $200 billion, before counting the data centers, power systems, and internal engineering needed to put the hardware to work.
In March 2026, Meta introduced four new MTIA chips — the MTIA 300, 400, 450, and 500 — all engineered on the RISC-V architecture and produced by TSMC in collaboration with Broadcom. The company is now capable of rolling out new chip designs every six months or even sooner. The MTIA 400 is the first custom chip Meta claims matches the raw performance of top commercial products, while the 450 and 500 focus on gener
