Nvidia's $2 billion wager on Marvell is not an investment; it's a revenue-generating mechanism.
Nvidia has committed $2 billion to Marvell Technology, integrating the chipmaker into its NVLink Fusion ecosystem. This partnership encompasses custom AI accelerators, silicon photonics, and 5G/6G infrastructure. The arrangement guarantees that every custom chip Marvell creates for hyperscalers like Amazon, Google, and Microsoft will still generate revenue for Nvidia through required platform components, effectively transforming a potential competitive threat into a revenue stream.
On Monday, Nvidia announced a $2 billion investment in Marvell Technology and a strategic alliance built around NVLink Fusion, a rack-scale platform that lets third-party silicon plug directly into Nvidia's proprietary interconnect fabric. Marvell's stock jumped nearly 13 percent on the news, while Nvidia's rose 5.6 percent, a clearly positive market response. But the deal is better read as infrastructure policy written in silicon than as a simple market win.
Under this partnership, Marvell will provide custom XPUs and NVLink Fusion-compatible scale-up networking, while Nvidia will supply other components such as Vera CPUs, ConnectX network interface cards, BlueField data processing units, NVLink interconnects, and Spectrum-X switches.
Additionally, the two companies plan to collaborate on silicon photonics—a technology that utilizes light rather than copper for data transfer between chips, meeting the speed demands of next-generation AI clusters. Jensen Huang, Nvidia's CEO, described this as a pivotal moment, stating, "The inference inflection has arrived," emphasizing the growing demand for token generation and the global rush to establish AI factories.
The strategic nuance lies in the architecture of NVLink Fusion itself. Each NVLink Fusion platform must incorporate at least one Nvidia product, be it a CPU, GPU, or switch. Nvidia also dictates which partners receive NVLink IP licenses. This ensures that the custom AI accelerators Marvell designs for hyperscalers, aimed at reducing dependence on Nvidia GPUs, will still generate revenue for Nvidia with each deployed rack. As noted by Tom’s Hardware, this acts as a tax on custom ASICs.
The deal reinforces an increasingly evident pattern. Nvidia has made a string of $2 billion investments recently, including in CoreWeave, Nebius, Synopsys, Coherent, and Lumentum. Each targets a different layer of the rapidly evolving AI infrastructure stack: cloud providers, chip-design tools, optical networking components, and now custom silicon. What they share is that each deepens the recipient's reliance on Nvidia's platform, while Nvidia gains both a financial stake and architectural influence over a potential rival.
Marvell stands out as a notable target because its most rapidly expanding segment is the design of custom AI accelerators used by hyperscalers to replace Nvidia GPUs. This segment generated $1.5 billion in revenue for fiscal 2026, with expectations to double by fiscal 2028. Marvell is currently engaged in 18 active custom silicon projects, including 12 for major companies like Amazon, Google, Microsoft, and Meta, along with six for emerging AI clients.
Amazon's Trainium chips, Microsoft's Maia accelerators, and Google's TPUs all rely on Marvell's design capabilities. By investing $2 billion and pulling Marvell into NVLink Fusion, Nvidia has ensured that the company building its competitors' chips is also paying Nvidia for the platform components those chips require.
Since its introduction at Computex, NVLink Fusion's partner list has grown quickly. Samsung Foundry joined in October to provide manufacturing support for its 3nm and 2nm nodes. In November, Arm became a partner, allowing its licensees to develop CPUs with native NVLink connectivity. SiFive joined in January, incorporating RISC-V into the ecosystem. Original partners also include Fujitsu, Qualcomm, MediaTek, Alchip, Astera Labs, Synopsys, and Cadence.
The breadth of this partner list signals that NVLink Fusion is establishing itself as the default interconnect standard for custom AI silicon, not because it is open, but because Nvidia's software ecosystem, particularly CUDA, is the path of least resistance for customers who need their hardware working immediately.
An open alternative, the Ultra Accelerator Link consortium backed by AMD, Intel, Broadcom, Cisco, Google, HPE, Meta, and Microsoft, aims to counter exactly this kind of lock-in. But analysts describe UALink as facing a "crisis of the commons": its members have conflicting priorities, its 128G specification lags behind accelerator deployment, and some of its key members have themselves taken Nvidia investment.
For Marvell's CEO, Matt Murphy, the deal resolves a practical challenge. He stated, "By connecting Marvell’s leadership in high-performance analog, optical DSP, silicon photonics, and custom silicon to Nvidia’s expanding AI ecosystem through NVLink Fusion, we are enabling customers to build scalable, efficient AI infrastructure." In other words, Marvell's hyperscaler clients want custom chips that slot seamlessly into the Nvidia infrastructure already in their data centers, and this partnership gives Marvell a sanctioned way to deliver them.
