Nvidia's $2 billion wager on Marvell is not an investment; it's a toll gate.

Nvidia has invested $2 billion in Marvell Technology, pulling the chipmaker into its NVLink Fusion ecosystem through a partnership that spans custom AI accelerators, silicon photonics, and 5G/6G infrastructure. The deal ensures that every custom chip Marvell builds for hyperscalers like Amazon, Google, and Microsoft still generates revenue for Nvidia through required platform components, turning what looked like a competitive threat into an ecosystem tax.

On Monday, Nvidia disclosed its $2 billion investment in Marvell Technology and announced a strategic alliance built around NVLink Fusion, the rack-scale platform that lets third-party silicon connect directly to Nvidia's proprietary interconnect fabric. Following the announcement, Marvell's stock rose nearly 13 percent, while Nvidia's gained 5.6 percent. The market read this as a deal; a more accurate reading is infrastructure policy implemented in silicon.

Under the partnership, Marvell will provide custom XPUs and NVLink Fusion-compatible scale-up networking, while Nvidia supplies the remaining components: Vera CPUs, ConnectX network interface cards, BlueField data processing units, NVLink interconnects, and Spectrum-X switches. The two companies will also collaborate on silicon photonics, a technology that uses light rather than copper to move data between chips at the speeds next-generation AI clusters demand. Jensen Huang, Nvidia's CEO, framed it in grand terms, declaring that "the inference inflection has arrived" and pointing to surging token-generation demand as the world races to build AI factories.

The strategic nuance lies in the architecture of NVLink Fusion itself. Every NVLink Fusion platform must incorporate at least one Nvidia product, whether a CPU, GPU, or switch, and Nvidia decides which partners can obtain NVLink IP licenses. That means the custom AI accelerators Marvell designs for hyperscalers, chips specifically intended to reduce their reliance on Nvidia GPUs, will still generate revenue for Nvidia with every rack deployed. As Tom's Hardware put it, it acts as a tax on custom ASICs.

This agreement extends a clear pattern. Nvidia has recently made a series of $2 billion investments, including stakes in CoreWeave, Nebius, Synopsys, Coherent, and Lumentum, each aimed at a distinct layer of the fast-developing AI infrastructure stack: cloud providers, chip design tools, optical networking components, and now custom silicon. The unifying theme is that each investment deepens the recipient's reliance on Nvidia's platform, while Nvidia gains both financial exposure to, and architectural influence over, potential rivals.

Marvell is a fascinating choice because its fastest-growing business is designing the custom AI accelerators that hyperscalers are using to replace Nvidia GPUs. The company's custom AI XPU business generated $1.5 billion in fiscal 2026 and is projected to double by fiscal 2028. Marvell currently has 18 active custom silicon projects: 12 devices for clients such as Amazon, Google, Microsoft, and Meta, and six for emerging AI customers.

Amazon's Trainium chips, Microsoft's Maia accelerators, and Google's TPUs all draw on Marvell's design capabilities. By investing $2 billion and folding Marvell into NVLink Fusion, Nvidia has effectively ensured that the company building its competitors' chips is also paying Nvidia for the components those chips need.

Since its launch at Computex, NVLink Fusion has rapidly expanded its partner roster: Samsung Foundry joined in October to provide manufacturing support on its 3nm and 2nm nodes, Arm became a partner in November, allowing its licensees to build CPUs with native NVLink connectivity, and SiFive joined in January, bringing RISC-V into the ecosystem. The original partners included Fujitsu, Qualcomm, MediaTek, Alchip, Astera Labs, Synopsys, and Cadence.

The broadening partner list matters because NVLink Fusion is emerging as the default interconnect standard for custom AI silicon, not because it is open, but because Nvidia's software ecosystem, particularly CUDA, offers the path of least resistance for customers who need hardware compatibility now.

The open alternative, the Ultra Accelerator Link consortium backed by AMD, Intel, Broadcom, Cisco, Google, HPE, Meta, and Microsoft, was formed to counteract exactly this kind of lock-in. But UALink is grappling with what analysts call a crisis of the commons: its members have conflicting priorities, its 128G specification release is lagging behind the pace of accelerator deployment, and many key members now have Nvidia investments to consider. Nvidia's financial stakes in companies nominally committed to an open standard raise valid concerns about whether that standard can evolve quickly enough to provide a true alternative.

For Marvell's CEO Matt Murphy, the deal addresses a practical limitation. "By linking Marvell's expertise in high-performance analog, optical DSP, silicon photonics, and custom silicon with
