Google Cloud strengthens its partnership with Intel on AI infrastructure, focusing on Xeon processors and custom chips.

In summary: Google Cloud and Intel have announced an expanded, multi-year AI infrastructure collaboration covering both CPU deployment and the co-development of custom chips. Google Cloud will continue to run Intel’s Xeon 6 processors across its global infrastructure in C4 and N4 instances, while the two companies deepen their joint work on custom Infrastructure Processing Units (IPUs) designed to offload networking, storage, and security tasks from host CPUs in hyperscale AI environments. The announcement comes as Intel’s stock rose roughly 33% over the week, shortly after the company became the foundry partner for Tesla’s Terafab megaproject.

The partnership's central argument, as presented by both firms, is that GPU accelerators alone cannot meet the demands of modern AI infrastructure. Intel CEO Lip-Bu Tan said in the accompanying announcement: “AI is transforming how infrastructure is created and scaled. Scaling AI necessitates more than just accelerators — it demands balanced systems. CPUs and IPUs are essential for achieving the performance, efficiency, and flexibility that modern AI workloads require.” The wording is deliberate. Over the past two years, Intel has repositioned itself away from the general-purpose computing market it once dominated, arguing instead that CPUs and custom infrastructure silicon play a foundational role in AI deployments that GPU-centric framings tend to overlook.

Amin Vahdat, Google’s Senior Vice President and Chief Technologist for AI Infrastructure, made the argument from the demand side. “CPUs and infrastructure acceleration remain vital for AI systems — from training orchestration to inference and deployment,” he noted, adding, “Intel has been a reliable partner for almost twenty years, and their Xeon roadmap assures us that we can continue to fulfill the increasing performance and efficiency demands of our workloads.” The framing matters: the partnership is presented as a long-term commitment to a multi-generational CPU roadmap rather than a one-off procurement deal, which suggests Google has made long-range planning decisions around Intel’s product trajectory, spanning both the Xeon line and joint IPU development.

The CPU side of the partnership centers on Intel’s Xeon 6 processor family, which Google Cloud has deployed in its C4 and N4 instance types. Google claims that C4 instances deliver more than a 2x total-cost-of-ownership (TCO) advantage over earlier configurations, a metric that reflects the performance and energy-efficiency gains Intel promotes as Xeon 6’s core selling point. The agreement extends to future generations as well, signaling that Google is aligned with Intel’s Xeon roadmap and treats upcoming CPU releases as a known quantity in its infrastructure planning. In parallel, Google is scaling its custom silicon efforts on the accelerator side, supplying Anthropic with roughly one gigawatt of TPU capacity through Broadcom to secure Anthropic’s AI infrastructure through 2027 and beyond, an illustration of how Google is expanding its infrastructure capabilities across merchant and custom silicon at the same time.
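For context on how a price-performance claim of this kind is typically derived: a TCO-per-performance figure divides total cost (hardware plus energy over a service life) by a performance score, then compares across generations. The sketch below uses entirely hypothetical placeholder figures; none of these numbers come from Google or Intel.

```python
# Illustrative only: how a "2x TCO advantage" claim is typically computed.
# All figures below are hypothetical placeholders, not Google's or Intel's numbers.

def tco_per_perf(hardware_cost, power_watts, kwh_price, years, perf_score):
    """Total cost of ownership (hardware + energy over the service life)
    divided by a performance score."""
    energy_cost = power_watts / 1000 * 24 * 365 * years * kwh_price
    return (hardware_cost + energy_cost) / perf_score

# Hypothetical previous-generation server vs. a newer one.
old_gen = tco_per_perf(hardware_cost=8000, power_watts=350,
                       kwh_price=0.10, years=4, perf_score=100)
new_gen = tco_per_perf(hardware_cost=9000, power_watts=330,
                       kwh_price=0.10, years=4, perf_score=210)

# A modest hardware price increase can still yield ~2x TCO-per-performance
# if the performance score roughly doubles and power draw holds steady.
print(f"TCO-per-performance advantage: {old_gen / new_gen:.2f}x")
```

The point of the sketch is that such multipliers blend purchase price, energy consumption, and benchmark performance, so the headline "2x" depends heavily on the assumed workload mix and electricity costs.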

The CPU architecture context explains why this commitment is being publicized now. As AI workloads shift from GPU-intensive training, concentrated among a handful of hyperscalers, to large-scale inference, which is distributed, latency-sensitive, and runs continuously across vast server fleets, the cost structure of AI infrastructure changes. Inference demands sustained CPU capacity for orchestration, data pre-processing, and system management that training pipelines do not. Google’s commitment to Xeon 6 for its C4 and N4 instances rests in part on the belief that inference economics will make CPU efficiency a top priority in the coming years.

The custom IPU program is the more strategically significant part of the collaboration: an expansion of the joint development of Infrastructure Processing Units. IPUs are programmable, custom ASIC-based accelerators that take over networking, storage, and security functions that would otherwise run on host CPUs, freeing those CPUs to focus entirely on application and AI workloads. In hyperscale environments, where these infrastructure tasks consume a significant and growing share of compute, offloading them to a dedicated accelerator can substantially improve utilization, energy efficiency, and workload performance consistency. Intel and Google have collaborated on IPU development for years, and the announcement signals that this work is broadening rather than winding down. Technical specifics of the expanded program, including die design, process nodes, performance targets, and deployment timelines, have not been disclosed.

Nvidia is the implicit competitive benchmark for both halves of the Intel-Google partnership, having reported fourth-quarter 2025 revenue of $68.1 billion, up 73% year on year, and having used its GTC 2026 conference in March to pitch its full-stack platform as the default environment for AI infrastructure. Intel is not trying to displace Nvidia’s GPU accelerators for training workloads; instead, it argues that the systems surrounding those accelerators, including the CPUs handling orchestration, the IPUs absorbing network and storage demands, and the interconnects that tie them together, represent a substantial market where Intel can still compete.
