Google Cloud expands its partnership with Intel in AI infrastructure, focusing on Xeon processors and custom chips.

In summary: Google Cloud and Intel have announced an expanded multi-year partnership focused on AI infrastructure, covering both CPU deployment and the co-development of custom chips. Google Cloud will continue to deploy Intel's Xeon 6 processors across its global fleet, specifically in its C4 and N4 instances, while the two companies broaden their collaboration on Infrastructure Processing Units (IPUs), which offload networking, storage, and security tasks from host CPUs in large-scale AI environments. The news comes shortly after Intel's stock rose roughly 33%, and just two days after the company agreed to be the foundry partner for Tesla's Terafab megaproject.

The partnership's central argument, as articulated by both firms, is that GPU accelerators alone cannot meet the requirements of contemporary AI infrastructure. Intel CEO Lip-Bu Tan commented, "AI is transforming how infrastructure is designed and scaled. Effective AI scaling necessitates more than just accelerators; it calls for balanced systems. CPUs and IPUs are crucial for achieving the performance, efficiency, and flexibility that today's AI workloads require." The message is deliberate: over the past two years Intel has repositioned itself, moving from its former dominance in general-purpose computing to a narrower thesis that CPUs and custom infrastructure silicon play a critical role in AI deployments, a role that GPU-centric narratives have largely overlooked.

Amin Vahdat, Google's senior vice president and chief technologist for AI infrastructure, made the demand-side case for CPUs. "CPUs and infrastructure acceleration are foundational to AI systems—from training orchestration to inference and deployment," he stated, pointing to the companies' nearly two-decade partnership and expressing confidence that Intel's Xeon roadmap can meet rising performance and efficiency demands. Framing the partnership as a long-term commitment to CPU development, rather than a one-off procurement deal, signals that Google has planned its infrastructure around Intel's forthcoming releases, spanning both the Xeon family and the joint IPU initiative.

The CPU side of the partnership centers on Intel's Xeon 6 processors, which Google Cloud uses in its workload-optimized C4 and N4 instance types. Google reports that C4 instances deliver more than twice the total-cost-of-ownership advantage of the previous generation, a gain it attributes to the Xeon 6's improved performance and power efficiency. The agreement also commits both companies to multi-generational collaboration, meaning Google's infrastructure planning is aligned with Intel's future CPU roadmap. In parallel, Google is deepening its custom-silicon commitments on the accelerator side: it has provided Anthropic with about one gigawatt of TPU capacity via Broadcom, securing Anthropic's AI infrastructure through 2027 and beyond. The strategy underscores Google's simultaneous expansion in both merchant and custom silicon.

The architectural context explains the timing of this public commitment. As AI workloads shift from GPU-heavy training toward distributed, latency-sensitive inference that runs continuously across vast server fleets, the economics of AI infrastructure change. Inference demands sustained CPU capacity for orchestration, data pre-processing, and system management to a degree that training runs do not. Google's endorsement of the Xeon 6 for its C4 and N4 instances rests in part on the bet that CPU efficiency will only grow in importance.
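The division of labor described above can be sketched in a few lines of illustrative Python. The function names are hypothetical and the logic is deliberately trivial; the point is that only the middle step runs on an accelerator, while everything around it is CPU work:

```python
def preprocess(request: str) -> list[int]:
    # CPU-side work: tokenize the raw request into model inputs.
    return [ord(c) % 256 for c in request]

def accelerator_infer(tokens: list[int]) -> list[int]:
    # Stand-in for the GPU/TPU call; in production this is the only
    # step that actually runs on an accelerator.
    return [t + 1 for t in tokens]

def postprocess(outputs: list[int]) -> str:
    # CPU-side work again: detokenize and format the response.
    return "".join(chr(t) for t in outputs)

def serve(request: str) -> str:
    # Orchestration (batching, routing, retries) also lives on the
    # CPU; it is omitted here for brevity.
    return postprocess(accelerator_infer(preprocess(request)))
```

In a real serving stack, the pre- and post-processing steps run on every request, around the clock, which is why inference fleets consume steady CPU capacity in a way training clusters do not.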

The custom IPU initiative is the partnership's other pillar: the joint development of Infrastructure Processing Units, programmable ASIC-based accelerators that take over networking, storage, and security tasks so host CPUs can concentrate on application processing and AI workloads. In hyperscale environments, offloading these infrastructure functions to dedicated accelerators can significantly improve utilization, energy efficiency, and workload consistency. The announcement frames the IPU collaboration as an expansion of existing joint work rather than a retrenchment, though the technical specifics of the enhanced program, including die design, process nodes, performance targets, and deployment timelines, have not been publicly disclosed.
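A back-of-envelope model makes the offload argument concrete. The 25% infrastructure overhead below is an assumed figure for illustration, not a number from the announcement:

```python
def app_cores(total_cores: int, infra_fraction: float, ipu_offload: bool) -> float:
    """Cores left for application work on one host.

    Without an IPU, a fraction of every host CPU is consumed by
    networking, storage, and security (the "infrastructure tax").
    With an IPU, that work moves off the host CPU entirely.
    """
    tax = 0.0 if ipu_offload else total_cores * infra_fraction
    return total_cores - tax

# A hypothetical 64-core host with a 25% infrastructure tax:
print(app_cores(64, 0.25, ipu_offload=False))  # 48.0 cores for applications
print(app_cores(64, 0.25, ipu_offload=True))   # 64.0 cores for applications
```

Multiplied across a fleet of thousands of hosts, that recovered fraction of every CPU is the utilization and efficiency gain the article alludes to.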

Nvidia, which posted $68.1 billion in revenue for the fourth quarter of 2025, up 73% year on year, and pitched its full-stack platform as the default AI infrastructure at its GTC 2026 conference, is the competitive backdrop for both halves of the Intel-Google partnership. Intel is not trying to displace Nvidia's GPUs for training; it is arguing that the efficiency gains lie in the surrounding system: the CPUs orchestrating the workflow, the IPUs absorbing infrastructure overhead, and the interconnects binding everything together. That framing aligns neatly with Google, which has the scale to validate the claims in practice and every incentive to reduce dependence on a single accelerator vendor.

The Google partnership arrives at a transformative moment for Intel. Just two days earlier, Intel became the primary foundry partner for Terafab, a $25 billion joint venture among Tesla, SpaceX, and xAI that aims to reach one terawatt of annual AI compute.
