IBM and Arm partner to bring AI software to mainframes, with no ship date announced.
In summary: On April 2, 2026, IBM and Arm announced a strategic partnership aimed at enabling Arm-based software to run on IBM Z and LinuxONE mainframes, which handle the majority of the world's regulated enterprise transactions. The collaboration focuses on three areas: virtualization for hosting Arm software environments on IBM hardware, security and compliance in regulated sectors, and long-term ecosystem interoperability. The goal is to bring the Arm-native AI software stack, including frameworks developed for cloud platforms like AWS, Google, and Microsoft, closer to enterprise data that IBM Z customers cannot move to the public cloud. IBM did not specify a shipping date, and both companies described the collaboration as a vision for the future rather than as existing products.
IBM and Arm are teaming up to bridge the gap between the most widely used AI software stack and the most critical enterprise hardware. The strategic collaboration announced on April 2, 2026, aims to allow Arm-based software to run on IBM Z and LinuxONE mainframes, foundational systems for transaction processing in banks, governments, and regulated enterprises that cannot simply transfer their data to the public cloud. This announcement indicates that the enterprise computing market has reached a stage where both architectures must coexist within a single system.
The challenge this partnership aims to address:
IBM Z and LinuxONE mainframes are based on IBM's s390x architecture. The AI and cloud-native software ecosystem, including PyTorch, TensorFlow, llama.cpp, and ONNX Runtime, has mainly been developed for x86 and increasingly for Arm. According to Arm’s estimates, nearly 50% of compute shipped to major hyperscalers in 2025 was Arm-based, with AWS Graviton, Google Axion, and Microsoft's AI infrastructure focused on Arm silicon. Arm has directly integrated its Kleidi AI libraries into PyTorch, ExecuTorch, ONNX Runtime, and other leading frameworks. Consequently, while there is a robust ecosystem of AI tools optimized for Arm, these require porting to function on s390x, a process that is time-consuming and costly, often lagging behind the pace of the main development efforts.
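The architecture gap described above is visible from inside any Python process. As a minimal illustrative sketch (not part of the announcement), the standard library's `platform` and `sysconfig` modules report the machine type that determines which prebuilt binary wheels, such as PyTorch's, can be installed on a given host:

```python
import platform
import sysconfig

# Report the CPU architecture the interpreter is running on.
# On an IBM Z Linux guest this is "s390x"; on Arm-based hosts such
# as AWS Graviton it is "aarch64"; on typical servers "x86_64".
machine = platform.machine()
print(f"machine: {machine}")

# The platform tag baked into binary wheels (e.g. a manylinux s390x
# tag vs. an aarch64 tag) decides whether a prebuilt package like
# PyTorch or ONNX Runtime can be installed at all on this host.
print(f"wheel platform tag: {sysconfig.get_platform()}")

# Frameworks with Arm-optimized builds publish aarch64 wheels;
# s390x typically requires a source build or a maintained port.
needs_port = machine == "s390x"
print(f"likely needs a source build or port: {needs_port}")
```

Running this on an s390x guest versus an aarch64 cloud instance shows why the same `pip install` command can succeed on one and fail on the other.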
For enterprises reliant on IBM Z as their record-keeping system—processing transactions, storing customer data, and running compliance-sensitive workloads—this results in a growing divide. AI inference needs proximity to the data, which resides on the mainframe, yet the AI frameworks are designed for different architectures. The partnership aims to bridge this divide without forcing enterprises to choose between their current infrastructure and access to the latest AI software.
Three focal points, with one key disclaimer:
IBM and Arm have structured their collaboration around three primary work areas. The first is virtualization: building tools that let Arm-based software environments run on IBM Z and LinuxONE platforms without requiring applications to be ported to s390x. The second is security and compliance: ensuring that Arm workloads on IBM hardware satisfy the data residency, encryption, and availability requirements of regulated industries such as banking, government, and healthcare. The third is long-term ecosystem interoperability: developing shared technology layers that give enterprises more software choice across both platforms as the collaboration matures.
A crucial disclaimer, clearly stated in IBM's press release, is that none of these developments are available yet. IBM noted: "While it's early days to share specifics, our intent is that the same features and qualities, such as security, performance, resilience, and cost-effectiveness, that distinguish IBM Z and LinuxONE will also be available to Arm64 workloads." No shipping date or technical specifications for the anticipated dual-architecture systems have been provided. The comments from both companies reflect their aspirations and intended direction, not products that can currently be purchased.
The hardware under discussion:
The announcement coincides with a hardware landscape IBM has been developing for several years. The IBM z17 mainframe, which became generally available in June 2025, is built around the Telum II processor, featuring eight cores operating at 5.5GHz, 360MB of L2 cache, and a 50% increase in AI inference throughput compared to its predecessor, the z16. IBM claims the z17 can perform more than 450 billion AI inference operations per day. The IBM Spyre Accelerator, which was commercially launched for z17 and LinuxONE 5 systems on October 28, 2025, provides 32 AI-optimized cores per card, with support for int8 and fp16 data types, and up to 1TB of memory across the system, with a maximum power consumption of 75W per card, intended to run large language models on-premises without the latency and data transfer costs associated with cloud-based inference.
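IBM's headline throughput claim can be put into per-second terms with simple arithmetic. The sketch below uses only the figure from the announcement (more than 450 billion inference operations per day); the conversion itself is an illustration, not an IBM benchmark:

```python
# IBM's claim: the z17 can perform more than 450 billion AI
# inference operations per day. Convert to a sustained rate.
inferences_per_day = 450e9
seconds_per_day = 24 * 60 * 60  # 86,400 seconds

per_second = inferences_per_day / seconds_per_day
print(f"{per_second:,.0f} inferences/second")  # → 5,208,333 inferences/second
```

That works out to a sustained rate of roughly 5.2 million inference operations per second, which frames the kind of transaction-scoring workload the Telum II and Spyre hardware is aimed at.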
The collaboration with Arm represents the software layer being built atop this hardware investment. IBM has spent years engineering a mainframe capable of running AI at scale. The open question the partnership seeks to answer is whether the AI software that enterprises want to deploy will actually be available for that system. Given the substantial investment in AI infrastructure in 2026, the answer will shape whether the mainframe serves as a platform for enterprise AI or merely as a data source feeding workloads that run elsewhere.