Cedars-Sinai's AI outperforms expert models in interpreting heart scans.
EchoPrime, published in Nature in February 2026, surpasses both task-specific AI tools and prior foundation models across 23 cardiac benchmarks, and its code, weights, and demo are publicly accessible.
An echocardiogram is among the most frequently used diagnostic methods in cardiology: this ultrasound of the heart shows it in motion, its chambers filling and emptying, and whether its structure is impaired. Accurately interpreting an echocardiogram requires training, time, and a particular kind of spatial attention: the ability to watch moving images of a beating heart and convert them into a clinical narrative.
Researchers at Cedars-Sinai Medical Center, collaborating with colleagues from Kaiser Permanente Northern California, Stanford Health Care, Beth Israel Deaconess Medical Center in Boston, and Chang Gung Memorial Hospital in Taiwan, have developed an AI system that replicates this process.
EchoPrime, a video-based vision-language model, analyzes echocardiogram footage and produces a written report describing cardiac structure and function. The work was published in Nature (volume 650, pages 970-977) in February 2026 under the title “Comprehensive echocardiogram evaluation with view primed vision language AI.”
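The “view primed” phrase in the title refers to the model accounting for which standard echocardiographic view each clip shows before combining information across clips. As a rough illustration of the idea, here is a minimal PyTorch sketch that weights per-clip embeddings by a per-view relevance score and pools them into a single study-level vector; the function names and weighting scheme are illustrative assumptions, not the paper's actual mechanism.

```python
# Illustrative sketch of view-informed pooling: weight each clip's
# embedding by the learned relevance of its echocardiographic view,
# then pool into one study-level representation. Names and the
# weighting scheme are assumptions, not the paper's implementation.
import torch

def aggregate_study(clip_embs: torch.Tensor,      # (N, D) one embedding per clip
                    view_ids: torch.Tensor,       # (N,)   view index per clip (e.g., PLAX, A4C)
                    view_relevance: torch.Tensor  # (V,)   learned score per view
                    ) -> torch.Tensor:
    weights = torch.softmax(view_relevance[view_ids], dim=0)  # (N,) sums to 1
    return (weights.unsqueeze(-1) * clip_embs).sum(dim=0)     # (D,) study vector

# Toy usage: 3 clips, 16-dim embeddings, 5 possible views.
embs = torch.randn(3, 16)
views = torch.tensor([0, 2, 2])
relevance = torch.randn(5)
study_vector = aggregate_study(embs, views, relevance)
```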
What distinguishes EchoPrime is the scale of its training. The model was trained on over 12 million echocardiography videos paired with cardiologists' written interpretations, sourced from 275,442 studies involving 108,913 patients at Cedars-Sinai.
No previous AI model for echocardiography has been trained with this amount of data.
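Pairing every video with the cardiologist's written interpretation points to contrastive pretraining of the kind popularized by CLIP: embeddings of matched video-report pairs are pulled together in a shared space while mismatched pairs are pushed apart. Below is a minimal sketch of a symmetric InfoNCE objective in PyTorch, assuming the video and text embeddings have already been computed; the paper's exact loss and architecture may differ.

```python
# Minimal sketch of a symmetric contrastive (InfoNCE) loss over a batch
# of paired video/report embeddings. This is an assumed stand-in for the
# actual training objective, which the paper may define differently.
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb: torch.Tensor,  # (B, D)
                     text_emb: torch.Tensor,   # (B, D)
                     temperature: float = 0.07) -> torch.Tensor:
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched video-report pairs sit on the diagonal; score both directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```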
What can it accomplish?
Tested in five health systems in the United States and Taiwan, EchoPrime reached state-of-the-art performance on 23 varied benchmarks of cardiac structure and function, surpassing both task-specific AI models built for a single task, such as measuring ejection fraction, and earlier foundation models with broader aims.
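For a sense of how benchmarks like these are commonly scored: a continuous measurement such as ejection fraction is judged by its error against the value in the cardiologist's report, while a binary finding is judged by how well the model's scores discriminate cases from non-cases. A toy sketch with synthetic numbers follows; the paper's actual benchmark suite and metrics may differ.

```python
# Toy evaluation of the kind used for such benchmarks: mean absolute
# error for a continuous measure (ejection fraction) and AUROC for a
# binary finding. All data below is synthetic, for illustration only.
import numpy as np
from sklearn.metrics import mean_absolute_error, roc_auc_score

rng = np.random.default_rng(0)
ef_true = rng.uniform(20, 75, size=200)         # reported EF, in percent
ef_pred = ef_true + rng.normal(0, 4, size=200)  # hypothetical model estimates

finding_true = rng.integers(0, 2, size=200)     # e.g., valve disease present
finding_score = finding_true + rng.normal(0, 0.8, size=200)

print(f"EF MAE: {mean_absolute_error(ef_true, ef_pred):.2f} points")
print(f"AUROC:  {roc_auc_score(finding_true, finding_score):.3f}")
```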
The model's outputs are intended to support clinicians rather than replace them: it produces a written summary for cardiologists to review and act on, not an independent diagnosis.
The research team has publicly released the model's code, weights, and a working demo, in line with a growing trend toward open access in AI research; other institutions can now evaluate EchoPrime on their own patient populations.
The broader context
EchoPrime arrives in a year when ECRI, a healthcare safety organization, has named AI misdiagnosis one of the leading risks to patient safety. That backdrop raises the bar the model must clear.
The objective is not merely an AI that occasionally reads echocardiograms accurately, but one that does so consistently enough to alleviate the workload of cardiologists without creating new types of errors.
Cardiology has proven a fruitful area for AI-assisted diagnostics because its data is structured and plentiful: ultrasound videos, electrocardiograms, and other imaging.
The Cedars-Sinai project is arguably the most comprehensive effort yet to turn this abundance of data into a single general-purpose tool. EchoPrime's path from published model to large-scale clinical deployment will likely depend on regulatory approval, institutional adoption, and questions of liability, none of which the Nature article addresses.
Nevertheless, as a showcase of what is now technically achievable in cardiac AI, it establishes a new benchmark.